The latest issue of Astronomy & Geophysics includes an article by yours truly about the GitHub/.Astronomy Hack Day at the UK’s National Astronomy Meeting in Portsmouth earlier this year.

The projects resulting from hack days are often prototypes, or proof-of-concept ideas that are meant to grow and expand later. Often they are simply written up and shared online for someone else to take on if they wish. This ethos of sharing and openness was evident at the NAM hack day, when people would periodically stand up and shout to the room, asking for anyone with skills in a particular area, or access to specific hardware.

Take a look here: http://astrogeo.oxfordjournals.org/content/55/4/4.15.full?keytype=ref&ijkey=kkvGWSg3ABbIy5S

Martian Nyan Cat

I got into a conversation recently about how some astronomical photos can completely change your perspective on yourself and your place in the Universe. Several images come to mind right away – here are my own favourites:

1. The Milky Way (from a very dark location)

Milky Way

Seeing the night sky from a dark site is something most people rarely do, now that most of us live in cities. The vision of the Milky Way overhead can be startling, and a pair of binoculars makes it even more so, revealing that its delicate structure is made of millions of stars. This long-exposure photo of the dust lanes in our galaxy [1] is the first image that can really change your perspective on yourself and your place in the cosmos.

2. Earthshine on a crescent moon

Young Crescent Moon with Earthshine

When the Moon is just a thin crescent in the evening sky you can often see the rest of its face, dimly lit and slightly reddened. This part of the Moon is not lit directly by the Sun, like the bright crescent, but by sunlight reflected off the Earth from places where the Sun has not yet set. You’re seeing other people’s daylight, bounced back at you from around the world [2][3].

3. Aurora and lightning from the ISS

Sometimes a change in perspective can be quite literal – as with this video of the Earth seen from the International Space Station. The green structures are aurorae – the Northern Lights, over Canada in this case. You can also catch the occasional flash of lightning. This time-lapse is haunting and shows you a view you could probably never otherwise see.

4. M31 compared to a full moon

m31abtpmoon

The Andromeda Galaxy is our nearest large neighbouring galaxy and can be seen as a faint fuzzy patch in the northern sky. What is amazing, though, is to realise that it is actually quite a large object – bigger than our own Moon in our sky. Our eyes just don’t see it very well! Long-exposure images show just how big it really is. Combine this with the fact that it is about 2.5 million light years away [4] and you begin to realise that the galaxy next door is truly enormous. It’s about the same shape, size, and type as our own Milky Way too. So we will look pretty similar to anyone looking up at the sky from a planet in the Andromeda Galaxy.

5. Earth from Saturn (and other places)

PIA17172

There are perhaps no images quite as humbling and perspective-shifting as the set we would probably call the ‘pale blue dots’. These are the small set of images of the Earth taken from far, far away by the robots we have sent out into the Solar System. Voyager 1 took one in 1990 from about 6 billion kilometres away; Cassini has taken more than one from the Saturnian system (like the one above); a few have been taken from Mars too. All of them show the Earth as just a pixel or so across: compressing all of humanity, the world, and all life as we know it into a teeny tiny speck against the cosmos.

6. Orion’s Proplyds

Orion_Nebula_proplyd_atlas

These dark blobs hidden within the star-forming complex of the Orion Nebula are known as proplyds – or protoplanetary disks. These are embryonic solar systems in the making. Each of these blobs is far larger than our own Solar System (they shrink as they evolve into fully formed planetary systems), which gives you some idea of how large the Orion Nebula is in total. We were once shrouded in such a dusty blob ourselves – though long before the Earth formed.

7. The Sloan Great Wall

SUTU_59

The largest surveys of galaxies reveal a structure in the Universe so vast that it is practically beyond comprehension – but let’s try anyway, shall we? The Sloan Great Wall is a filament of galaxies snaking through the Universe, all of which appear to be physically connected to each other – bound by gravity. The ‘wall’ is 1.38 billion light years across. That’s roughly 1/67th of the observable Universe! When light is emitted on one side it doesn’t reach the other end for 1.38 billion years. It is more than 500 times as long as the distance between the Milky Way and Andromeda. I told you it was hard to imagine.

8. Apollo 8 on Christmas Eve 1968

AS8-14-2383HR

I thought it would be good to end on something a little closer to home. On December 24th 1968 astronauts Bill Anders, Jim Lovell, and Frank Borman were the voices heard on one of the most-watched television broadcasts of all time. As they read passages from the Bible’s Book of Genesis, they broadcast a grainy image of the Earth, as seen from the orbit of the Moon. The world watched itself from space for the first time, and saw the Earth as a single marble set against the deep black of space. The image has since been remastered and still represents an era, and a moment in human history, that many find totally perspective-changing. A symbol of a race of beings from a tiny planet, venturing outward to explore space and the worlds beyond their own. Remarkable.


[1] I recently had my first go at some proper astrophotography from a dark site. My target was the Milky Way, and the result was this image of our galaxy’s dust lanes, looking toward the galactic centre. I’m pretty happy with it for a first go.

[2] This effect can also be seen on other moons around other planets and is generically called ‘planetshine’.

[3] This also serves as a good reminder that there is a part of the Moon we never see – the far side – which is lit by the Sun, but just never seen from Earth.

[4] That distance gets smaller all the time, and Andromeda will actually collide with us in about 4 billion years.


Executable papers are a cool idea in research [1]. You take a study, write it up as a paper, and bundle together all your code, scripts, and analysis in such a way that other people can take the ‘paper’ and run it themselves. This has three main attractive features, as I see it:

  1. It provides transparency for other researchers and allows everyone to run through your working to follow along step-by-step.
  2. It allows your peers to give you detailed feedback and ideas for improvements – or do the improvements themselves
  3. It allows others to take your work and try it out on their own data

The main problem is that these don’t really exist ‘in the wild’, and where they do they’re in bespoke formats, even if they’re open source. The IPython Notebook is a great way of doing something very much like an executable paper, for example. Another way would be to bundle up a virtual machine and share a disk image. Executable papers would allow for rapid-turnaround science to happen. For example, let’s imagine that you create a study and use some current data to form a theory or model. You do an analysis and create an executable paper. You store that paper in a library and the library periodically reruns the study when new data become available [2]. The library might be a university library server, or maybe it’s something like the arXiv, ePrints, or GitHub.
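To make that concrete, here is a very rough sketch in Python of what such a rerun loop might look like. Everything in it – the data URL, the file names, the daily schedule – is invented for illustration; a real system would also need provenance, versioning, and sandboxing on top.

```python
# Hypothetical sketch of a library re-running an 'executable paper' whenever
# its input data change. The URL, file names, and schedule are invented.
import hashlib
import subprocess
import time

import requests

DATA_URL = "https://example.org/survey/latest-catalogue.csv"  # made up


def fetch_checksum(url):
    """Download the dataset and return a hash of its contents."""
    response = requests.get(url, timeout=60)
    response.raise_for_status()
    return hashlib.sha256(response.content).hexdigest()


def rerun_paper():
    """Re-execute the bundled analysis notebook and save the refreshed copy."""
    subprocess.run(
        ["jupyter", "nbconvert", "--to", "notebook", "--execute",
         "paper.ipynb", "--output", "paper-latest.ipynb"],
        check=True,
    )


last_checksum = None
while True:
    checksum = fetch_checksum(DATA_URL)
    if checksum != last_checksum:   # new data have appeared
        rerun_paper()
        last_checksum = checksum
    time.sleep(24 * 60 * 60)        # check again tomorrow
```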

This is roughly what happens in some very competitive fields of science already – only with humans. Researchers write papers using simulated data, and the instant they can access the anticipated real data they import, run, and publish. With observations of the Cosmic Microwave Background (CMB), several competing groups are waiting to work on the data – and new data come out very rarely. In fact, the day after the Planck CMB data were released last year, there was a flurry of papers submitted to the arXiv. Those who got in early had likely pre-written much of the work and simply ran their code as soon as they had downloaded and parsed the newly published data.

If executable papers could be left alone to scan the literature for new, useful data then they could also look for new results from each other. A set of executable papers could work together, without planning, to create new hypotheses and new understanding of the world. Whilst one paper crunches new environmental data, processing it into a catalogue, another could use the new catalogue to update climate change models and even automatically publish significant changes or new potential impacts for the economy.

It should be possible to make predictions in executable papers and have them automatically check for certain observational data and automatically republish updated results. So one can imagine a topical astronomy example where the BICEP2 results would be automatically checked against any released Planck data, creating new publications when statistical thresholds are met. Someone should do this if they haven’t already. In this way, papers could continue to further, or verify, our understanding long after publication.
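As a toy illustration of that kind of trigger – nothing BICEP2- or Planck-specific, just the general pattern with invented numbers – an executable paper could carry a test like this and only republish when the threshold is crossed:

```python
# Toy sketch of a self-checking prediction: compare a published model against
# newly released measurements and flag when they disagree significantly.
# All numbers and the 1% threshold are invented for illustration.
import numpy as np
from scipy import stats


def prediction_challenged(predicted, observed, errors, p_threshold=0.01):
    """Chi-square test of the paper's prediction against new data.

    Returns True when the new data are inconsistent with the prediction at
    the chosen significance level, i.e. when it is worth republishing.
    """
    chi2 = np.sum(((observed - predicted) / errors) ** 2)
    dof = len(observed) - 1
    p_value = stats.chi2.sf(chi2, dof)
    return p_value < p_threshold


# Entirely made-up example data:
predicted = np.array([1.0, 2.0, 3.0, 4.0])
observed = np.array([1.1, 2.3, 2.8, 4.4])
errors = np.array([0.1, 0.1, 0.1, 0.1])

if prediction_challenged(predicted, observed, errors):
    print("New data challenge the published result: regenerate and republish.")
else:
    print("New data are consistent with the published result: nothing to do.")
```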

SKA Rendering (Wikimedia Commons)

This is high-frequency science [3], akin to high-frequency trading, and it seems like an interesting approach to some upcoming data-flow issues in science. The Large Hadron Collider (LHC), the Large Synoptic Survey Telescope (LSST), and the Square Kilometre Array (SKA) are all huge scientific instruments set to explore new parts of the universe and gather huge volumes of data to be analysed.

Even the deployment of Zooniverse-scale citizen science cannot get around the fact that instruments like the SKA will create volumes of data that we don’t know what to do with, at a pace we’ve never seen before. I wonder if executable papers, set to scour the SKA servers for new data, could alleviate part of the issue by automatically searching for theorised trends. The papers would be sourced by the whole community and peer-reviewed as is done today, effectively crowdsourcing the hypotheses through publications. This cloud of interconnected, virtual researchers would continuously generate analyses that could be verified by some second peer-review process, since one would expect a great deal of nonsense in such a setup.

When this came up at a meeting the other day, Kevin Page (OeRC) remarked that we might just be describing sensors. In a way he’s right – but these are software sensors, built on the platform and infrastructure of the scientific community. They’re more like advanced tools; a set of ghost researchers, left to think about an idea in perpetuity, in service of the community that created them.

I’ve no idea if I’m describing anything real here – or if it’s just one way of partially automating the process of science. The idea stuck with me and I found myself writing about it to flesh it out – thus this blog post – and wondering how to code something like it. Maybe you have a notion too. If so, get in touch!

———-

[1] But not a new one really. It did come up again at a recent Social Machines meeting though, hence this post.
[2] David De Roure outlined this idea quite casually in a meeting the other day; I’ve no idea if it’s his or just something he’s heard a lot and thought was quite cool.
[3] This phrasing isn’t mine, but as soon as I heard it, I loved it. The whole room got chatting about this very quickly so provenance was lost I’m afraid.

Warning: 400 words of geekery ahead!

I’ve embarked on an extremely nerdy and wonderful new project: a podcast about rewatching Star Trek. Each week we encourage listeners to watch the same episode we have, and then we’ll dissect and discuss it in deliciously geeky detail.

My cohost in this trek beyond the podcasting frontier is friend and fellow Zooniverse workhorse Grant Miller. Grant sometimes fills in for Chris Lintott on our regular astronomy/science series Recycled Electrons. The other week Grant and I ended up talking about Star Trek on the show, and a friend remarked that they’d totally listen to us doing a podcast about Star Trek. We’re easily persuaded by flattery, and so ‘Star Trek: The Rewatch’ was born.

The Rewatch

Our first episode is now up, in which we discuss Encounter at Farpoint – the pilot episode of Star Trek: The Next Generation. We’re hoping that re-watching the show, in order, will make us appreciate it all over again. When you rewatch a TV show like this, you get to enjoy each episode in the context of knowing the series and characters really well, and there’s loads of interesting trivia and back-story that is great to explore. Grant and I are both astrophysics PhDs, so we’re also hoping to bring some serious science talk to the show from time to time. We know lots of experts in various fields from around the University (of Oxford) who we hope can pop in and comment every now and then.

Although I’m obviously looking forward to some of my favourite episodes (e.g. Best of Both Worlds, Tapestry) I’m also keen to see how some of the older, or more obscure, episodes hold up to the ensuing decades and changes in the way we enjoy Sci Fi, and TV in general. This is a podcast that I would totally have listened to – so it’s going to be fun to record it. To be honest, even if no one listens, this is going to be awesome! That’s how geeky I am.

If you loved the adventures of Captain Picard and co. – and want to watch them all over again – then join us! Check out startrek.therewatch.com or find us on iTunes. We’re also to be found on Twitter @StarTrekRewatch and on Facebook too.

Engage!

Yesterday was the Hack Day at the UK National Astronomy Meeting 2014 in Portsmouth. I organised it with my good friend Arfon Smith of GitHub, formerly Zooniverse. We wanted to try and start a new NAM tradition – it went well so maybe we did. I’m psyched that .Astronomy got to help make it happen – not just through my involvement, but the many .Astronomy alumni who attended!
Some of the hack projects have already started to appear online, such as the Martian Nyan Cat created by Geert Barentsen, Jeremy Harwood, and Leigh Smith (Hertfordshire), which flies over the entirety of ESA’s Mars Express data archive in one continuous, two-day-long flight. You can also grab the code for Duncan Forgan’s beautiful ‘Music of the Spheres’ project, which sonifies the rhythms of planetary systems. Other projects are harder to place online, such as Jane Greaves’ knitted galaxy cluster – with dark matter contributed by many people during the hack day itself.

I spent much of the day working with Edward Gomez (LCOGT) on the littleBits Space Kit. littleBits is a modular system of circuits that let anyone try their hand at something that ordinarily requires a soldering iron. littleBits components may be switches, sensors, servos, or anything really, and they connect magnetically to create deceptively simple circuits that can be quite powerful.

IMG_2359

For example you could connect an infrared sensor and an LED to make a device that flashes when you press buttons on your remote. Or you could use a microphone and a digital LCD display to create a sound meter. The littleBits components are sturdy enough to withstand being bashed about a bit, and simple, and large enough, to let you stick them on cardboard, homemade figures, or anything else you find around the house. I found out about littleBits when I met their creator, Ayah Bdeir, at TED in March – she is a fellow TED Fellow.

We decided fairly quickly to try and build an exoplanet simulator of some sort and ended up creating the littleBits Exoplanet Detector (and cup orrery). There were two parts to this: a cup-based orrery, and a transit detector.

The cup orrery consisted of a rotating ‘planetary system’ fashioned from a coffee cup mounted on a simple motor component – we only had hack day supplies to play with – and a central LED ‘star’. Some more cups and stirrers were required to scaffold the system into a stable state but it was soon working.

The transit detector used a light-sensor component that read out to both a speaker and an LCD numerical display – Ed refers to this as the laser display board. With a bit of shielding from the buffet’s handy, black, plastic plates, the light sensor can easily pick up the LED and you can see the light intensity readout varying as the paper planet passes in front of the star. It was awesome. We got very excited when it actually worked!

You might think that was geeky enough, but it gets better. I realised I could use my iPhone 5s – which has a high-frame-rate video mode – to record the model working in slow motion and allow us to better see the digital readout. We also realised that the littleBits speaker component can accept an audio jack, so we could use the phone to feed in a pure tone, which made it much easier to hear the pulsing dips of the transits.

Finally, we had the idea to record this nice, tonal sound output from the detector and create waveforms to see if we could recover any properties about the exoplanets. And sure enough: we can! We built several different coffee-cup planetary systems (including a big planet, small planet, and twin planets) and their different properties are visible in their waveforms. Ed is planning a more rigorous exploration of this at a later date, but you can see and hear the large cup planet’s waveform below.

Waveform for Large Cup Planet
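If you want to dig into a recording like this yourself, here is one way you might pull the transits out of the audio – just a sketch, with a hypothetical file name and arbitrary thresholds, rather than the exact analysis Ed has planned:

```python
# One way to pull the 'transits' out of a recording of the detector's tone:
# compute the amplitude envelope and flag windows where it dips. The file
# name and the 20% dip threshold are hypothetical.
import numpy as np
from scipy.io import wavfile

rate, samples = wavfile.read("large_cup_planet.wav")  # hypothetical recording
samples = samples.astype(float)
if samples.ndim > 1:
    samples = samples.mean(axis=1)     # mix stereo down to mono

# Amplitude envelope: RMS in ~50 ms windows.
window = int(0.05 * rate)
n_windows = len(samples) // window
chunks = samples[: n_windows * window].reshape(n_windows, window)
envelope = np.sqrt((chunks ** 2).mean(axis=1))

# Call anything 20% below the typical level a transit.
in_transit = envelope < 0.8 * np.median(envelope)
times = np.arange(n_windows) * window / rate

n_transits = np.count_nonzero(np.diff(in_transit.astype(int)) == 1)
print(f"Roughly {n_transits} transits detected")
print("Dip times (s):", np.round(times[in_transit], 2))
```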

So if you want to try something like this, you only need the littleBits Space Kit. You can buy them online and I’d love to see more of these kits, and to see them in schools. I’m now totally addicted to the idea myself too!

GitHub Stickers

Thanks to Arfon for suggesting that we do this Hack Day together; to the NAM 2014 Portsmouth team for being so supportive; and to GitHub for sponsoring it – where else would we have gotten all the cups?!

Today is the start of the UK National Astronomy Meeting in Portsmouth. I’ll be there tomorrow, and running the NAM Hack Day on Wednesday with Arfon Smith – which is going to be awesome. Today at NAM, the nation’s astronomers will discuss the case for UK involvement in the Large Synoptic Survey Telescope project – the LSST. The LSST is a huge telescope, and a massive undertaking. It will change astronomy in a profound way.

A photograph and a rendering mix of the exterior LSST building, showing the dome open and road leading away from the site.

With every image it takes, the LSST will be able to record a very large patch of sky (~50 times the size of the full Moon). It will take more than 800 images each night and can image its* entire sky twice a week! Billions of galaxies, stars, and solar system objects will be seen for the first time and monitored over a period of 10 years. Crucially it will use its rapid-imaging power to look for moving or ‘transient’ things in the night sky. It will be an excellent tool for detecting supernovae, asteroids, exoplanets, and more of the things that move from night to night or week to week. For example, the LSST could be used to detect and track potentially hazardous asteroids that might impact the Earth. It will also help us understand dark energy – the mysterious force that appears to be accelerating the expansion of the universe – by mapping the precise locations of billions of galaxies.

I’ve recently become LSST:UK’s Public Data Coordinator – think ‘chief hacker’ if you prefer. The LSST’s unprecedented archive of data will be a resource we can tap into to create new kinds of public outreach tools, data visualisations, and citizen science. In recent years, we at the Zooniverse have pioneered citizen science investigations of data in astronomy**. The citizen science and amateur astronomy communities around the UK, and the world, will be able to access the amazing data that comes out of the LSST both through structured, Zooniverse-style projects and in a more freeform manner. The potential for discovery will be on a scale we haven’t seen before. It’s very exciting.

The LSST is a public-private partnership and is led by the United States. The unique scientific opportunities presented by the LSST have led to the formation of a group of astronomers from more than 30 UK universities. We’ll be asking for funding from the Science and Technology Facilities Council to support UK participation in the project.

Spinnaker Tower from the Gosport Ferry

If you’re at NAM this week, then I’d love to talk about LSST, hacking on data, and the Zooniverse. On Wednesday you’ll find me in the Park Building, at the University of Portsmouth, for the GitHub/.Astronomy NAM 2014 Hack Day. I’ll also be at the GitHub drink up on Tuesday night at The White Swan from 7pm – where you can enjoy some of the finest cask ales, draught beers and wines in Portsmouth – and GitHub are paying! More details at https://github.com/blog/1849-github-meetup-in-portsmouth-uk.

* i.e. the sky visible from its location – not literally the entire sky
** We’ve now had more than 1 million volunteers pass through our digital doors.

Operation War Diary Screenshot

Working at the Zooniverse means that I get to indulge many of my interests beyond astronomy, like history. In January we launched a project in partnership with the Imperial War Museum and the National Archives called Operation War Diary. It’s a ‘citizen history’ site that asks the public to tag and transcribe more than one million pages of war diaries, and other handwritten notes, produced on the Western Front during the First World War. 2014 is the centenary year of the start of the war and we hope that this project will recover information that had been all but lost over the last one hundred years.

The results of this project are starting to appear now. The project is meant to run for several years, but there are already new ways to explore and understand the data, thanks to the efforts of the tens of thousands of people who have taken part in Operation War Diary. As an example, the video below is an animation of the casualties reported in the diaries tagged so far. You can also see this map online at http://cdb.io/1pqB4kp.

This map was created by Operation War Diary developer Jim O’Donnell and it’s not a final map by any means, but it shows the power of crowdsourcing these kinds of tasks. If that all intrigues you, you can get involved here http://www.operationwardiary.org/ – and read more about the project in this blog post. You can follow Operation War Diary on Facebook and Twitter too!

I shared this today, in a presentation, as an example of the many projects created by the Zooniverse team, which could be described as ‘Social Machines’. I work as part of a project called SOCIAM, which is investigating these Social Machines as a research topic.

Social machines were predicted in the early days of the web and are emergent, social entities typified by large groups of people working together online to achieve things that neither the machines nor the human network could achieve alone. Zooniverse projects are a great example of a social machine for scientific discovery, which is why we were invited to join this collaboration.

Dave DeRoure’s ‘classic’ social machines explanation chart

Social machines often involve very large-scale human participation; they may generate large volumes of data; and they may try to solve social or technical problems from the reverse perspective. Wikipedia is a social machine too, for example, as is Reddit, eBay, and Ushahidi.

Operation War Diary and the other Zooniverse projects combine people in a way that can only be achieved through the web, and many of the participants then contribute in new and unexpected ways*, enriching the overall output of the platform. This makes it a notable social machine, and a great citizen science platform.

Sorry – this has been a rather rambling post. So to conclude, here’s a link to many more Operation War Diary maps, produced by Jim, on CartoDB: https://the-zooniverse.cartodb.com.

[* See Spanish Flu in Old Weather, Yellow Balls in MWP, Green Peas in Galaxy Zoo]

A new Milky Way Project paper was published to the arXiv last week. The paper presents Brut, an algorithm trained to identify bubbles in infrared images of the Galaxy.

bubble_gallery_sorted_v2

Brut uses the catalogue of bubbles identified by more than 35,000 citizen scientists in the original Milky Way Project. These bubbles are used as a training set to allow Brut to discover the characteristics of bubbles in images from the Spitzer Space Telescope. This training data gives Brut the ability to identify bubbles just as well as expert astronomers!

The paper then shows how Brut can be used to re-assess the bubbles in the Milky Way Project catalog itself, and it finds that more than 10% of the objects in this catalog are really non-bubble interlopers. Furthermore, Brut is able to discover bubbles missed by previous searches too, usually ones that were hard to see because they are near bright sources.

At first it might seem that Brut removes the need for the Milky Way Project – but the truth is exactly the opposite. This new paper demonstrates a wonderful synergy that can exist between citizen scientists, professional scientists, and machine learning. The example outlined with the Milky Way Project is that citizens can identify patterns that machines cannot detect without training, while machine learning algorithms can use citizen science projects as input training sets, creating amazing new opportunities to speed up the pace of discovery. A hybrid model of machine learning combined with crowdsourced training data from citizen scientists can not only classify large quantities of data, but also address the weaknesses of each approach deployed alone.
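To give a flavour of that hybrid pattern, here is a minimal, illustrative sketch of training a classifier on crowd-sourced labels and applying it to new data. It is emphatically not the Brut code (that is on GitHub); the features, labels, and choice of classifier below are placeholders for the general idea.

```python
# Minimal illustration of the hybrid idea: citizen-science labels as the
# training set for a supervised classifier, which is then applied to unseen
# data. This is NOT the Brut code - the features, labels, and classifier
# below are placeholders for the general pattern.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Pretend each row is a feature vector extracted from an infrared image
# cutout, and each label is 1 where citizen scientists marked a bubble.
rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 20))                      # fake features
labels = (features[:, 0] + features[:, 1] > 0).astype(int)  # fake labels

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)              # learn from the crowd's classifications
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")

# The trained model can then score brand-new cutouts, flagging likely bubbles
# (or likely interlopers in the existing catalogue) for human review.
```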

We’re really happy with this paper, and extremely grateful to Chris Beaumont (the study’s lead author) for his insights into machine learning and the way it can be successfully applied to the Milky Way Project. We will be using a version of Brut for our upcoming analysis of the new Milky Way Project classifications. It may also have implications for other Zooniverse projects.

If you’d like to read the full paper, it is freely available online at the arXiv – and Brut can be found on GitHub.

[Cross-posted on the Milky Way Project blog]

ttfnrob:

New Zooniverse project goes live today and I warn you: it is highly addictive!

Originally posted on Zooniverse:

avatar_sunspotter

A few months ago we quietly placed a new project online. Called Sunspotter, it was essentially a game of hot-or-not for sunspot data – and since there were not many images available at the time, we thought it best to just let it be used by the people who noticed it, or who had tried it during the beta test. The results have since been validated, and the site works! In fact there are even preliminary results, which is all very exciting. Loads of new images have now been prepared, so today Sunspotter gets its proper debut. Try it at www.sunspotter.org.

On the site you are shown two images of sunspot groups and asked which is more complex. That might sound odd at first, but really it’s quite easy. The…


Orbiting Links

May 6, 2014

Screenshot 2014-05-06 22.04.29

I’ve added a new section to Orbiting Frog today: Orbiting Links (http://links.orbitingfrog.com). This new page displays an automated set of URLs currently being shared by the astronomers of Twitter. This is a work in progress, but it seems to be producing good results so far.

Orbiting Links is created by taking a small set of my favourite astro-Tweeters, and following their tweets, and the tweets of the people they follow too. As links are shared, I store them and keep track of how often they are retweeted or posted elsewhere. Those that rise to the top in any 24-hour period are displayed on the page. Each URL that makes it to this page has some details attached to it, including the original tweet that the system spotted it in.

I’m tracking a bunch of my favourite go-to astronomers on Twitter. The accounts they follow are also monitored, up to about 5,000 accounts. It isn’t necessarily those people that will rise to the top here though – but more likely the sources of the links they share. I will continuously modify the list of source accounts, to maximise the usefulness of this page.
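For the curious, the ranking boils down to something like the sketch below. It is written in Python purely for illustration – the real back-end is the PHP OpenFuego code described further down – and the (timestamp, URL) pairs are made up.

```python
# Illustrative sketch of the ranking: count how often each URL is shared in
# the last 24 hours and surface the most-shared ones. The real back-end is
# the PHP OpenFuego code; the (timestamp, url) pairs here are invented.
from collections import Counter
from datetime import datetime, timedelta


def top_links(shares, now=None, window_hours=24, limit=10):
    """shares: iterable of (datetime, url) pairs for observed link shares."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(hours=window_hours)
    recent = (url for when, url in shares if when >= cutoff)
    return Counter(recent).most_common(limit)


now = datetime.utcnow()
shares = [
    (now - timedelta(hours=1), "https://example.org/new-exoplanet"),
    (now - timedelta(hours=2), "https://example.org/new-exoplanet"),
    (now - timedelta(hours=30), "https://example.org/old-story"),
    (now - timedelta(hours=3), "https://example.org/survey-results"),
]
for url, count in top_links(shares):
    print(count, url)
```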

Why Do This?

To find interesting stuff! The topics will vary day-to-day, and sources of interesting links should rise to the top organically. I see this as an alternative news source, delivering material aligned with the interests of my peers on Twitter. It’s an experiment too – and a coding project I’ve been wanting to build for a while now. The source code is on GitHub, forked from the original OpenFuego repo.

Resources Used

This site is built on top of several other projects, many of which I have slightly modified. The back-end is written in PHP and the front-end is HTML+JavaScript.

  • OpenFuego: Created by Andrew Phelps of the Nieman Journalism Lab, OpenFuego is the open-source version of Fuego, a Twitter bot created to track the future-of-journalism crowd and the links they’re sharing.
  • Type & Grids: You can find many amazing website templates on Type & Grids. All of them are responsive and well-commented, and many of them are free.
  • Twitter: Microblogging site Twitter is still one of my favourite things about the web, even after all these years!

Future Development

The current to-do list for this project includes an RSS feed and a Twitter account, which will provide other ways to access the same set of links. If you have ideas for how this project should evolve, please get in touch.