Archive for the ‘visualization’ Category

Crafting the Blue Marble

October 6th, 2011 by Robert Simmon

 
One of the best surprises of my life was turning on my brand-new iPhone—before it had even been activated—glancing down at the screen, and seeing an image I had made. Apple chose the NASA Blue Marble for the default welcome screen and wallpaper, and I had no idea beforehand.

Here’s how I did it.

In 2002 my colleague Reto Stöckli (now back in Switzerland) was working on a global map of the Earth that was going to blow away all previous examples. A new NASA satellite (Terra) was gathering the first color pictures of the entire Earth, and we wanted to showcase the imagery. Reto put together about 10,000 satellite scenes (each file over 300 MB) collected over 100 days, stripped out the clouds, and created a 43,200-pixel by 21,600-pixel map of the Earth (this was the hard part; everything I did afterwards was just adding chrome).

Original land surface Blue Marble image.

Now that we had a source image, we needed to create something evocative, something that would show the potential of the imagery. To us, at least, the obvious choice was to render a few 3D views of the world as it would look from space: echoes of the famous Apollo 17 photograph.

AS17-148-22727

To make the Earth look realistic, or at least how I imagined the Earth would look, I needed to do some work. First of all, the satellite images weren’t usable over deep water (the satellite collects data there, but there’s no automated process to detect clouds and correct for the atmosphere), so I needed to add some color into the water. NASA measures chlorophyll in the ocean (a way of monitoring phytoplankton), so I grabbed a month’s worth of that data, colored it blue and green (I looked at individual satellite images to get a sense of what hues to use), and used that map for the ocean. I also had to add a stand-in for sea ice, since it’s impossible to measure chlorophyll beneath a few meters of snow and ice. At least that was simple: I just replaced missing data near the poles with white. In addition to the sea ice, I brightened and reduced the saturation of Antarctica, which was pasted into the original from a different dataset. The combined ocean color and ice look like this:

Chlorophyll, sea ice, and Antarctica
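
If I were doing something similar today I’d probably script the ocean fill instead of painting it by hand. Here’s a minimal sketch in Python (NumPy and Matplotlib); the file name, color stops, and stretch are placeholders, not the values I actually used:

```python
import numpy as np
from matplotlib.colors import LinearSegmentedColormap

# Hypothetical monthly chlorophyll grid (mg/m^3), NaN where there's no data
# (clouds, sea ice, land).
chlor = np.load("chlorophyll_monthly.npy")

# Low chlorophyll -> deep blue, high chlorophyll -> blue-green; these hues
# are guesses, the real ones came from eyeballing individual ocean scenes.
ocean_cmap = LinearSegmentedColormap.from_list(
    "ocean", ["#000d33", "#003366", "#1a6680", "#338066"])

# Chlorophyll spans several orders of magnitude, so stretch it on a log scale.
scaled = np.log10(np.clip(chlor, 0.01, 10.0))
scaled = (scaled - np.nanmin(scaled)) / (np.nanmax(scaled) - np.nanmin(scaled))
ocean_rgb = ocean_cmap(scaled)[..., :3]

# Stand-in for sea ice: fill missing pixels (here, all of them) with white.
ocean_rgb[np.isnan(chlor)] = 1.0
```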

Throw in a map of clouds stitched together from 200 satellite scenes, and a global topographic map to add some texture in the landscape, and I was ready to bring everything into my 3D software (Electric Image at the time). Wrapping a rectangular image onto a sphere and rendering out images was probably the simplest and fastest part of the entire process. It’s much easier to fine-tune an image with each component of the image rendered separately, so I made individual renders of the land and ocean, specular highlight, clouds, a couple day/night masks, and atmospheric haze (which I never did get quite right). [Click on the image below to download a zip file with each layer as a separate JPEG].
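
The reason the wrap is so painless is that an equirectangular image maps latitude and longitude linearly to pixel column and row, so the 3D software just looks up each point on the sphere in the flat map. A toy illustration in Python (the Baja coordinates are only rough):

```python
import numpy as np

def equirect_uv(lat_deg, lon_deg, width, height):
    """Pixel (column, row) in an equirectangular image for a given lat/lon."""
    u = (lon_deg + 180.0) / 360.0 * (width - 1)
    v = (90.0 - lat_deg) / 180.0 * (height - 1)
    return u, v

def sphere_xyz(lat_deg, lon_deg, radius=1.0):
    """3D position of the same point on a sphere of the given radius."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return (radius * np.cos(lat) * np.cos(lon),
            radius * np.cos(lat) * np.sin(lon),
            radius * np.sin(lat))

# Roughly Baja California (25 N, 112 W) in the 43,200 x 21,600 pixel map:
print(equirect_uv(25.0, -112.0, 43200, 21600))
```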

Compositing separate images into a convincing whole is (of course) easier said than done. Even with control of each layer in my image processing software (Photoshop), it took hours of tweaking and re-tweaking transparency, layer masks, hue, saturation, Gaussian blur, and curves to get an image that looked like the picture I had in my head. This was before adjustment layers were introduced (which would have let me save all the settings), so I have no idea exactly what I did. Making the clouds appear opaque while remaining white, rather than gray, was by far the hardest part. It was also tricky trying to get the atmosphere to appear most transparent in the center, and thicker and bluer near the edges. Looking at the Photoshop file, I’ve got two atmospheric layers and two cloud layers, each set to different levels of transparency, over a combined land and ocean layer, with sunglint (a specular highlight) off Baja California.
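
For anyone who wants to reproduce that kind of layer stack in code, the underlying math is plain “over” compositing; all the craft is in the masks and opacities. A bare-bones NumPy sketch, with invented layer names and opacities (they’re not pulled from my Photoshop file):

```python
import numpy as np

def over(base_rgb, layer_rgb, opacity):
    """Composite a layer over a base; opacity is a per-pixel mask in [0, 1]."""
    a = opacity[..., np.newaxis]
    return layer_rgb * a + base_rgb * (1.0 - a)

# Tiny stand-ins for the real 21,600 x 43,200 renders.
h, w = 4, 8
land_ocean = np.zeros((h, w, 3))                        # combined land and ocean
clouds = np.ones((h, w, 3))                             # cloud render (white)
atmosphere = np.ones((h, w, 3)) * np.array([0.5, 0.7, 1.0])  # pale blue haze

cloud_mask = np.random.rand(h, w)                       # grayscale layer masks
haze_mask = np.random.rand(h, w)

image = over(land_ocean, clouds, cloud_mask * 0.9)      # clouds at ~90% opacity
image = over(image, atmosphere, haze_mask * 0.4)        # haze at ~40% opacity
```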

At the time, I had no idea the Blue Marble would be seen by so many people: it was just a way to show off some cool data. It’s based on the hard work of literally thousands of scientists, engineers, programmers, admin staff, and others here at NASA, especially the MODIS team. I’m glad Steve Jobs seemed to like it, and I’m sad he died so young.

Made on a Mac, of course. Thanks, Steve.

A Few Notes
All the source files are archived on the original Blue Marble page. They’re free to use and modify, but you can’t use them to imply you’re associated with NASA.

We’ve subsequently made a new and improved version of the base maps, the Blue Marble Next Generation. It’s not only twice the resolution (86,400 pixels by 43,200 pixels), but there’s a separate image for each month, so you can see the changing seasons. Reto didn’t make new clouds because it’s a really long and painful process of stitching images together by hand that’s never going to be perfect.

In the existing cloud map some people have noticed a few repeating features that appear photoshopped. They are. There are gaps between orbits near the equator, and there’s no way to fill them with real data. The specular highlight off Baja, and the thickness and fuzziness of the atmosphere, were based on full-disk GOES images. There’s a weird streak in the clouds near Greenland that’s entirely due to an error on my part, and I have no idea why the shading on the east coast of Greenland is incorrect.

Yes, it’s centered on North America: I’ve spent the vast majority of my life here, and I’m biased. I did, however, make a version centered on South Asia at the same time, as well as a rotating Earth centered on the Equator. I’ve subsequently done a few more versions, including the Pacific Ocean. I’m still not happy with the shading of the atmosphere—anyone know how to simulate Rayleigh scattering in Maya?

Visualization Secrets

August 22nd, 2011 by Robert Simmon

“… complex datasets require complex visualizations. In general though, simpler is usually the best way to go in the sense that you should make it as easy as possible for a reader to understand what’s going on. You’re the storyteller, so it’s your job to tell them what’s interesting.”

—Nathan Yau, author of Visualize This: The Flowing Data Guide to Design, Visualization, and Statistics and the Flowing Data blog (that I should read more frequently). Found on SmartPlanet.

See Something or Say Something: Washington, DC

August 8th, 2011 by Robert Simmon

Every visualization blog on the planet has already posted one or two of these, but they’re awesome, so here is what Washington, DC looks like via Tweets (blue) and photos posted to Flickr (orange). White areas have both tweets and photos.

See Something, Say Something: Washington, DC

By Eric Fischer.

Unsurprisingly, tourist areas are dominated by photos, residential areas by tweets. More here: See something or say something. Visualization by Eric Fischer. H/T to Visual Complexity.

Map Projections Matter

February 24th, 2011 by Robert Simmon

A few weeks ago I stumbled on this headline and image from the UK Daily Mail Online:

World of two halves! Map shows most of Northern Hemisphere is covered in snow and ice.
Global cylindrical equirectangular map of snow and ice.

Most of the Northern Hemisphere was covered in snow and ice a few weeks ago? (The image dates from late January/early February—I couldn’t find the exact date.) Really? At first glance it’s a plausible claim, but there’s a problem. The map is in a cylindrical equirectangular projection, which distorts relative areas—regions far from the equator appear larger on the map than they really are. The higher the latitude, the larger the exaggeration. As a result, a much higher percentage of the Earth’s surface appears to be covered in snow or ice than really is.
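
The distortion is easy to quantify: every row of an equirectangular grid has the same number of pixels, but the true area a pixel covers shrinks with the cosine of its latitude. Here’s a small Python sketch comparing naive pixel counting to area-weighted counting; the snow/ice mask is a random placeholder, not the NOAA data:

```python
import numpy as np

# Placeholder snow/ice mask on a 0.1-degree equirectangular grid.
snow = np.random.rand(1800, 3600) > 0.7
nrows, ncols = snow.shape
lat = np.linspace(90, -90, nrows)                  # approximate row latitudes
weights = np.cos(np.radians(lat))[:, np.newaxis] * np.ones((1, ncols))

north = lat > 0                                    # Northern Hemisphere rows
naive = snow[north].mean()                         # fraction of map pixels
weighted = np.average(snow[north].astype(float), weights=weights[north])

# With real data (snow concentrated at high latitudes) the area-weighted
# fraction comes out far lower than the raw pixel fraction.
print(f"pixel fraction: {naive:.2f}, area-weighted fraction: {weighted:.2f}")
```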

After transforming the map to an equal-area projection (in this case Mollweide, which also preserves straight lines of latitude) it’s obvious that most of the Northern Hemisphere remains snow and ice free, even in mid-winter:

Global map of snow and ice in the Mollweide projection.

A map showing just the Northern Hemisphere (azimuthal equal area, centered on the North Pole) makes it even clearer:

Northern Hemisphere map of snow and ice in an azimuthal equal area projection.

For maps of measured quantities on the Earth’s surface (like snow, temperature, rainfall, or vegetation) it’s important to choose a projection carefully, to minimize misunderstandings of the underlying data. It’s far too easy for a map to exaggerate one area at the expense of another. It’s also important to keep projections consistent when displaying a time series, or comparing datasets to one another.

Despite the major flaw of not being equal area, cylindrical equirectangular (which goes by many other names) is very useful: it’s the standard projection for importing into a 3D program and wrapping around a sphere, and it’s easy to define the corner points and scale for import into software to transform to other map projections. I did all the reprojections with the excellent tool G.Projector, which I’ve written about before.
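
G.Projector is a point-and-click tool; if you’d rather script a reprojection like this, a library such as cartopy (not what I used; the file names are made up) can do it in a few lines:

```python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

# Equirectangular source image spanning the whole globe.
img = plt.imread("snow_ice_equirectangular.png")

ax = plt.axes(projection=ccrs.Mollweide())
ax.imshow(img, transform=ccrs.PlateCarree(),   # tell cartopy the source projection
          extent=[-180, 180, -90, 90])
ax.coastlines(linewidth=0.5)
plt.savefig("snow_ice_mollweide.png", dpi=200, bbox_inches="tight")
```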

For more information about map projections, see the USGS page Map Projections, the National Atlas’ Map Projections: From Spherical Earth to Flat Map, and the Wolfram Mathworld Map Projection site. For an in-depth discussion, read Map Projections—A Working Manual, (PDF) also from the USGS.

(As far as I can tell, the snow and ice map was originally from the NOAA Environmental Visualization Laboratory. Unfortunately, I couldn’t find archived images on their site, so I had to use the original low resolution and highly compressed image from the Daily Mail.)

What Not To Do: Vertical Exaggeration

November 5th, 2010 by Robert Simmon

One of my (many) pet peeves in data visualization is vertical exaggeration. For example, here’s a 3D rendered view (from the south looking north) of Mount Etna:

3D topographic image of Mount Etna.

Image ©2010 Infoterra.

Compared to the real thing, photographed from the International Space Station (from the north looking south):

Photograph of Mount Etna from the International Space Station.

Astronaut photograph ISS006-E-31042.

The 3D view is scaled so the volcano appears much higher than it does in real life—perhaps four or five times higher—but it’s impossible to tell since the caption doesn’t say. My big problem with this is that Etna looks like a classic, steep-sided stratovolcano (like Mount Fuji), rather than a complex mountain formed from a combination of viscous lavas (typical of stratovolcanoes), fluid lavas (typical of shield volcanoes like the Hawaiian Islands), and collapses (like Mount St. Helens).

At least it’s not as bad as the infamous image of Maat Mons on Venus, which has a staggering vertical exaggeration of 22.5 times (Maat Mons is actually shaped more like a wad of gum on the sidewalk than like Mount Rainier):

3D image of Venus's Maat Mons.

Venus’s Maat Mons, vertically exaggerated 22.5 times! Image courtesy NASA/JPL.

Why does it matter? Because topography gives clues to the underlying geology and processes that form a landscape. For example, the angle of the walls of the Grand Canyon is determined by the rock type: hard rocks form cliffs, soft rocks form slopes. The vertical walls of Yosemite are composed of hard granite that resisted the erosion of ice age glaciers.

The reason for the shape of mountains on Venus is perhaps even more interesting. It’s so hot on the surface (465°C) that most of the rocks creep: over time solid materials deform under their own weight. This limits the height of mountains on Venus, and ensures that steep slopes—if they exist at all—are extremely rare (and likely indicative of active tectonics). The vertical exaggeration used in images of Venus’ surface obscures some of the fundamental processes that shaped the planet.

One final note: If you absolutely have to use vertical exaggeration, at least indicate that fact in the caption, or even better, include a scale on the image itself.
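
In scripted renderings it’s easy to make the vertical scale explicit. Here’s a Matplotlib sketch that uses a made-up cone roughly the size of Etna in place of a real elevation model; setting the box aspect from the real-world extents keeps the plot from silently stretching the vertical axis:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder DEM: a smooth ~3,300 m cone about 40 km across (Etna-ish).
n = 200
x = np.linspace(0, 40_000, n)   # meters
y = np.linspace(0, 40_000, n)
X, Y = np.meshgrid(x, y)
Z = np.maximum(0.0, 3300.0 * (1 - np.hypot(X - 20_000, Y - 20_000) / 20_000))

exaggeration = 1.0  # 1.0 = true scale; anything else belongs in the caption

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="terrain", linewidth=0)
# Box aspect in real-world units, so 1 m vertical = 1 m horizontal.
ax.set_box_aspect((np.ptp(x), np.ptp(y), np.ptp(Z) * exaggeration))
plt.savefig("etna_true_scale.png", dpi=150)
```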

[Credit where credit is due: I first learned of Maat Mons from one of Edward Tufte's lectures on data visualization. Tufte himself cited the “Flat-Venus Society” and NASA's David Morrison.]

Cynthia Brewer

August 17th, 2010 by Robert Simmon

Whenever I invite someone to talk at our monthly “education and outreach” colloquia I seem to be out of town when the talk is scheduled. Sure enough, last Wednesday when Cynthia Brewer was here (at Goddard Space Flight Center, near Washington, DC) I was in Los Angeles.

Dr. Brewer is a geography professor at Penn State (as well as an author of several books on design and cartography: Designed Maps: A Sourcebook for GIS Users and Designing Better Maps: A Guide for GIS Users) specializing in research on effective map design, especially the use of color in maps. Since we make a lot of maps, ColorBrewer, her on-line tool for selecting color schemes, has been invaluable. By all accounts she gave an excellent talk; I’m sorry I missed it.

ColorBrewer screenshot

A screen shot of ColorBrewer.

Imaging Ash

August 9th, 2010 by Robert Simmon

The Elegant Figures blog will be a place for me (Robert Simmon, the Earth Observatory’s lead visualizer) to talk about some of the data visualization and information design we do on the Earth Observatory. I’m going to kick things off with the description of an image we made in May that showed ash from the Eyjafjallajökull Volcano using data from two satellites. The image benefits from a little science backstory, but feel free to skip down three paragraphs if you’re only interested in the infovis aspects.

For several days in April of 2010, air traffic in Europe was almost completely shut down by ash from Iceland’s Eyjafjallajökull volcano. The widespread flight cancellations weren’t caused by ash filling the skies all over Europe, but by uncertainty in the ash’s location. Without knowing exactly where the ash was, air traffic controllers couldn’t risk allowing passenger flights to embark.

Eyjafjallajökull erupting on May 18, 2010.

Currently, ash forecasts are based on computer models that predict the movement of volcanic ash from its observed location and altitude, combined with wind speed and direction. In the case of Eyjafjallajökull, the initial location of the ash was known (the volcano’s summit), but not the altitude. This initial uncertainty grew as the ash blew towards Europe, dispersing and moving up and down in the atmosphere. The only way to constrain the computer model’s forecasts is to observe the ash. That’s difficult to do from the ground: AERONET, a network of instruments that measure aerosols (small atmospheric particles; volcanic ash is one type), is spread too sparsely to enable precise predictions. It’s also difficult and dangerous to sample the ash directly from aircraft. However, flying in space far above the ash, some satellites can track it even as it spreads over long distances. Forecasters can use measurements from these satellites to improve predictions of ash movement, reducing the amount of airspace closed to flights.

The CALIPSO satellite (the acronym is a bit more memorable & succinct than its formal name: the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation) uses reflected light from a laser beam to measure cloud and aerosol particles in the atmosphere. Critically for ash forecasting, it measures both the location and altitude of particles. It can also see ash within clouds. There’s one catch: CALIPSO only measures aerosols in a line directly underneath the satellite.

On the Earth Observatory, we wanted to show CALIPSO data to complement the large number of visible light, photo-like images we’d acquired, and to advertise the data, which is considered experimental. The nature of the data presented an interesting visualization challenge: it’s a two-dimensional curtain stretching from the Earth’s surface to the stratosphere, along the satellite ground track. Simply displaying the data by itself isn’t very informative:

Vertical profile of ash from Eyjafjallajökull on May 16, 2010.

The data need context: a map of the area (just off the west coast of Ireland) & the location of the ash at the time CALIPSO passed overhead. Unfortunately, the data were acquired at night—so we couldn’t simply use a natural-color image. During the day ash is often pretty easy to spot:

Eyjafjallajökull ash mixed with clouds in the skies above Germany, April 16, 2010.

Our eyes can pick out the gray or brown plume, even if it’s mixed with clouds (at least if some of the ash is above the clouds). At night, however, satellites observe clouds with thermal infrared data, which is essentially a measure of temperature. Volcanic ash and clouds (at the same altitude) will almost always have the same temperature, and will look the same in thermal infrared imagery. In addition, a thin ash plume may be completely invisible in thermal infrared wavelengths (data from MODIS, or more formally the Moderate Resolution Imaging Spectroradiometer):

Thermal infrared image (inverted: cold clouds are white) of Eyjafjallajökull ash near Ireland and the U.K.

Ash, however, emits thermal infrared radiation slightly differently than water, so two images taken at different wavelengths (in this case 11µm and 12µm—both in the thermal infrared) will appear different from each other where there’s ash, but not where there are clouds. By subtracting the 11µm and 12µm images from one another, you end up with an image that shows ash. Sort of:

Split window image of ash from Eyjafjallajökull.

It isn’t perfect, but the technique (called a split window) gives a qualitative picture of ash distribution.
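
For the curious, the arithmetic behind the split window is just a per-pixel difference of brightness temperatures. A minimal sketch, assuming two co-registered MODIS brightness temperature grids in kelvin (the file names and the display stretch are placeholders):

```python
import numpy as np

# Hypothetical co-registered brightness temperature grids, in kelvin.
bt11 = np.load("modis_bt_11um.npy")
bt12 = np.load("modis_bt_12um.npy")

# Water and ice clouds emit nearly the same at 11 and 12 microns, so their
# difference stays close to zero; silicate ash pushes it the other way.
split = bt12 - bt11

# Stretch a fixed range (here roughly -2 K to +4 K) to 0-1 for display,
# so ash ends up at the bright end of the grayscale image.
display = np.clip((split + 2.0) / 6.0, 0.0, 1.0)
```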

By itself a split-window image isn’t very informative: for the most part ash appears slightly lighter than the background, whether it is ocean, cloud, or land. We needed to come up with a trick to better distinguish ash from the background. Our first attempt placed the split window image (12µm minus 11µm) in the red channel, with the original 11µm and 12µm bands in the green and blue channels. Next we combined two different split windows (using different wavelengths) and an inverted thermal infrared channel (so clouds would at least be lighter than water). Neither worked:

First split window technique
Second split window composite

2nd and 3rd attempts at visualizing the split window.

Both were unattractive and (even worse) neither showed the ash particularly well. They’re understandable if you’ve spent a career analyzing satellite images, but not if you’re a novice. The image needed to be somewhat familiar, and the ash needed to clearly stand out from the background. I ended up using an image compositing technique. With Photoshop (this would work in any good photo-editing program) I combined the split window image (which showed the ash) with an inverted copy of one of the thermal infrared channels using a layer mask. With a layer mask, bright areas in a grayscale image will be opaque, and dark areas will be transparent. By assigning the layer mask to a solid yellow image layer, the ash appeared as yellow areas on a background similar to the satellite images shown on every TV weather forecast:

Final image combining the ash with thermal infrared data.
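
Outside of Photoshop, the same layer-mask trick is a couple of lines of array math. A rough NumPy equivalent, using random stand-ins for the real split-window and thermal infrared images:

```python
import numpy as np

h, w = 400, 600
bt11_norm = np.random.rand(h, w)        # stand-in for the scaled 11 micron band
ash_mask = np.random.rand(h, w) ** 4    # stand-in for the stretched split window

background = 1.0 - bt11_norm            # inverted IR: cold clouds appear white
yellow = np.array([1.0, 0.9, 0.0])      # the highlight color

bg_rgb = np.dstack([background] * 3)
m = ash_mask[..., np.newaxis]           # bright mask = opaque, dark = transparent
image = bg_rgb * (1.0 - m) + yellow * m # same math as the Photoshop layer mask
```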

The ash stands out from the background because the eye picks out the yellow areas before the image even reaches the conscious brain. This is known as pre-attentive processing, which I learned about in Colin Ware’s book Information Visualization: Perception for Design.

Here’s the final image, combined with the CALIPSO data showing the vertical profile of the ash:

CALIPSO ash profile and MODIS split window