Archive for November, 2016

What on Earth is an Anaglyph?

November 21st, 2016 by Abbey Nastan, MISR team at JPL

The science team behind the Multi-angle Imaging SpectroRadiometer (MISR) on NASA’s Terra satellite frequently publishes special images called stereo anaglyphs. For example, you might have seen our recent series of anaglyphs celebrating the centennial of the National Park Service. But what exactly is an anaglyph, and how is one made from MISR data?


All methods of viewing images in three dimensions rely on the fact that our two eyes see things at slightly different angles; this is what gives us depth perception. As a simple demonstration, hold a finger at a short distance from your face, and close one eye at a time. You will notice that your finger appears to be in a different place with each eye. The horizontal distance between the two versions of your finger is called the parallax. Your brain interprets the amount of parallax to tell you how far away your finger is from your face—the greater the parallax, the closer your finger!
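The finger demonstration reduces to simple triangle geometry, and a short sketch makes the "greater parallax means closer" rule concrete. This is a hedged illustration of my own; the function name and the roughly 6.5 cm eye separation are assumptions for the example, not figures from the post:

```python
import math

def distance_from_parallax(baseline_m, parallax_deg):
    """Rough distance to an object, given the angular parallax
    between two viewpoints separated by baseline_m.
    Simple triangle geometry: distance = baseline / tan(parallax)."""
    return baseline_m / math.tan(math.radians(parallax_deg))

# Human eyes sit roughly 6.5 cm apart (an assumed average).
near = distance_from_parallax(0.065, 20.0)  # larger parallax
far = distance_from_parallax(0.065, 10.0)   # smaller parallax
```

Just as the paragraph says, the object with the larger parallax comes out closer: `near` is smaller than `far`.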

However, your brain can also be tricked into thinking that a perfectly flat picture is actually a three-dimensional object by presenting each eye with a slightly different version of the picture. The first 3D viewing technology was the stereoscope, invented by Sir Charles Wheatstone in 1838. The stereoscope takes two images viewed from slightly different angles and mounts them next to each other. The photos are viewed through fixed lenses that fool the brain into thinking it is looking at a single picture. Stereoscopes worked well, but their major drawback was that they could only be used by one person at a time.

In 1858, Joseph D’Almeida, a French physics professor, invented a method of showing stereoscopic images to many people at once using a lantern projector equipped with red and blue filters; the viewers wore red and blue goggles. Later, Louis Ducos du Hauron adapted this technique so that anaglyphs could be printed and viewed on paper. In 1889, William Friese-Greene created the first anaglyphic motion picture. These early 3D movies, nicknamed “plastigrams,” were very popular by the 1920s.

At the most basic level, anaglyphs work by superimposing images taken from two angles. The two images are printed in different colors, usually red and cyan, and the viewer wears glasses with lenses in the same colors, which filter out the unwanted image for each eye. So, if the image for the right eye is printed in red, it can be seen through the cyan lens placed over the right eye, but not through the red lens over the left eye, and vice versa. The brain, presented with a different picture for each eye, interprets what it sees as a three-dimensional scene.
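In software, this superimposition amounts to channel mixing: one view supplies the red channel and the other supplies green and blue (which together make cyan). Here is a minimal sketch in Python with NumPy; the function name is mine, and which eye sees which color depends on the glasses, as described above:

```python
import numpy as np

def make_anaglyph(red_view, cyan_view):
    """Combine two views of the same scene into a red-cyan anaglyph.

    red_view, cyan_view: uint8 RGB arrays of shape (H, W, 3),
    captured from slightly different angles. One view contributes
    only its red channel; the other contributes green and blue.
    """
    anaglyph = np.empty_like(red_view)
    anaglyph[..., 0] = red_view[..., 0]     # red channel from one view
    anaglyph[..., 1:] = cyan_view[..., 1:]  # green + blue from the other
    return anaglyph
```

Production anaglyphs usually also balance brightness and color between the two channels, which this sketch omits.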



The MISR instrument can be used to make anaglyphs because it has nine cameras, each fixed at a different viewing angle. As MISR passes over a particular feature on Earth, it captures nine images spanning a range of about 140 degrees (diagram above). Any two of these images can be combined to make an anaglyph. The greater the angular difference between the images, the stronger the resulting 3D effect; if the angular difference is too great, however, the brain will be unable to fuse the images.
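The same viewing-angle differences also let scientists turn parallax into heights. As a hedged sketch of the basic stereo geometry only (textbook flat-Earth parallax, not MISR's actual retrieval algorithm; the function and numbers are illustrative):

```python
import math

def height_from_parallax(parallax_m, fore_deg, aft_deg):
    """Height of a feature from its along-track parallax between
    two cameras viewing fore_deg and aft_deg away from nadir.
    Flat-Earth geometry: each view shifts the apparent ground
    position of a feature of height h by h * tan(view angle),
    so total parallax = h * (tan(fore) + tan(aft))."""
    return parallax_m / (math.tan(math.radians(fore_deg)) +
                         math.tan(math.radians(aft_deg)))
```

Under this simplification, a cloud displaced by 2 kilometers between two cameras pointed 45 degrees fore and aft of nadir would sit at about 1 kilometer altitude; a wider angular separation yields more parallax per unit of height, which is why larger camera pairs give a stronger 3D effect.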

Anaglyphs made with MISR must be rotated so that the north-south direction is roughly horizontal. Though this is inconvenient — we are used to viewing the Earth with north at the top — it is necessary because Terra flies from north to south, and MISR’s cameras are aligned to take images along that track. Therefore, the angular difference between the images is in the north-south direction. Since our eyes are arranged horizontally, the angular difference between the anaglyph images must be horizontal as well.

You can see this by comparing two versions of an anaglyph of Denali, Alaska (below). In the version with north upwards (left), the 3D effect does not work. But when the image is rotated so that north is to the left, suddenly the mountains pop out.


Anaglyphs are useful for science because they allow us to intuitively understand the three-dimensional structure of things like hurricanes and smoke plumes. For example, examine the three-panel image of Typhoon Nepartak below. (All three images have been rotated so north is to the left).


In the top, single-angle image, the eye of the storm appears to be quite deep due to the shadows, but otherwise it is difficult to determine how high the clouds are. Compare this to the middle image, which shows the results from MISR’s cloud top height product; it uses a computer algorithm to compare the data from multiple cameras and determine the geometry of the clouds. Now we can tell that clouds in the central part of the storm are very high (except for the eye), while the spiral cloud bands are slightly lower and there are very low clouds between the arms. However, understanding this data set requires us to interpret the color key and have at least a rudimentary idea of how 16 kilometers compares to 4 kilometers.

Now put on your red-blue glasses* and look at the anaglyph in the third image. Our brains grasp all of the features immediately. While it takes a few minutes (or paragraphs) of explanation to introduce a first-time viewer to MISR datasets, the red-blue glasses make it possible to offer the same experience with a simple image. This is why anaglyphs are great tools for scientists, as well as for sharing unique views of Earth’s features with the public.

Editor’s note: If you don’t have a pair of red-blue glasses, this page lists companies that sell them. Or if you can find some red and blue plastic wrap, you can make your own. The instructions are here.


November 2016 Puzzler

November 15th, 2016 by Adam Voiland


Every month on Earth Matters, we offer a puzzling satellite image. The November 2016 puzzler is above. Your challenge is to use the comments section to tell us what part of the world we are looking at, when the image was acquired, what the image shows, and why the scene is interesting.

How to answer. Your answer can be a few words or several paragraphs. (Try to keep it shorter than 200 words). You might simply tell us what part of the world an image shows. Or you can dig deeper and explain what satellite and instrument produced the image, what spectral bands were used to create it, or what is compelling about some obscure speck in the far corner of an image. If you think something is interesting or noteworthy, tell us about it.

The prize. We can’t offer prize money or a trip to Mars, but we can promise you credit and glory. Well, maybe just credit. Roughly one week after a puzzler image appears on this blog, we will post an annotated and captioned version as our Image of the Day. After we post the answer, we will acknowledge the person who was first to correctly ID the image at the bottom of this blog post. We may also recognize certain readers who offer the most interesting tidbits of information about the geological, meteorological, or human processes that have played a role in molding the landscape. Please include your preferred name or alias with your comment. If you work for or attend an institution that you want us to recognize, please mention that as well.

Recent winners. If you’ve won the puzzler in the last few months or work in geospatial imaging, please sit on your hands for at least a day to give others a chance to play.

Releasing Comments. Savvy readers have solved some of our puzzlers after only a few minutes or hours. To give more people a chance to play, we may wait 24 to 48 hours before posting the answers we receive in the comment thread.

Good luck!

Editor’s Note: Congratulations to James Varghese, David, and John Dierks for being some of the first readers to solve the puzzler on Earth Matters and Facebook. See a labeled version of the November puzzler with a more detailed discussion of El Jorf and the qanats here.


Photo by Adam Voiland. Taken from Rodeo Beach in Marin County, California, in the evening on December 14, 2015.

If you have ever stood on a beach before sunset and gawked at a gleaming line of light extending toward the horizon, you have seen a glitter path.

What causes this spectacular optical phenomenon? Glitter paths are made up of many bright points of light reflecting off tiny ripples, waves, and undulations on the water surface and back at a sensor (for instance, a camera or human eye). Together, these points of light make up areas of sunglint. The appearance of glitter paths on water varies depending on the height of the Sun above the horizon, the height of the surface waves, and the position of the observer.
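The dependence on Sun height and observer position can be sketched with simple 2D reflection geometry. This is my own simplified illustration, not a model used in the post: a water facet sends the Sun's reflection toward the eye only when its surface normal bisects the Sun and viewing directions, so the tilt required at each distance determines where glints can appear.

```python
import math

def facet_tilt_needed(sun_elev_deg, eye_height_m, dist_m):
    """Tilt (degrees) a water facet at horizontal distance dist_m
    must have to reflect the Sun (elevation sun_elev_deg) into an
    eye at height eye_height_m, in a 2D vertical-plane model.
    For flat water the reflection appears where the viewing
    depression angle equals the Sun's elevation; elsewhere a
    facet must tilt by half the angular mismatch."""
    view_depression_deg = math.degrees(math.atan2(eye_height_m, dist_m))
    return (sun_elev_deg - view_depression_deg) / 2.0
```

Wherever the required tilt falls within the range of slopes the waves actually provide, the water glints. Rougher water supplies a wider range of slopes and thus a wider glitter path, consistent with the foam-filled surf zone described below.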

For instance, if the Sun had been directly overhead and above perfectly calm water, I would have seen a circular reflection that looked much like the Sun in the sky. However, when I took this photograph from Rodeo Beach near sunset in December 2015, the reflection appeared as a long elliptical line of light because the Sun was quite low in the sky. Note how much the roughness of the water surface affected the glitter path. Near the shore, where waves had just broken and the water surface was filled with foam, the glitter path was significantly wider than it was in the smoother waters farther off shore.


NASA image acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) on the Terra satellite on the morning of June 22, 2015.

Sunglint is visible from space as well. In the satellite image of California above, which the Terra satellite captured on the morning of June 22, 2015, notice the line of light running over the Pacific Ocean. This line of sunglint traces the track of Terra’s orbit. If the ocean surface had been completely smooth, a sequence of perfect reflections of the Sun would have appeared in a line along the track of the satellite’s orbit. In reality, ocean surfaces are chaotic and constantly in motion due to the churn of waves and winds. As a result, light reflecting off the surface was scattered in many directions, leaving the blurred, washed-out line of light along the satellite’s orbital track that you see here.

You can learn more about sunglint and glitter paths from stories by Darryn Schneider, Atmospheric Optics, Joseph Shaw, Richard Fleet, NASA Earth Observatory, and Earth Science Picture of the Day.

Editor’s Note: Ground to Space is a recurring series of posts on NASA Earth Observatory’s Earth Matters blog that pairs ground photography and satellite imagery of the same feature or phenomenon. If you have a photograph that you think would be a good candidate, please email Adam Voiland.