Archive for the ‘Earth Indicator’ Category

Levels of particulate pollution rose above 130 micrograms per cubic meter in Salt Lake City on January 23, 2013. That’s more than three times the federal clean-air limit, according to the U.S. Environmental Protection Agency. Or, as the Associated Press put it, roughly equivalent to Los Angeles air on a bad day.

“The Salt Lake Valley has some of the worst wintertime pollution episodes in the West due to the combination of its increasing population and urbanization, as well as its geometry, with the valley sandwiched between mountain ranges to the west and east,”  University of Utah atmospheric scientist John Horel explained in a recent story published by the National Center for Atmospheric Research (NCAR).

Normally temperatures decline with altitude, but meteorological conditions across the Great Basin set up a situation in which the opposite occurred for a prolonged period. For much of January, temperatures increased with altitude, creating what’s known as a temperature inversion. During the peak of the inversion, temperatures were -15.5ºC (4.1ºF) at the surface and 7.6ºC (45.7ºF) at 2,130 meters (6,988 feet), University of Utah meteorologist Jim Steenburgh reported on his blog.
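The defining signature of an inversion — warmer air sitting above colder air — is easy to spot in a temperature profile. A minimal sketch in Python, using the temperatures reported above (the ~1,300-meter valley-floor altitude is an assumption for illustration, as is the `has_inversion` helper itself):

```python
def has_inversion(profile):
    """Return True if temperature increases with altitude anywhere in
    the profile, i.e., a temperature inversion is present.

    profile: list of (altitude_m, temp_c) pairs, sorted by altitude.
    """
    return any(upper[1] > lower[1]
               for lower, upper in zip(profile, profile[1:]))

# Values reported during the peak of the January inversion; the
# ~1,300 m valley-floor altitude is assumed for illustration.
january_peak = [(1300, -15.5), (2130, 7.6)]
typical_day = [(1300, 7.6), (2130, -15.5)]  # temperature falls with height

print(has_inversion(january_peak))  # True
print(has_inversion(typical_day))   # False
```

Real soundings have many more levels, but the same comparison applies at each step up the profile.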

The result? The layer of warm air on top functioned like a lid, trapping pollutants in the valleys and preventing winds from dispersing the haze. The photograph above, taken by Roland Li on January 19, shows the sharp boundary an inversion can create.

Air quality problems caused by temperature inversions are not unique to Salt Lake City. Cities in California (including Bakersfield, Fresno, Hanford, Los Angeles, and Modesto) and Pittsburgh, Pennsylvania, have some of the most severe short-term particle pollution problems in the nation, according to a list published by the American Lung Association.

Such events are a reminder that the United States is not immune to dangerous spikes in particulate pollution, though particulate levels don’t get as high as they do in some other parts of the world. (See this map of fine particulates for a global perspective.) Eastern China, for example, recently suffered through an extreme pollution event that was far more potent than the Salt Lake City event.

More:
Utah Department of Environmental Quality: About Inversions

Salt Lake City Office of the National Weather Service: What Are Weather Inversions?
Jeff Masters: Dangerous Air Pollution Episode in Utah

On September 16, 2012, the extent of sea ice in the Arctic Ocean dropped to 3.41 million square kilometers (1.32 million square miles). The National Snow and Ice Data Center (NSIDC) issued a preliminary announcement on September 19 noting that it was likely the minimum extent for the year and the lowest extent observed in the 33-year satellite record.

This year’s Arctic sea ice minimum was 760,000 square kilometers (290,000 square miles) below the previous record set on September 18, 2007. (For a sense of scale, that’s an ice-area loss larger than the state of Texas.)

The previous record minimum, set in 2007, was 4.17 million square kilometers, or 1.61 million square miles. The September 2012 minimum was 18 percent below the 2007 minimum, and 49 percent below the 1979–2000 average minimum.
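Those percentages follow directly from the reported extents. A quick check of the arithmetic (all figures in millions of square kilometers, taken from the numbers above):

```python
minimum_2012 = 3.41        # September 16, 2012
minimum_2007 = 4.17        # September 18, 2007 (previous record)
average_1979_2000 = 6.70   # average 1979-2000 minimum

gap = minimum_2007 - minimum_2012
below_2007 = gap / minimum_2007
below_average = (average_1979_2000 - minimum_2012) / average_1979_2000

print(f"{gap:.2f} million sq km below the 2007 record")    # 0.76
print(f"{below_2007:.0%} below the 2007 minimum")          # 18%
print(f"{below_average:.0%} below the 1979-2000 average")  # 49%
```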

Graph courtesy National Snow and Ice Data Center.

Arctic sea ice typically reaches its minimum extent in mid-September, then begins increasing through the Northern Hemisphere fall and winter. The maximum extent usually occurs in March. Since the last maximum on March 20, 2012, the Arctic has lost a total of 11.83 million square kilometers (4.57 million square miles) of sea ice.

NSIDC uses a five-day running average for sea ice extent, and calculations of sea ice extent have been refined from those of previous years. For more information on Arctic sea ice in 2012, visit NSIDC’s Arctic Sea Ice News and Analysis blog and the NASA news release on the observation. To learn more about sea ice basics, see the sea ice fact sheet.

Earth Indicator: 4 million

August 30th, 2012 by Michon Scott

This week’s Earth Indicator is 4 million…as in 4 million square kilometers. It’s a number that scientists studying sea ice never thought they would see.

Every year, the sea ice at the top of our planet shrinks and grows with the seasons. But because ocean water lags behind the atmosphere in warming up and cooling down, Arctic sea ice reaches its maximum extent in March, and its minimum in September.

Since 1979, satellites have observed Arctic sea ice shrinking and growing, and scientists at the National Snow and Ice Data Center (NSIDC) have recorded the minimum and maximum extents, adding them to a growing archive of data about Earth’s frozen places. In tracking sea ice, NSIDC uses a five-day average to smooth out bumps in the daily sea ice extents. The numbers have been refined over time with the adoption of new processing and quality control measures, but the ballpark numbers have remained the same.
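A trailing five-day mean like the one NSIDC describes can be sketched in a few lines. This is only an illustration of the smoothing idea, not NSIDC’s actual processing code, and the daily values below are hypothetical:

```python
def running_mean(values, window=5):
    """Trailing running mean: each output is the average of the
    current value and the (window - 1) values before it."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

# Hypothetical daily extents (million sq km) with a one-day bump
daily = [4.10, 4.08, 4.30, 4.05, 4.03, 4.01]
smoothed = running_mean(daily)
print(smoothed)  # the 4.30 bump is damped in the averages
```

Averaging over five days keeps a single anomalous daily reading — a sensor glitch or a brief weather-driven swing — from registering as a record in its own right.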

Ten years ago, NSIDC scientists noted that Arctic sea ice set a new record minimum at 5.64 million square kilometers on September 14, 2002. New records followed: 5.32 million square kilometers on September 22, 2005; and 4.17 million square kilometers on September 18, 2007. In August 2007, Arctic sea ice extent dropped so rapidly that it set a new record before the usual end of the melt season. (Researchers at the University of Bremen declared a new record in 2011, based on slightly different calculation methods.)

In the summer of 2012, not only was the 2007 record broken, but within days the ice passed another barrier. By August 28, 2012, the five-day running average of Arctic sea ice extent had fallen below 4 million square kilometers (1.54 million square miles). NSIDC director Mark Serreze remarked: “It’s like breaking the four-minute mile. Nobody thought we could do that, but here we are.”

Arctic sea ice extent graph. Courtesy National Snow and Ice Data Center.

But whereas the four-minute mile was a milestone of human achievement, 4 million square kilometers of sea ice is not so inspiring. “Arctic sea ice extent is declining faster than it used to, and we can expect a continuing decline overall, with some inter-annual variability,” said NSIDC lead scientist Ted Scambos. “We have a less polar pole.”

The low sea ice extent for 2012 fits within a larger pattern. For the past several years, September Arctic sea ice extent numbers have remained well below the 1979–2000 average of 6.70 million square kilometers. The low extent from 2012 is no outlier.

Arctic sea ice minimum extents (million square kilometers)

September 22, 2005: 5.32

September 17, 2006: 5.78

September 18, 2007: 4.17

September 20, 2008: 4.59

September 13, 2009: 5.13

September 21, 2010: 4.63

September 11, 2011: 4.33

Reflecting on sea ice extent falling below 4 million square kilometers, NSIDC scientist Walt Meier remarked: “It’s just a number, but it signifies a changing Arctic.”

Earth Indicator: 3σ

August 7th, 2012 by Adam Voiland

Is that a three and an omicron? Nope. A three and a rho? Strike two. Our latest Earth Indicator is three-sigma.

In Greek, sigma (σ) is the 18th letter of the alphabet. In statistics, it’s the symbol for standard deviation, a measure of how spread out a set of data points is from the average (which statisticians often call the mean). A low standard deviation indicates that the data points are bunched close to the mean. A high standard deviation indicates the points are spread over a wide range of values.

In a standard bell curve, most data points (68 percent) fall within one standard deviation (1σ) of the mean (see the pink section in the graph below). The vast majority (95 percent, the combined pink and red sections of the graph) fall within two standard deviations (2σ). An even higher percentage (99.7 percent, the combined pink, red, and blue sections) fall within three standard deviations (3σ) of the mean. Just a tiny fraction of points are outliers that are more than three standard deviations from the mean. (See the parts of the graph with arrows pointing to 0.15%).
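Those percentages are easy to verify empirically. A sketch in Python, drawing a large sample from a normal distribution and counting how many points fall within 1, 2, and 3 standard deviations of the mean:

```python
import random
import statistics

random.seed(42)
# 100,000 draws from a normal distribution (mean 0, sigma 1)
data = [random.gauss(0, 1) for _ in range(100_000)]

mu = statistics.fmean(data)
sigma = statistics.stdev(data)

for k in (1, 2, 3):
    share = sum(abs(x - mu) <= k * sigma for x in data) / len(data)
    print(f"within {k} sigma: {share:.1%}")  # roughly 68%, 95%, 99.7%
```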

Now imagine that, instead of generic data points on a generic bell curve, the values are actually measurements of summer temperatures. That will give you a foundation for understanding the statistical analysis that James Hansen published this week in the Proceedings of the National Academy of Sciences.

One of his main findings, as seen in the graph above, is that the range of observed surface temperatures on Earth has shifted over time: rare 3-sigma temperature events (times when temperatures are abnormally warm) have grown more frequent because of global warming.

Here’s how Hansen puts it:

We have shown that these “3-sigma” (3σ) events, where σ is the standard deviation — seasons more than three standard deviations removed from “normal” climate — are a consequence of the rapid global warming of the past 30 years. Combined with the well-established fact that the global warming is a result of increasing atmospheric CO2 and other greenhouse gases, it follows that the increasingly extreme climate anomalies are human-made.

Want to learn more? A press release is available here. And Hansen has also published a summary of his paper (pdf) and an accompanying Q & A (pdf) that offer more details.

 

This Week’s Earth Indicator: 90

May 8th, 2012 by Adam Voiland

Credit: NASA

We bring you this week’s indicator—90—with a sigh.

Ninety is the combined number of Earth-observing instruments on NASA and NOAA satellites that are currently monitoring our planet. And that number is about to plunge, according to a National Research Council report released in May 2012. By 2020, there could be fewer than 20 instruments in orbit, and the total number of missions is expected to fall from 23 to just 6.

Many of these space-based instruments aren’t exactly household names. (MODIS, anyone? ASTER or ALI? AMSU-A or SORCE?) Still, they are our eyes and ears on the planet, as indispensable to understanding how it works and changes as our human senses are to navigating life on the surface. Without these satellites, the United States would be blind to most Earth systems, unable to effectively monitor the effects of global warming, the constant parade of volcanic eruptions, wildfires, droughts, dust storms, and hurricanes, or changes in crop health and air pollution.

The NRC study authors mince few words in explaining what the reduction would mean:

These precipitous decreases warn of a coming crisis in Earth observations from space, in which our ability to observe and understand the Earth system will decline just as Earth observations are critically needed to underpin important decisions facing our nation and the world. Advances in weather forecast accuracy may slow or even reverse, and gaps in time series of climate and other critical Earth observations are almost certain to occur.  When these long-running data streams fall silent, users requiring these observations will go unsupported, and progress toward understanding the Earth system and how it supports life may stagnate.

It’s worth noting that the committee only counted missions that have been officially proposed, funded, and given a launch date. It did not include missions that will likely come to fruition but have not yet been fully funded (the successor mission to GRACE, for example). That means the future fleet might not be quite as small as feared, but even the most optimistic estimates indicate a major decline in observing capability.

Media outlets including Wired and EarthSky have more details. The Washington Post wrote an editorial on the subject. NASA Chief Scientist Waleed Abdalati (video below) explains the importance of Earth-observing satellites and points out that most of those in orbit are already past their design lives and living on “borrowed time.”

This Week’s Earth Indicator: 76

April 11th, 2012 by Adam Voiland


This map, based on data from the Moderate Resolution Imaging Spectroradiometer (MODIS), shows average aerosol amounts around the world for March 2012. An optical thickness of less than 0.1 (palest yellow) indicates a crystal-clear sky with maximum visibility, whereas a value of 1 (reddish brown) indicates very hazy conditions.


This week’s indicator: 76. No, that is not a reference to the return time of Halley’s Comet (76 years) or the atomic number of the world’s densest natural element (the metal osmium). In this case, 76 is a percentage. And it’s a particular percentage that represents how much of the variability in North Atlantic sea temperatures new climate simulations attribute to small airborne particles called aerosols. British scientists at the Met Office Hadley Centre ran the simulations, and Nature published the number in a recent issue.

North Atlantic sea temperatures have gone through warm and cool phases over the last 150 years (a phenomenon called the Atlantic Multidecadal Oscillation, or AMO). The sea was cool, for example, during the 1900s–1920s and 1960s–1990s, while a warm phase occurred in the 1930s–1950s (see graph below). Since the mid-1990s, the North Atlantic has been in a warm phase.

The difference between average ocean surface temperatures over the North Atlantic and those over the global oceans has oscillated between cool and warm phases. Figure from the April 4, 2012 edition of Nature.

That may sound like arcane trivia, but the cycling of North Atlantic sea temperatures matters. Earlier research has linked its phase (warm or cool) to high-stakes weather events, such as the frequency of Atlantic hurricanes and drought in the Amazon Basin and the Sahel. Cool phases, for example, have coincided with decreased rainfall in the Amazon, more Atlantic hurricanes, and increased rain in the Sahel.

Conventional wisdom has held that the cycling of North Atlantic sea temperature is a natural phenomenon driven by ocean currents. The new climate simulations suggest that aerosols are the real culprit. The British team considered a number of aerosol types in their analysis, but the most important were sulfates, which come from volcanic eruptions and from humans burning fossil fuels.

The researchers used a state-of-the-art climate model to see if they could reproduce the changes in North Atlantic sea temperatures seen over the last 150 years. This wasn’t the first time scientists have tried this, but it was the first time any group did it so accurately. And the key to their success, the British team concluded, was that they incorporated better estimates of how aerosols affect clouds—something that most previous models omitted or only partially included.

How do aerosols (for the sake of simplicity, let’s just call them pollution for the moment) affect clouds, and how does that affect sea surface temperature? In short, pollution tends to brighten clouds (see illustration above), causing them to reflect more light back to space and cool the sea.

Clouds in clean air are composed of a relatively small number of large droplets (left). As a consequence, the clouds are somewhat dark and translucent. In air with high concentrations of aerosols, water can easily condense on the particles, creating a large number of small droplets (right). These clouds are dense, very reflective, and bright white. This influence of aerosols on clouds is called the “indirect effect.” NASA image by Rob Simmon.

After taking this indirect aerosol effect into account, the British team’s simulations suggested that the majority (the number they came up with was, of course, 76 percent) of the observed variability in sea temperatures seen since 1860 was due to cooling caused by sulfates from volcanic eruptions and from the buildup of industrial pollution. The results of the simulation also imply that sea temperatures have risen in recent decades because clean air regulations passed in the United States and Europe in the 1960s and 1970s have reduced levels of air pollution.

If the new simulation is right, it would be a big deal. It would not only mean, as a Nature News & Views article pointed out, that humans are the key factor driving changes in sea surface temperatures (and that cleaning up the air could be fueling hurricanes); it would also mean that the Atlantic Multidecadal Oscillation (AMO) doesn’t really exist.

Before you start mourning its death, however, realize there are some indications that this latest simulation may not be right. Understanding how aerosols affect clouds remains a young science, and the British team may have made some incorrect assumptions about how aerosols affect particular types of clouds. Plus, the model didn’t reproduce changes in the frequency of outbreaks of African dust storms, something that can affect the temperature of the tropical Atlantic. On top of all that, a number of other studies have come to very different conclusions.

Bottom line: stay tuned. Things can get messy at the cutting edge of science, but we’ll be keeping an eye out to see whether, and for how long, the 76 percent number holds.

Science is full of numbers, and here at the Earth Observatory we know they can sometimes be contradictory and confusing. In our new Earth Indicator column, we’ll pick a number from the many floating around in the science or popular press, unpack where it came from, and explain what it means. Also, a tip of the hat to NPR’s Planet Money team: they have a Planet Money indicator on their podcast that we like so much we decided to steal the name.