Archive for the ‘Earth Indicator’ Category

Why So Many Global Temperature Records?

January 21st, 2015 by Adam Voiland

Chart: annual global temperature anomalies through 2014, as reported by the major surface temperature records.

If you follow Earth and climate science closely, you have probably noticed that the media is abuzz every December and January with stories about how the past year ranked in terms of global temperatures. Was this the hottest year on record? In fact, it was. The Japan Meteorological Agency released data on January 5, 2015, showing that 2014 was the warmest year in its record. NASA and NOAA made a similar announcement on January 16. The UK Met Office, which maintains the fourth major global temperature record, posts its data here.

Astute readers then ask a deeper question: why do different institutions come up with slightly different numbers for the same planet? Although all four science institutions have strong similarities in how they track and analyze temperatures, there are subtle differences. As shown in the chart above, the NASA record tends to run slightly higher than the Japanese record, while the Met Office and NOAA records are usually in the middle.

There are good reasons for these differences, small as they are. Getting an accurate measurement of air temperature across the entire planet is not simple. Ideally, scientists would like to have thousands of standardized weather stations spaced evenly all around Earth’s surface. The trouble is that while there are plenty of weather stations on land, there are some pretty big gaps over the oceans, the polar regions, and even parts of Africa and South America.

The four research groups mentioned above deal with those gaps in slightly different ways. The Japanese group leaves areas without enough temperature stations out of its analysis, which therefore covers about 85 percent of the globe. The Met Office makes similar choices, meaning its record covers about 86 percent of Earth’s surface. NOAA takes a different approach, using nearby stations to interpolate temperatures in some areas that lack stations, giving the NOAA analysis 93 percent coverage of the globe. The group at NASA interpolates even more aggressively—areas with gaps are filled in from the nearest stations up to 1,200 kilometers away—and offers 99 percent coverage.
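
To make that concrete, here is a minimal, hypothetical sketch of nearest-station gap filling. It illustrates the general idea only, not the actual GISTEMP or NOAA procedure; the toy grid, stations, and anomaly values are invented, and only the 1,200-kilometer cutoff comes from the description above.

```python
# Illustrative sketch of nearest-station gap filling. This is NOT the real
# GISTEMP/NOAA algorithm -- just the basic idea of filling grid cells from
# the nearest station within a cutoff distance and reporting the coverage.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def fill_gaps(grid_cells, stations, cutoff_km=1200.0):
    """Assign each grid cell the anomaly of the nearest station within cutoff_km.
    grid_cells: list of (lat, lon); stations: list of (lat, lon, anomaly)."""
    filled = {}
    for lat, lon in grid_cells:
        nearest = min(stations, key=lambda s: haversine_km(lat, lon, s[0], s[1]))
        if haversine_km(lat, lon, nearest[0], nearest[1]) <= cutoff_km:
            filled[(lat, lon)] = nearest[2]  # within reach: borrow the nearest anomaly
        # otherwise the cell stays empty -- a coverage gap
    coverage = 100.0 * len(filled) / len(grid_cells)
    return filled, coverage

# Toy example: three hypothetical stations and a sparse 10-degree grid
stations = [(40.0, -105.0, 0.8), (52.0, 0.0, 0.6), (-34.0, 151.0, 0.5)]
grid = [(lat, lon) for lat in range(-85, 86, 10) for lon in range(-175, 176, 10)]
filled, coverage = fill_gaps(grid, stations)
print(f"Cells filled: {len(filled)} of {len(grid)} ({coverage:.1f}% coverage)")
```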

See the chart below to get a sense of where the gaps are in the various records. Areas not included in the analysis are shown in gray. JMA is Japan Meteorological Agency data, GISTEMP is the NASA Goddard Institute for Space Studies data, HadCRUT4 is the Met Office data, UAH is a satellite-based record maintained by the University of Alabama Huntsville (more on that below), NCDC is the NOAA data, and NCEP/NCAR is a reanalysis of weather model data from the National Centers for Environmental Prediction and the National Center for Atmospheric Research.

If you’re a real data hound, you may have heard about other institutions that maintain global temperature records. In the last few years, a group at UC Berkeley — one that was initially skeptical of the findings of the other groups — developed yet another approach, using data from even more temperature stations (37,000 stations as opposed to the 5,000-7,000 used by the other groups). For 2014, the Berkeley group came to the same conclusion: the past year was the warmest year on record, though its analysis has 2014 in a virtual tie with 2005 and 2010.

Rather than coming up with a way to fill the gaps, a few other groups have tried a completely different approach. A group from the University of Alabama Huntsville maintains a record of temperatures based on microwave sounders on satellites. The satellite-based record dates back 36 years, and the University of Alabama group has ranked 2014 as the third warmest on that record, though only by a very small margin. Another research group, Remote Sensing Systems, maintains a similar record based on satellite microwave sounders, although there are a few differences in the way the Remote Sensing Systems and University of Alabama teams handle gaps in the record and correct for differences between sensors. Remote Sensing Systems has 2014 as the sixth warmest on its record.

But let’s get back to the original question: why are there so many temperature records? One of the hallmarks of good science is that observations should be independently confirmed by separate research groups using separate methods when possible. And in the case of global temperatures, that’s exactly what is happening. Despite some differences in the year-to-year rankings, the trends observed by all the groups are roughly the same. They all show warming. They all find the most recent decade to be warmer than previous decades.

You may observe some hand-wringing and contrarian arguments about how one group’s ranking is slightly different than another and about how scientists cannot agree on the “hottest year” or the temperature trend. Before you get caught up in that, know this: the difference between the hottest and the second hottest or the 10th hottest and 11th hottest year on any of these records is vanishingly small. The more carefully you look at the graph at the top of this page, the more you’ll start to appreciate that the individual ranking of a given year hardly even matters. It’s the longer-term trends that matter. And, as you can see in that chart, all of the records are in remarkably good agreement.
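
If you want to try a basic version of that kind of trend calculation, here is a minimal sketch that fits an ordinary least-squares line to a short series of annual anomalies. The anomaly values are invented for illustration; the real series are available from the groups linked below.

```python
# Minimal least-squares trend calculation on a toy set of annual anomalies.
# The anomaly values below are invented -- download a real series from any
# of the groups linked below to repeat this with actual data.
def linear_trend(years, anomalies):
    """Return the ordinary least-squares slope in degrees C per decade."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(anomalies) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies))
    var_x = sum((x - mean_x) ** 2 for x in years)
    slope_per_year = cov_xy / var_x
    return slope_per_year * 10.0  # convert to degrees per decade

years = list(range(2005, 2015))
anomalies = [0.62, 0.58, 0.60, 0.49, 0.60, 0.66, 0.55, 0.58, 0.62, 0.68]  # toy values
print(f"Trend: {linear_trend(years, anomalies):+.2f} degrees C per decade")
```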

That said, if you are still interested in the minutiae of how these data sets are compiled and analyzed, as well as how they compare to one another, wade through the links below. Some of the sites will even explain how you can access the data and calculate the trends yourself.

+ UCAR’s Global Temperature Data Sets Overview and Comparison page.

+ NASA GISS’s GISTEMP and FAQ page.

+ NOAA’s Global Temperature Analysis page.

+ Met Office’s Hadley Centre Temperature page.

+ Japan Meteorological Agency’s Global Average Surface Temperature Anomalies page.

+ University of Alabama Huntsville Temperature page.

+ Remote Sensing Systems Climate Analysis, Upper Air Temperature, and The Recent Slowing in the Rise of Global Temperatures pages.

+ Berkeley Earth’s Data Overview page.

+ Moyhu’s list of climate data portals.

+ Skeptical Science’s Comparing All the Temperature Records, The Japan Meteorological Agency Temperature Records, Satellite Measurements of the Warming in the Troposphere, GISTEMP: Cool or Uncool, and Temperature Trend Calculator posts.

+ Tamino’s comparing Temperature Data Sets post.

+ NOAA/NASA 2014 Warmest Year on Record PowerPoint.

+ James Hansen’s Global Temperature in 2014 and 2015 update posted on his Columbia University page.

+ The Carbon Brief’s How Do Scientists Measure Global Temperature?

+ Yale’s A Deeper Look: 2014’s Warming Record and the Continued Trend Upwards post.

2014 Temperatures From A Regional Perspective

January 16th, 2015 by Adam Voiland

Map: global temperature anomalies in 2014.

NASA and NOAA announced today that 2014 brought the warmest global temperatures in the modern instrumental record. But what did the year look like on a more regional scale?

According to the Met Office, the United Kingdom experienced its warmest year since 1659. Despite the record-breaking temperatures, however, no single month was extremely warm. Instead, each month (with the exception of August) was consistently warm. The UK was not alone. Eighteen other countries in Europe experienced their hottest year on record, according to Vox.

The contiguous United States, meanwhile, experienced only its 34th warmest year since 1895, according to a NOAA analysis. The Midwest and the Mississippi Valley were particularly cool, while unusually warm conditions gripped the West. California, for instance, went through its hottest year on record. Temperatures in Alaska were also unusually warm; in Anchorage, they never dropped below 0 degrees Fahrenheit.

James Hansen, a retired NASA scientist, underscored this point in an update on his Columbia University website: “Residents of the eastern two-thirds of the United States and Canada might be surprised that 2014 was the warmest year, as they happened to reside in an area with the largest negative temperature anomaly on the planet, except for a region in Antarctica.”

According to Australia’s Bureau of Meteorology, 2014 was the third warmest year on record in that country. “Much of Australia experienced temperatures very much above average in 2014, with mean temperatures 0.91°C above the long-term average,” said the bureau’s assistant director of climate information services.

The map at the top of this page depicts global temperature anomalies in 2014. It does not show absolute temperatures, but instead shows how much warmer or cooler the Earth was compared to a baseline average from 1951 to 1980.  Areas that experienced unusually warm temperatures are shown in red; unusually cool temperatures are shown in blue.
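
As a small illustration of what an anomaly is, here is a minimal sketch that converts absolute temperatures at a single hypothetical station into an anomaly against a 1951–1980 baseline. All of the values are invented.

```python
# Converting an absolute temperature to an anomaly relative to a 1951-1980
# baseline. The monthly values here are invented; the point is the arithmetic.
def anomaly(observed, baseline_values):
    """Anomaly = observed value minus the mean of the baseline period."""
    baseline_mean = sum(baseline_values) / len(baseline_values)
    return observed - baseline_mean

# Hypothetical July mean temperatures (degrees C) at one station, 1951-1980
baseline_julys = [22.1, 21.8, 22.4, 21.9, 22.0] * 6  # 30 toy values
july_2014 = 23.1
print(f"July 2014 anomaly: {anomaly(july_2014, baseline_julys):+.2f} degrees C")
```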

Levels of particulate pollution rose above 130 micrograms per cubic meter in Salt Lake City on January 23, 2013.  That’s three times the federal clean-air limit, according to the U.S. Environmental Protection Agency.  Or, as the Associated Press put it, roughly equivalent to Los Angeles air on a bad day.

“The Salt Lake Valley has some of the worst wintertime pollution episodes in the West due to the combination of its increasing population and urbanization, as well as its geometry, with the valley sandwiched between mountain ranges to the west and east,”  University of Utah atmospheric scientist John Horel explained in a recent story published by the National Center for Atmospheric Research (NCAR).

Normally temperatures decline with altitude, but meteorological conditions across the Great Basin set up a situation in which the opposite occurred for a prolonged period. For much of January, temperatures increased with altitude, creating what’s known as a temperature inversion. During the peak of the inversion, temperatures were -15.5°C (4.1°F) at the surface and 7.6°C (45.7°F) at 2,130 meters (6,988 feet), University of Utah meteorologist Jim Steenburgh reported on his blog.

The result? The layer of warm air on top functioned like a lid, trapping pollutants in the valleys and preventing winds from dispersing the haze. The photograph above, taken by Roland Li on January 19, shows the sharp boundary an inversion can create.
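
To see why those numbers describe an inversion, here is a quick back-of-the-envelope check using the temperatures Steenburgh reported. The valley-floor elevation (roughly 1,290 meters) is an assumption added for illustration, and the typical lapse rate is the usual textbook value.

```python
# Back-of-the-envelope check of the January 2013 inversion, using the
# temperatures reported above. The valley-floor elevation (~1290 m) is an
# assumed round number for illustration.
surface_temp_c = -15.5    # at the valley floor
ridge_temp_c = 7.6        # at 2130 meters
surface_elev_m = 1290.0   # assumed valley-floor elevation
ridge_elev_m = 2130.0

dz_km = (ridge_elev_m - surface_elev_m) / 1000.0
observed_change = (ridge_temp_c - surface_temp_c) / dz_km  # degrees C per km
typical_change = -6.5                                      # textbook tropospheric value

print(f"Observed change with height: {observed_change:+.1f} C per km")
print(f"Typical (non-inverted) change: {typical_change:+.1f} C per km")
print("Inversion!" if observed_change > 0 else "No inversion")
```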

Air quality problems caused by temperature inversions are not unique to Salt Lake City. Cities in California (including Bakersfield, Fresno, Hanford, Los Angeles, and Modesto) and Pittsburgh, Pennsylvania, have some of the most severe short-term particle pollution problems in the nation, according to a list published by the American Lung Association.

Such events are a reminder that the United States is not immune from dangerous spikes in particulate pollution, though particulate levels don’t get as high as they do in some other parts of the world. (See this map of fine particulates for a global perspective.) Eastern China, for example, recently suffered through an extreme pollution event that was far more potent than the Salt Lake City event.

More:

Utah Department of Environmental Quality: About Inversions

Salt Lake City Office of the National Weather Service: What Are Weather Inversions?

Jeff Masters: Dangerous Air Pollution Episode in Utah

On September 16, 2012, the extent of sea ice in the Arctic Ocean dropped to 3.41 million square kilometers (1.32 million square miles). The National Snow and Ice Data Center (NSIDC) issued a preliminary announcement on September 19 noting that it was likely the minimum extent for the year and the lowest extent observed in the 33-year satellite record.

This year’s Arctic sea ice minimum was 760,000 square kilometers (290,000 square miles) below the previous record set on September 18, 2007. (For a sense of scale, that’s an ice-area loss larger than the state of Texas.)

The previous record minimum, set in 2007, was 4.17 million square kilometers, or 1.61 million square miles. The September 2012 minimum was 18 percent below the 2007 minimum, and 49 percent below the 1979–2000 average minimum.
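
Those percentages follow directly from the extents; here is a quick check of the arithmetic:

```python
# Quick check of the differences and percentages quoted above.
minimum_2012 = 3.41        # million square kilometers
minimum_2007 = 4.17
average_1979_2000 = 6.70   # 1979-2000 average September minimum

below_2007 = 100.0 * (minimum_2007 - minimum_2012) / minimum_2007
below_avg = 100.0 * (average_1979_2000 - minimum_2012) / average_1979_2000
print(f"Difference from the 2007 record: {minimum_2007 - minimum_2012:.2f} million sq km")
print(f"Below the 2007 minimum: {below_2007:.0f} percent")
print(f"Below the 1979-2000 average: {below_avg:.0f} percent")
```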

Graph courtesy National Snow and Ice Data Center.

Arctic sea ice typically reaches its minimum extent in mid-September, then begins increasing through the Northern Hemisphere fall and winter. The maximum extent usually occurs in March. Since the last maximum on March 20, 2012, the Arctic has lost a total of 11.83 million square kilometers (4.57 million square miles) of sea ice.

NSIDC uses a five-day running average for sea ice extent, and calculations of sea ice extent have been refined from those of previous years. For more information on Arctic sea ice in 2012, visit NSIDC’s Arctic Sea Ice News and Analysis blog and the NASA news release on the observation. To learn more about sea ice basics, see the sea ice fact sheet.

Earth Indicator: 4 million

August 30th, 2012 by Michon Scott

This week’s Earth Indicator is 4 million…as in 4 million square kilometers. It’s a number that scientists studying sea ice never thought they would see.

Every year, the sea ice at the top of our planet shrinks and grows with the seasons. But because ocean water lags behind the atmosphere in warming up and cooling down, Arctic sea ice reaches its maximum extent in March, and its minimum in September.

Since 1979, satellites have observed Arctic sea ice shrinking and growing, and scientists at the National Snow and Ice Data Center (NSIDC) have recorded the minimum and maximum extents, adding them to a growing archive of data about Earth’s frozen places. In tracking sea ice, NSIDC uses a five-day average to smooth out bumps in the daily sea ice extents. The numbers have been refined over time with the adoption of new processing and quality control measures, but the ballpark numbers have remained the same.
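
Here is a minimal sketch of that kind of five-day running average; the daily extent values are invented for illustration.

```python
# A simple five-day running (trailing) average, the kind of smoothing NSIDC
# applies to daily extents. The daily values below are invented.
def running_mean(values, window=5):
    """Average each value with the previous window-1 values."""
    out = []
    for i in range(window - 1, len(values)):
        out.append(sum(values[i - window + 1:i + 1]) / window)
    return out

daily_extent = [4.10, 4.07, 4.05, 4.02, 3.99, 3.97, 3.98, 3.96]  # toy values, million sq km
print([round(v, 3) for v in running_mean(daily_extent)])
```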

Ten years ago, NSIDC scientists noted that Arctic sea ice set a new record minimum at 5.64 million square kilometers on September 14, 2002. New records followed: 5.32 million square kilometers on September 22, 2005; and 4.17 million square kilometers on September 18, 2007. In August 2007, Arctic sea ice extent dropped so rapidly that it set a new record before the usual end of the melt season. (Researchers at the University of Bremen declared a new record in 2011, based on slightly different calculation methods.)

In the summer of 2012, not only was the 2007 record broken, but within days, the ice extent passed another barrier. As of August 28, 2012, the five-day running average of Arctic sea ice extent fell below 4 million square kilometers (1.54 million square miles). NSIDC director Mark Serreze remarked: “It’s like breaking the four-minute mile. Nobody thought we could do that, but here we are.”

Arctic sea ice extent graph. Courtesy National Snow and Ice Data Center.

But whereas the four-minute mile was a milestone of human achievement, 4 million square kilometers of sea ice is not so inspiring. “Arctic sea ice extent is declining faster than it used to, and we can expect a continuing decline overall, with some inter-annual variability,” said NSIDC lead scientist Ted Scambos. “We have a less polar pole.”

The low sea ice extent for 2012 fits within a larger pattern. For the past several years, September Arctic sea ice extent numbers have remained well below the 1979–2000 average of 6.70 million square kilometers. The low extent from 2012 is no outlier.

Arctic sea ice minimum extents (million square kilometers)

September 22, 2005: 5.32

September 17, 2006: 5.78

September 18, 2007: 4.17

September 20, 2008: 4.59

September 13, 2009: 5.13

September 21, 2010: 4.63

September 11, 2011: 4.33

Reflecting on sea ice extent falling below 4 million square kilometers, NSIDC scientist Walt Meier remarked: “It’s just a number, but it signifies a changing Arctic.”

Earth Indicator: 3σ

August 7th, 2012 by Adam Voiland

Is that a three-omicron? Nope. A three-rho? Strike two. Our latest Earth Indicator is three-sigma.

In Greek, sigma (σ) is the 18th letter of the alphabet. In statistics, it’s a symbol for standard deviation, a measure of how spread out a set of data points is from the average (which statisticians often call the mean). A low standard deviation indicates that the data points are bunched up close to the mean. A high standard deviation indicates the points are spread over a wide range of values.

In a standard bell curve, most data points (68 percent) fall within one standard deviation (1σ) of the mean (see the pink section in the graph below). The vast majority (95 percent, the combined pink and red sections of the graph) fall within two standard deviations (2σ). An even higher percentage (99.7 percent, the combined pink, red, and blue sections) fall within three standard deviations (3σ) of the mean. Just a tiny fraction of points are outliers that are more than three standard deviations from the mean. (See the parts of the graph with arrows pointing to 0.15%).
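
You can verify those percentages yourself from the normal distribution; here is a minimal sketch using only Python’s standard library.

```python
# Fraction of a normal distribution within k standard deviations of the mean:
# P(|Z| <= k) = erf(k / sqrt(2)).
import math

for k in (1, 2, 3):
    within = math.erf(k / math.sqrt(2))
    print(f"Within {k} sigma: {100 * within:.1f} percent")
```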

Now imagine that instead of generic data points on a generic bell curve the values are actually measurements of summer temperatures. That will give you a foundation for understanding the statistical analysis that James Hansen published this week in the Proceedings of the National Academy of Sciences.

One of his main findings, as seen in the graph above, is that the range of observed surface temperatures on Earth has shifted over time, meaning that rare 3-sigma temperature events (times when temperatures are abnormally warm) have grown more frequent because of global warming.

Here’s how Hansen puts it:

We have shown that these “3-sigma” (3σ) events, where σ is the standard deviation — seasons more than three standard deviations removed from “normal” climate — are a consequence of the rapid global warming of the past 30 years. Combined with the well-established fact that the global warming is a result of increasing atmospheric CO2 and other greenhouse gases, it follows that the increasingly extreme climate anomalies are human-made.
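
To see why warming the whole distribution makes 3-sigma seasons so much more common, here is a small sketch comparing the chance of exceeding a fixed +3σ threshold before and after the mean shifts warmer by one standard deviation. The one-sigma shift is an illustrative choice, not a figure from Hansen’s paper.

```python
# How a shift in the mean inflates the frequency of +3-sigma events.
# The one-sigma shift is an illustrative choice, not a value from the paper.
import math

def exceed_prob(threshold_sigma, mean_shift_sigma=0.0):
    """P(X > threshold) for a unit-variance normal whose mean has shifted."""
    z = threshold_sigma - mean_shift_sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

before = exceed_prob(3.0)        # stationary climate
after = exceed_prob(3.0, 1.0)    # same threshold, mean warmed by 1 sigma
print(f"Chance of a +3-sigma summer, old climate: {before:.5f}")
print(f"Same threshold after a 1-sigma warm shift: {after:.5f}")
print(f"Roughly {after / before:.0f} times more frequent")
```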

Want to learn more? A press release is available here. And Hansen has also published a summary of his paper (pdf) and an accompanying Q & A (pdf) that offer more details.


This Week’s Earth Indicator: 90

May 8th, 2012 by Adam Voiland

Credit: NASA

We bring you this week’s indicator—90—with a sigh.

Ninety is the combined number of Earth-observing instruments on NASA and NOAA satellites that are currently monitoring our planet. And that number is about to plunge, according to a National Research Council report released in May 2012. By 2020, there could be fewer than 20 instruments in orbit, and the total number of missions is expected to fall from 23 to just 6.

Many of these space-based instruments aren’t exactly household names. (MODIS, anyone? ASTER or ALI? AMSU-A or SORCE?) Still, they are our eyes and ears on the planet, as indispensable to understanding how it works and changes as our human senses are to navigating life on the surface. Without these satellites, the United States would be blind to most Earth systems, unable to effectively monitor the effects of global warming and the constant parade of volcanic eruptions, wildfires, droughts, dust storms, and hurricanes, or to keep tabs on crop health and air pollution.

The NRC study authors mince few words in explaining what the reduction would mean:

These precipitous decreases warn of a coming crisis in Earth observations from space, in which our ability to observe and understand the Earth system will decline just as Earth observations are critically needed to underpin important decisions facing our nation and the world. Advances in weather forecast accuracy may slow or even reverse, and gaps in time series of climate and other critical Earth observations are almost certain to occur.  When these long-running data streams fall silent, users requiring these observations will go unsupported, and progress toward understanding the Earth system and how it supports life may stagnate.

It’s worth noting that the committee only counted missions that have been officially proposed, funded, and given a launch date. It did not include missions that will likely come to fruition but have not yet been fully funded (the successor mission to GRACE, for example). That means the future fleet might not be quite as small as feared, but even the most optimistic estimates indicate a major decline in observing capability.

Media outlets including Wired and EarthSky have more details. The Washington Post wrote an editorial on the subject. NASA Chief Scientist Waleed Abdalati (video below) explains the importance of Earth-observing satellites and points out that most of those in orbit are already past their design lives and living on “borrowed time.”

This Week’s Earth Indicator: 76

April 11th, 2012 by Adam Voiland


This map, based on data from the Moderate Resolution Imaging Spectroradiometer (MODIS), shows average aerosol amounts around the world for March 2012. An optical thickness of less than 0.1 (palest yellow) indicates a crystal-clear sky with maximum visibility, whereas a value of 1 (reddish brown) indicates very hazy conditions.


This week’s indicator: 76. No, that is not a reference to the return time of Halley’s Comet (76 years) or the atomic number of the world’s densest natural element (the metal osmium). In this case, 76 is a percentage. And it’s a particular percentage: the share of the variability in North Atlantic sea temperatures that new climate simulations attribute to small airborne particles called aerosols. British scientists at the Met Office Hadley Centre ran the simulations, and Nature published the number in a recent issue.

North Atlantic sea temperatures have gone through warm and cool phases over the last 150 years (a phenomenon called the Atlantic Multidecadal Oscillation, or AMO). The sea was cool, for example, during the 1900s–1920s and 1960s–1990s, while a warm phase occurred in the 1930s–1950s (see graph below). Since the mid-1990s, the North Atlantic has been in a warm phase.

The difference between average ocean surface temperatures over the North Atlantic and those over the global oceans has oscillated between cool and warm phases. Figure from the April 4, 2012 edition of Nature.

That may sound like arcane trivia, but the cycling of North Atlantic sea temperatures matters. Earlier research has linked its phase (warm or cool) to high-stakes weather events, such as the frequency of Atlantic hurricanes and drought in the Amazon Basin and the Sahel. Cool phases, for example, have coincided with decreased rainfall in the Amazon, more Atlantic hurricanes, and increased rain in the Sahel.

Conventional wisdom has held that the cycling of North Atlantic sea temperature is a natural phenomenon driven by ocean currents. The new climate simulations suggest that aerosols are the real culprit. The British team considered a number of aerosol types in their analysis, but the most important were sulfates, which come from volcanic eruptions and from humans burning fossil fuels.

The researchers used a state-of-the-art climate model to see if they could reproduce the changes in North Atlantic sea temperatures seen over the last 150 years. This wasn’t the first time scientists have tried this, but it was the first time any group did it so accurately. And the key to their success, the British team concluded, was that they incorporated better estimates of how aerosols affect clouds—something that most previous models omitted or only partially included.

How do aerosols (for the sake of simplicity, let’s just call them pollution for the moment) affect clouds, and how does that affect sea surface temperature? In short, pollution tends to brighten clouds (see illustration above), causing them to reflect more light back to space and cool the sea.

Clouds in clean air are composed of a relatively small number of large droplets (left). As a consequence, the clouds are somewhat dark and translucent. In air with high concentrations of aerosols, water can easily condense on the particles, creating a large number of small droplets (right). These clouds are dense, very reflective, and bright white. This influence of aerosols on clouds is called the “indirect effect.” NASA image by Rob Simmon.

After taking this indirect aerosol effect into account, the British team’s simulations suggested that the majority (the number they came up with was, of course, 76 percent) of the observed variability in sea temperatures seen since 1860 was due to cooling caused by sulfates from volcanic eruptions and from the buildup of industrial pollution. The results of the simulation also imply that sea  temperatures have risen in recent decades because clean air regulations passed in the United States and Europe in the 1960s and 1970s have reduced levels of air pollution.
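
For readers wondering what “percent of the variability” means, here is a generic sketch of a variance-explained calculation. The observed and simulated series are invented, and this shows only the standard statistical idea, not the Hadley Centre team’s actual method.

```python
# Generic "fraction of variance explained" calculation. The observed and
# simulated series below are invented; this illustrates the statistic only,
# not the Hadley Centre team's actual analysis.
def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / len(values)

observed = [0.10, -0.05, 0.20, 0.15, -0.10, 0.05, 0.25, 0.00]   # toy SST anomalies
simulated = [0.08, -0.02, 0.18, 0.10, -0.08, 0.07, 0.22, 0.03]  # toy model output

residual = [o - s for o, s in zip(observed, simulated)]
explained = 1.0 - variance(residual) / variance(observed)
print(f"Variance explained: {100 * explained:.0f} percent")
```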

If the new simulation is right, it would be a big deal. It would not only mean, as a Nature News & Views article pointed out, that humans are the key factor driving changes in sea surface temperatures (and that cleaning up the air could be fueling hurricanes); it would also mean that the Atlantic Multidecadal Oscillation (AMO) doesn’t really exist.

Before you start mourning its death, however, realize there are some indications that this latest simulation may not be right. Understanding how aerosols affect clouds remains a young science, and the British team may have made some incorrect assumptions about how aerosols affect particular types of clouds. Plus, the model didn’t reproduce changes in the frequency of outbreaks of African dust storms, something that can affect the temperature of the tropical Atlantic. On top of all that, a number of other studies have come to very different conclusions.

Bottom line: stay tuned. Things can get messy at the cutting edge of science, but we’ll be keeping an eye out to see if and how long the 76 percent number holds.

Science is full of numbers and here at the Earth Observatory we know they can sometimes be contradictory and confusing. In our new Earth Indicator column, we’ll pick a number from the many floating around in the science or popular press, unpack where it came from, and explain what it means. Also, a tip of the hat to NPR’s Planet Money team. They have a Planet Money indicator on their podcast that we like so much we decided to steal the name.