Ensuring the accuracy of Earth’s long-term global and regional surface temperature records is a challenging, constantly evolving undertaking.
There are many reasons for this, including changes in the availability of data, technological advancements in how land and sea surface temperatures are measured, the growth of urban areas, and changes to where and when temperature data are collected. Over time, these changes can lead to measurement inconsistencies that affect temperature data records.
Scientists have been building estimates of Earth’s average global temperature for more than a century, using temperature records from weather stations. But before 1880, there simply weren’t enough data to make accurate calculations, resulting in larger uncertainties in these older records. Fortunately, consistent temperature estimates made by paleoclimatologists (scientists who study Earth’s past climate using environmental clues like ice cores and tree rings) give scientists context for understanding today’s observed warming of Earth’s climate, which has no historical parallel.
Over the past 140 years, we’ve gone from making some temperature measurements by hand to using sophisticated satellite technology. Today’s temperature data come from many sources, including more than 32,000 land weather stations, weather balloons, radar, ships and buoys, satellites, and volunteer weather watchers.
To account for all of these changes and ensure a consistent, accurate record of our planet’s temperature variations, scientists use information from many sources to make adjustments before incorporating temperature data into analyses of regional or global surface temperatures. This allows them to make “apples to apples” comparisons.
Let’s look more closely at why these adjustments are made.
To begin with, some temperature data are gathered by humans, and humans occasionally make mistakes in recording and transcribing observations. So, a first step in processing temperature data is quality control: identifying and eliminating erroneous data caused by such errors – a missed minus sign, a misread instrument, and the like.
Changes to Land Weather Stations
Next are changes to land weather stations. Temperature readings at weather stations can be affected by the physical location of the station, by what’s happening around it, and even by the time of day that readings are made.
For example, if a weather station is located at the bottom of a mountain and a new station is built on the same mountain at a higher location, the change in elevation could affect the station’s readings. If you simply averaged the old and new data sets, the station’s overall temperature readings would be lower beginning when the new station opens. Similarly, if a station is moved away from a city center to a less developed location like an airport, cooler readings may result, while if the land around a weather station becomes more developed, readings might get warmer. Such differences arise from how ground surfaces in different environments absorb and retain heat.
Then there are changes to the way that stations collect temperature data. Old technologies become outdated or instrumentation simply wears out and is replaced. Using new equipment with slightly different characteristics can affect temperature measurements.
Data adjustments may also be required if there are changes to the time of day that observations are made. If, for example, a network of weather stations adopts a uniform observation time, as they did in the United States, stations making such a switch will see their data affected, because temperature is dependent on time of day.
Scientists also make adjustments to account for station temperature data that are significantly higher or lower than those of nearby stations. Such out-of-the-ordinary readings typically have nothing to do with climate change; they are instead due to some human-produced change that puts the station’s readings out of line with its neighbors’. By comparing data with surrounding stations, scientists can identify abnormal measurements and ensure that they don’t skew overall regional or global temperature estimates.
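This neighbor-comparison step can be sketched in a few lines of code. The sketch below is purely illustrative — the threshold and the use of a simple median are assumptions for the example, not the procedure any particular agency uses:

```python
import statistics

def flag_outliers(station, neighbors, threshold=3.0):
    """Flag readings that differ sharply from the median of nearby stations.

    station   -- list of temperatures (deg C) for one station, one per month
    neighbors -- list of lists covering the same months for nearby stations
    threshold -- allowed departure (deg C) from the neighbor median before a
                 reading is flagged; 3.0 is an illustrative choice
    """
    flags = []
    for i, value in enumerate(station):
        # Median of what the surrounding stations saw in the same month
        neighbor_median = statistics.median(n[i] for n in neighbors)
        flags.append(abs(value - neighbor_median) > threshold)
    return flags
```

A reading of 5.0 °C when three neighbors report roughly 0 °C would be flagged for inspection, while a reading close to the neighbor median would pass.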
In addition, the number of land weather stations has grown over time, forming denser networks that improve the accuracy of temperature estimates in those regions. Scientists take these improvements into account so that data from areas with dense networks can be appropriately compared with data from areas with sparser networks.
Changes to Sea Surface Temperature Measurements
Much like the trends on land, sea surface temperature measurement practices have also changed significantly.
Before about 1940, the most common method for measuring sea surface temperature was to throw a bucket attached to a rope overboard from a ship, haul it back up, and read the water temperature. The method was far from perfect. Depending on the air temperature, the water temperature could change as the bucket was pulled from the water.
During the 1930s and ’40s, scientists began measuring the temperature of ocean water piped in to cool ship engines. This method was more accurate. Its impact on long-term ocean surface temperature records was to reduce the warming trend in global ocean temperatures that had been observed before that time. That’s because temperature readings from water drawn up in buckets are, on average, a few tenths of a degree Celsius cooler than readings of water taken at a ship’s engine intake.
Then, beginning around 1990, measurements from thousands of floating buoys began replacing ship-based measurements as the commonly accepted standard. Today, such buoys provide about 80% of ocean temperature data. Temperatures recorded by buoys are slightly lower than those obtained from ship engine room water intakes for two reasons. First, buoys sample water that is slightly deeper, and therefore cooler, than water samples obtained from ships. Second, the process of passing water samples through a ship’s inlet can slightly heat the water. To compensate for the addition of cooler water temperature data from buoys to the warmer temperature data obtained from ships, ocean temperatures from buoys in recent years have been adjusted slightly upward to be consistent with ship measurements.
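A minimal sketch of that kind of bias adjustment looks like the following. The offset value is an assumption chosen for illustration — published ship-minus-buoy offsets are on the order of a tenth of a degree Celsius and vary by analysis:

```python
def adjust_buoy_sst(buoy_temps_c, ship_minus_buoy_c=0.12):
    """Shift buoy sea surface temperatures upward so they can be combined
    with the historically warmer ship engine-intake record.

    ship_minus_buoy_c -- assumed, illustrative offset in deg C
    """
    return [t + ship_minus_buoy_c for t in buoy_temps_c]
```

Whether buoys are adjusted up or ships adjusted down makes no difference to the resulting trend; only the relative offset between the two records matters.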
So Many Climate Data Sets, So Little Disagreement
Currently, there are multiple independent climate research organizations around the world that maintain long-term data sets of global land and ocean temperatures. Among the best known are those produced by NASA, the National Oceanic and Atmospheric Administration (NOAA), the U.K. Meteorological Office's Hadley Centre/Climatic Research Unit (CRU) of the University of East Anglia, and Berkeley Earth, a California-based non-profit.
Each organization uses different techniques to make its estimates and adjusts its input data sets to compensate for changes in observing conditions, using data processing methods described in peer-reviewed literature.
Remarkably, despite the differences in methodologies used by these independent researchers, their global temperature estimates are all in close agreement. Moreover, they also match up closely to independent data sets derived from satellites and weather forecast models.
NASA’s GISTEMP Analysis
One of the leading data sets used to conduct global surface temperature analyses is the NASA Goddard Institute for Space Studies (GISS) surface temperature analysis, known as GISTEMP.
GISTEMP uses a statistical method that produces a consistent estimated temperature anomaly series from 1880 to the present. A “temperature anomaly” is a calculation of how much colder or warmer a measured temperature is at a given weather station compared to an average value for that location and time, which is calculated over a 30-year reference period (1951-1980). The current version of GISTEMP includes adjusted average monthly data from the latest version of the NOAA/National Centers for Environmental Information (NCEI) Global Historical Climatology Network analysis and its Extended Reconstructed Sea Surface Temperature data.
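In code, a temperature anomaly is just a subtraction against the station’s baseline climatology. This is a hypothetical sketch — the function name and the idea of the caller supplying twelve 1951–1980 monthly means are assumptions for the example:

```python
def temperature_anomaly(measured_c, baseline_monthly_means_c, month):
    """Return how far a reading departs from the station's average for that
    calendar month over the 1951-1980 reference period.

    measured_c               -- observed temperature (deg C)
    baseline_monthly_means_c -- 12 monthly means for this station, Jan..Dec
    month                    -- calendar month, 1-12
    """
    return measured_c - baseline_monthly_means_c[month - 1]
```

Working in anomalies rather than absolute temperatures is what lets stations at very different elevations and latitudes be compared and averaged together.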
GISTEMP uses an automated process to flag abnormal records that don’t appear to be accurate. Scientists then perform manual inspections on the suspect data.
GISTEMP also adjusts to account for the effects of urban heat islands, which are differences in temperatures between urban and rural areas.
The procedure used to calculate GISTEMP hasn’t changed significantly since the mid-1980s, except to better account for data from urban areas. While the growing availability of better data has led to adjustments in GISTEMP’s regional temperature averages, the adjustments haven’t impacted GISTEMP’s global averages significantly.
While raw data from an individual station are never adjusted, any station showing abnormal data resulting from changes in measurement method, its immediate surroundings, or apparent errors, is compared to reference data from neighboring stations that have similar climate conditions in order to identify and remove abnormal data before they are input into the GISTEMP method. While such data adjustments can substantially impact some individual stations and small regions, they barely change any global average temperature trends.
In addition, results from global climate models are not used at any stage in the GISTEMP process, so comparisons between GISTEMP and model projections are valid. All data used by GISTEMP are in the public domain, and all code used is available for independent verification.
The Bottom Line
Independent analyses conclude the impact of station temperature data adjustments is not very large. Upward adjustments of global temperature readings before 1950 have, in total, slightly reduced century-scale global temperature trends. Since 1950, however, adjustments to input data have slightly increased the recorded global warming, by less than 0.1 degree Celsius (less than 0.2 degrees Fahrenheit).
A final note: while adjustments are applied to station temperature data being used in global analyses, the raw data from these stations never change unless better archived data become available. When global temperature data are processed, the original records are preserved and are available online to anyone who wants them, at no cost. For example, the NOAA National Climatic Data Center’s U.S. and global records are publicly accessible.
Recently, an international research team published a comprehensive review in the journal Reviews of Geophysics on the state of our understanding of Earth's "climate sensitivity," a key measure of how much our climate will change as greenhouse gas emissions increase. By narrowing the range of estimates, the researchers found that climate sensitivity isn’t so low that it can be ignored, but it’s also not so high that there is no hope for the planet.
We asked the two NASA authors on the study — Kate Marvel, jointly of Columbia University in New York and NASA’s Goddard Institute for Space Studies (GISS) in New York; and GISS Director Gavin Schmidt — to discuss their roles in the study and its significance for understanding the impacts of our warming world on climate.
Q. What exactly is climate sensitivity and why is it important to know its true value?
Schmidt: “We know from studies of the past that Earth’s climate can change dramatically. The evidence shows that the amount of greenhouse gases in the atmosphere can vary over time and make a big difference to the climate. Scientists try to quantify that by estimating how much the surface air temperature, averaged over the whole globe, would change if we doubled the amount of one typical but specific greenhouse gas – carbon dioxide. That number, called climate sensitivity, has quite a wide uncertainty range, and that has big implications for how serious human-made climate change will be.”
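Schmidt’s definition can be turned into a small calculation. Because warming scales roughly with the logarithm of the CO2 concentration, a given sensitivity value implies an expected warming for any concentration. This sketch assumes the standard logarithmic scaling and the commonly used preindustrial reference of 280 ppm:

```python
import math

def expected_warming(sensitivity_c, co2_ppm, co2_reference_ppm=280.0):
    """Warming (deg C) implied by a climate sensitivity value: each doubling
    of CO2 relative to the reference adds `sensitivity_c` degrees."""
    return sensitivity_c * math.log2(co2_ppm / co2_reference_ppm)
```

By construction, a sensitivity of 3 °C gives exactly 3 °C of warming at a doubled concentration of 560 ppm, and zero warming at the reference concentration.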
Q. Your team was able to narrow the range of estimates of Earth's climate sensitivity by more than 43 percent, from the previously accepted range of 1.5 to 4.5 Kelvin first established in 1979 (roughly 3 to 9 degrees Fahrenheit), to a narrower range of 2.6 to 3.9 Kelvin (roughly 4.5 to 7 degrees Fahrenheit). Why is it important for scientists to narrow this range of uncertainty? What does it mean in practical terms to be able to reduce uncertainties in measuring climate sensitivity?
Schmidt: “Scientists would like to reduce that uncertainty so that we can have more confidence in how we need to mitigate and adapt to future changes. For instance, how much sea level might rise, or how heat waves will get worse, or rainfall patterns change, are tied to the climate sensitivity combined with our actions in changing the atmosphere. A higher climate sensitivity would mean we would have to do more to avoid big changes, while a lower value would mean we’d have more time to adapt. It’s useful to note that we expect to reach double carbon dioxide levels later this century, and that while a few degrees might not seem like much, it's a big deal for the planet. The difference between forests beyond the Arctic Circle or glaciers extending down to New York City is only a range of about 8 K (about 14 degrees Fahrenheit) in the global average, while it changes sea level by 150 meters (more than 400 feet)!”
Q. How can better estimates of climate sensitivity impact policy decisions?
Marvel: “The most important thing about climate sensitivity is that it's not zero. Increasing atmospheric carbon dioxide definitely makes it warmer and increases the risk of extreme weather like drought, downpours, and heat waves. But better estimates of climate sensitivity are important for motivating action. Our results show that it would be foolish to rely on nature to save us from climate change — we don't think it's likely that sensitivity is low. But conversely, it's unlikely that climate sensitivity is so high as to make action pointless.”
Schmidt: “I’m not sure that our policy decisions are that finely tuned to the science of climate sensitivity other than knowing that climate really is sensitive to increasing greenhouse gases. Many climate policies are robust to those uncertainties, but many adaptation decisions will depend on knowing how bad things will get.”
Q. Why has it been so difficult over the past 40 years to narrow this range? What made this new estimate possible?
Schmidt: “There are three main reasons why this has been difficult. First, knowledge of past climate change has been difficult to quantify in globally coherent ways. Of course, we have known about the ice ages for a century or more, but getting accurate estimates of the global changes in temperature, greenhouse gases, and ice sheets has taken time and has needed many scientists working on many different aspects of the problem to come together. Second, the climate change signal has taken time to come out of the ‘noise’ of normal variability. In the 1980s and 1990s, people were still arguing about whether the warming over the 20th century was significant, but with another 20 years of record-breaking temperatures, that has been very clearly shown. Third, our understanding of the processes in the climate that affect sensitivity — clouds, water vapor, aerosols, etc. — has improved immensely with the development of satellite remote sensing, and every decade we are producing better and more useful information. But as these lines of evidence have matured, the need to come up with new methods to tie them all together coherently has become acute — and that was the impetus for this roughly 4-year effort.”
Marvel: “Yes, and in modeling, clouds are some of the biggest wildcards. See go.ted.com/katemarvel.”
Q. What types of evidence did the team consider in reaching its conclusions? Where do the lines of evidence agree and disagree most substantially?
Schmidt: “There are three main sources of information: changes since the late 19th century that have been measured in real time, our understanding of physical processes (particularly clouds), and new and more complete information from periods in the paleoclimate record (the geological past) where the planet was significantly cooler or warmer than today. All of the lines of evidence are mostly commensurate, but specific issues mean that the recent record isn’t good at constraining the high-end values because of the imprecise role of aerosols, and paleoclimate change is less able to constrain the low end because of the uncertain nature of that data. Together, however, we can mostly rule those tails out.”
Q. What were a few of the most significant findings for each of the three lines of evidence studied (feedback processes, the historical warming record, and paleoclimate records)?
Marvel: “For a long time, many people thought that sensitivity estimates derived from paleoclimate — the far past — were incompatible with estimates derived from more recent observations. But there's a difference between a past climate state in which the planet has reached an equilibrium — a ‘new normal’ — and our current climate, where things are very much in flux and continuing to change. There is some uncertainty in just how different the future will look from what we're experiencing now — it's possible we're moving into a new world for which we don't have a recent analogue. And when we take that uncertainty into account in a rigorous way, we find that the far past and the near future may not be telling us such different things after all.”
Schmidt: “What was interesting was that by starting off with a view of climate sensitivity that was a little more sophisticated than people had used previously, we found that there was more coherence among the different lines of evidence than others had found, and since the information we are using really is very independent, that allowed us to narrow the uncertainty.”
Q. Your team used a "Bayesian approach" to calculate your estimated range of climate sensitivity. In layman's terms, what is that?
Schmidt: “A Bayesian approach is really just a mathematical representation of how we do science in general. We have an initial hypothesis, we get some evidence that may or may not support it, and then we update our understanding based on that evidence. And then we do it again (and again, and again, etc.). Over time, and as more evidence accumulates, we hopefully home in on the most correct answer. Using Bayesian methods allowed us to pull together disparate threads of evidence in a coherent way — allowing for different degrees of confidence in each of the lines of evidence. What is great is that in the future, as more evidence is discovered, we can continue the process and update our understanding again.”
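The idea can be sketched on a discrete grid of candidate sensitivity values. The two Gaussian “lines of evidence” below are invented for illustration — they are not the study’s actual likelihoods — but they show how combining independent evidence narrows the range:

```python
import math

def bayes_update(prior, likelihood):
    """One Bayesian update on a grid: posterior is proportional to
    prior times likelihood, renormalized to sum to 1."""
    post = [p * l for p, l in zip(prior, likelihood)]
    total = sum(post)
    return [p / total for p in post]

def gaussian(grid, mean, sd):
    """Unnormalized Gaussian likelihood evaluated at each grid point."""
    return [math.exp(-0.5 * ((s - mean) / sd) ** 2) for s in grid]

# Candidate sensitivities in deg C per CO2 doubling (illustrative grid).
grid = [s / 10 for s in range(10, 61)]       # 1.0, 1.1, ..., 6.0
posterior = [1.0 / len(grid)] * len(grid)    # start from a flat prior

# Fold in two hypothetical, independent lines of evidence.
posterior = bayes_update(posterior, gaussian(grid, 3.0, 1.2))
posterior = bayes_update(posterior, gaussian(grid, 3.2, 1.0))
```

Each update sharpens the distribution around the values both lines of evidence support, which is exactly the “update, and update again” process Schmidt describes.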
Q. What role did global climate models play in the team's findings?
Marvel: “Complex climate models are useful tools. But in this paper, we relied largely on observations: satellite and ground-based measurements of recent trends, paleoclimate datasets, and basic physical principles.”
Schmidt: “Climate models help frame the questions we are asking and can be examined to see how climate patterns in space and time connect to things we can directly observe. But we know that climate models have a lot of uncertainty related (for instance) to cloud processes, and so we didn’t use them directly to estimate sensitivity. You could, however, use our results to assess whether a climate model has a sensitivity that is within our independently constrained range.”
Q. Your new estimated range of Earth's climate sensitivity finds the value is around the mid-point of the previous estimate range rather than on the lower or higher end. What does that mean in practical terms for projections of Earth's global temperatures and Earth's climate in this century?
Schmidt: “It means that climate sensitivity is not so low that we can ignore it, nor is it so high that we should despair. Ultimately, it tells us that while human-made climate change is (and will continue to be) a problem, our actions as a society can change that trajectory.”
Q. How likely is it that Earth's climate sensitivity could be higher than 3.9 Kelvin? Lower than 2.6 Kelvin?
Schmidt: “There are subjective elements to the analysis we performed, and other people could decide to weight things a little differently. We explored some of these alternative choices and that broadens the uncertainty a little, but basically, we estimate that there is about a one-in-six chance that it was less than the low end, and one-in-six that it was higher than the high end. That’s not impossible, but, if true, then a lot of our assessments would have to be quite a ways off.”
Q. The concentration of carbon dioxide in Earth's atmosphere is currently around 414 ppm (parts per million). What are the projections for future carbon dioxide increases under the range of current emissions scenarios and how does having a better estimate of climate sensitivity improve our understanding of how our climate may change in the future?
Schmidt: “The future trajectory of carbon dioxide will depend on what we do as a society — if we decide to burn all the fossil fuels we can find, we could reach 900 ppm by the end of the century, but if we aggressively reduce emissions, we could stay below 500 ppm, maybe lower. The climate sensitivity tells us what we can expect in terms of temperature — from another 1 to 2 degrees Celsius (1.8 to 3.6 degrees Fahrenheit) for the low scenario, which would be very serious, to 4 to 7 degrees Celsius (7.2 to 12.6 degrees Fahrenheit) for the high-end scenario, which would be a disaster.”
Q. What about your study did you find most surprising?
Marvel: “How difficult it was to get everyone with all their different expertise working together on a big, joint effort. In the end, I think everyone realized how important it was and how this will be a strong basis for everyone’s future research.”
Schmidt: “How consistent the results were across all three different approaches.”
Q. What was your role in the study?
Marvel: “I was one of the lead scientists on the section looking at historical constraints on sensitivity, making sure that we took into account the differences in how things changed over the 20th century and how things will change going forward, and working to make sure that the uncertainties in historical climate records were properly included.”
Schmidt: “I worked mainly on the paleoclimate section, making sure that we used the most appropriate data from key periods in the planet’s history (like the last ice age or the last time carbon dioxide was as high as it is now — some 3 million years ago).”
Climate scientists will tell you a key challenge in studying climate change is the relative dearth of long-term monitoring sites around the world. One of the oldest continuously operating stations — the Mauna Loa Observatory on Hawaii’s Big Island, which monitors carbon dioxide and other key constituents of our atmosphere that drive climate change — has only been in operation since the late 1950s.
This obstacle is even more profound in the world’s coastal areas. In the global open ocean, the international Argo program’s approximately 4,000 drifting floats have observed currents, temperature, salinity and other ocean conditions since the early 2000s. But near coastlines, the situation is different. While coastal weather stations are plentiful, they focus on producing weather forecasts for commercial and recreational ocean users, data that aren’t necessarily useful for studying climate. The relative lack of long-term records of surface and deep-ocean conditions near coastlines has limited our ability to make accurate oceanographic forecasts.
A meteorological and oceanographic coastal station in the small Spanish coastal town of L’Estartit is a notable exception. Located in the Catalan Costa Brava region of the northwest Mediterranean Sea, the L’Estartit station has collected inland data on air temperature, precipitation, atmospheric pressure and humidity since 1969, and has also made oceanographic observations at least weekly since 1973. This makes L’Estartit the longest available uninterrupted oceanographic data time series in the Mediterranean. A new NASA-funded study presents a detailed analysis of the site, revealing climate trends for its Mediterranean coastal environment spanning nearly a half century.
The study, led by Jordi Salat of the Institut de Ciències del Mar (CSIC) in Barcelona, provides estimates of annual trends in sea and atmospheric temperature and sea level, along with seasonal trends. It also compares data from the site with previous and other estimates of climate trends in the region. Co-authors include Josep Pascual, also with CSIC; oceanographers Jorge Vazquez and Mike Chin of NASA’s Jet Propulsion Laboratory in Southern California; and Mar Flexas of Caltech, also in Southern California.
The Evolution of Modern Ocean Monitoring
The existence of the L’Estartit station reflects decades of scientific research. This body of work has established the vital role the ocean plays, in conjunction with our atmosphere, in shaping Earth’s global weather and climate. While sea level and sea state have been monitored regularly for some time, other measurements of oceanic conditions haven’t been as well-chronicled. In order to reconstruct the climate history of the ocean, scientists have typically relied on data from coastal tide gauges and stationary mooring stations, along with oceanographic cruises that weren’t generally part of any coordinated monitoring program.
By the 1980s, however, as Earth’s global climate warming trend became evident, scientists began to establish international programs to conduct long-term studies of the ocean. As a result, in recent years, scientists have increasingly acknowledged the value of having the oceanographic equivalent of weather forecasts. Maintaining regular, long-term records of air temperature, water temperatures at the surface and at various depths, winds, sea level, salinity, and other key oceanographic parameters gives scientists valuable information on long-term average values, how variable our climate is and on long-term changes and trends. Moreover, they help scientists better evaluate how humans are contributing to climate change.
Over the past 20 to 30 years, new technologies have given scientists the ability to monitor the ocean all the way from the sea surface to the ocean floor. These include satellites, drifters, gliders, moorings, buoys, Argo profilers and ship data. These data are used as inputs to computer models to estimate the state of the ocean, make ocean forecasts and estimate climate trends.
L’Estartit: Monitoring a Climate Hot Spot
Maintained by voluntary observer Josep Pascual in collaboration with CSIC and the authority of the marine protected area, the L’Estartit station is well positioned to monitor the Mediterranean, a region of our planet that’s significantly impacted by climate change. It lies at the southern end of a relatively narrow offshore continental shelf and along the coastal side of the Northern Current, the main along-slope ocean current in the northwestern Mediterranean.
You can think of the Mediterranean as sort of a miniature ocean, since most of the processes that take place in the global ocean also take place here, albeit at different time scales in some instances. Its relatively small size also makes it more accessible to monitoring than many other regions of the global ocean. Because it’s located in Earth’s mid latitudes, it experiences significant seasonal variations, which affect the way it exchanges heat with the atmosphere.
The L’Estartit site collects a broad array of oceanographic data. In addition to the data mentioned previously, the site began continuous measurements of potential daily evaporation in 1976, and has measured sea state, along with wind speed and direction, since 1988. With the installation of a tide gauge in the harbor in 1990, continuous sea level data have been collected. Also added in the 1990s were conductivity-temperature-depth (CTD) profiles and water samples to analyze the temperature and salinity of the water column.
L’Estartit’s long-term data record makes it possible for scientists to calculate trends for a variety of atmospheric and oceanic climate attributes, including air temperature, sea surface and sub-surface temperature to a depth of 80 meters (262 feet), air pressure, relative humidity, relative cloudiness, wind, salinity, changes in ocean stratification, estimates of favorable conditions for evaporation, sea level and precipitation.
“The long-term data set from L’Estartit is a treasure trove that’s useful for assessing the regional impacts of climate change and how it’s evolved over time,” said Vazquez. “The data can be used as reference for other areas in the Mediterranean. The strong agreement between the site’s measurements of sea surface temperatures and satellite data of sea surface temperatures demonstrates how L’Estartit can serve as both a long-term ground truth site to validate satellite observations and as a regional monitoring site for climate change.”
Vazquez says data from the site have been used in numerous climate research studies and have also been used to document a variety of extreme events, from cold spells and heat waves to storms.
A Half-Century of Climate Trends
The researchers’ analysis of the nearly 50-year data set reveals numerous climate trends. For example, air temperature has increased by an average of 0.05 degrees Celsius (0.09 degrees Fahrenheit) per year during this time. Sea surface temperature has increased by an average of 0.03 degrees Celsius (0.05 degrees Fahrenheit) per year, while the temperature of the ocean at a depth of 80 meters (262 feet) has increased by an average of 0.02 degrees Celsius (0.04 degrees Fahrenheit) per year.
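Trends like these are typically estimated with an ordinary least-squares fit of temperature against time. The minimal version below is illustrative, not the authors’ exact method:

```python
def trend_per_year(years, values):
    """Ordinary least-squares slope of `values` against `years`
    (in units of `values` per year)."""
    n = len(years)
    ybar = sum(years) / n
    vbar = sum(values) / n
    # Slope = covariance(years, values) / variance(years)
    num = sum((y - ybar) * (v - vbar) for y, v in zip(years, values))
    den = sum((y - ybar) ** 2 for y in years)
    return num / den
```

Applied to roughly 50 annual values, a slope of 0.05 °C per year corresponds to about 2.5 °C of air temperature increase over the record.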
While sea level in the Mediterranean decreased from the 1960s to the 1990s due to changes in the North Atlantic Oscillation (a multi-decadal cyclical fluctuation of atmospheric pressure over the North Atlantic Ocean that strongly influences winter weather in Europe, Greenland, northeastern North America, North Africa and northern Asia), it’s been on the rise since the mid-1990s. The L’Estartit data show that sea level at that site is currently rising at a rate of 3.1 millimeters (0.12 inches) per year.
The researchers found that some of the long-term climate trends they observed were more pronounced during some seasons than in others. For example, trends in air temperature and sea surface temperature were significantly stronger during spring, while the trend for ocean temperature at 80 meters was greatest during autumn. Among their other findings, they noted a small increase in the number of days per year with summer-like sea conditions. They also found a decrease of almost two days per year in conditions favorable for marine evaporation, which may be related to an observed decrease in springtime coastal precipitation.
Vazquez says the good statistical comparison between sea surface temperature values and trends from the L’Estartit data set and data from available satellite products is encouraging. “The long-term consistency of the direct measurements with our satellite data gives scientists the opportunity to validate climate trends across multiple decades,” he said. “Data from L’Estartit should serve as a wake-up call to the global climate science community to immediately begin similar initiatives and ensure their continuity over time.”
The L’Estartit data are available to the public free of charge. The digitized data are accessible at http://meteolestartit.cat/. The remote sensing data used in the study may be retrieved through NASA’s Physical Oceanography Distributed Active Archive Center (PO.DAAC) at http://podaac.jpl.nasa.gov.
When NASA climate scientists speak in public, they’re often asked about possible connections between climate change and extreme weather events such as hurricanes, heavy downpours, floods, blizzards, heat waves and droughts. After all, it seems extreme weather is in the news almost every day of late, and people are taking notice. How might particular extreme weather and natural climate phenomena, such as El Niño and La Niña, be affected by climate change, they wonder?
There’s no easy answer, says Joao Teixeira, co-director of the Center for Climate Sciences at NASA’s Jet Propulsion Laboratory in Pasadena, California, and science team leader for the Atmospheric Infrared Sounder (AIRS) instrument on NASA’s Aqua satellite. “Within the scientific community it’s a relatively well-accepted fact that as global temperatures increase, extreme precipitation will very likely increase as well,” he says. “Beyond that, we’re still learning.”
While there’s not yet a full consensus on the matter, in recent years a body of evidence linking extreme weather with climate change has begun to emerge. Evidence from satellites, aircraft, ground measurements and climate model projections is increasingly drawing connections. Quantifying those interconnections remains a big challenge.
“All our available tools have pros and cons,” says Teixeira. “Rain gauges, for example, provide good measurements, but they’re local and spread far apart. In contrast, satellites typically measure climate variables (such as precipitation, temperature and humidity) indirectly and don’t yet have long enough data records to establish trends, though that’s beginning to change. In addition, representing small-scale processes of the atmosphere that are key to extreme weather events in climate models, such as turbulence, convection and cloud physics, is notoriously difficult. So, we’re in a bit of a conundrum. But great progress is being made as more studies are conducted.”
A simple analogy describes how difficult it is to attribute extreme weather to climate change. Adding fossil fuel emissions to Earth’s atmosphere increases its temperature, which adds more energy to the atmosphere, supercharging it like an athlete on steroids. And just as it’s difficult to quantify how much of that athlete’s performance improvement is due to steroid use, so too it’s difficult to say whether extreme weather events are definitively due to a warmer atmosphere.
Are Supercharged Atlantic Hurricane Seasons a Case in Point?
Take hurricanes, for example. A hot topic in extreme weather research is how climate change is impacting the strength of tropical cyclones. A look at the 2019 Atlantic hurricane season provides a case in point.
After a quiet start to the 2019 season, Hurricane Dorian roared through the Atlantic in late August and early September, surprising many forecasters with its unexpected and rapid intensification. In just five days, Dorian grew from a minimal Category 1 hurricane to a Category 5 behemoth, reaching a peak intensity of 185 miles (295 kilometers) per hour when it made landfall in The Bahamas. In the process, Dorian tied an 84-year-old record for strongest landfalling Atlantic hurricane and became the fifth most intense recorded Atlantic hurricane to make landfall, as measured by its barometric pressure.
Two weeks later the remnants of Tropical Storm Imelda swamped parts of Texas under more than 40 inches (102 centimeters) of rain, enough to make it the fifth wettest recorded tropical cyclone to strike the lower 48 states. Fueled by copious moisture from a warm Gulf of Mexico, the slow-moving Imelda’s torrential rains and flooding wreaked havoc over a wide region.
Then in late September, Hurricane Lorenzo became the most northerly and easterly Category 5 storm on record in the Atlantic, even affecting the British Isles as an extratropical cyclone.
Earth’s atmosphere and oceans have warmed significantly in recent decades. A warming ocean creates a perfect cauldron for brewing tempests. Hurricanes are fueled by heat in the top layers of the ocean and require sea surface temperatures (SSTs) greater than 79 degrees Fahrenheit (26 degrees Celsius) to form and thrive.
Since 1995 there have been 17 above-normal Atlantic hurricane seasons, as measured by NOAA’s Accumulated Cyclone Energy (ACE) Index. ACE calculates the intensity of a hurricane season by combining the number, wind speed and duration of each tropical cyclone. That’s the largest stretch of above-normal seasons on record.
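NOAA’s ACE definition can be sketched in a few lines of code: sum the squares of a storm’s maximum sustained wind speed (in knots), sampled every six hours while the storm is at tropical storm strength or above, and scale the total down by 10⁻⁴. The wind values below are invented for illustration; the threshold and scaling follow NOAA’s standard definition.

```python
# Sketch of NOAA's Accumulated Cyclone Energy (ACE) index for one storm.
# ACE = 1e-4 * sum of (6-hourly max sustained wind, in knots)^2,
# counting only samples at tropical storm strength (>= 35 knots).

def ace(six_hourly_winds_kt):
    """ACE for one storm, in units of 1e4 kt^2."""
    return 1e-4 * sum(v**2 for v in six_hourly_winds_kt if v >= 35)

# A hypothetical storm that strengthens, peaks, and weakens:
storm = [30, 40, 55, 70, 95, 120, 110, 80, 50, 30]
print(ace(storm))  # sub-threshold samples at the start and end are excluded
```

A season’s total ACE is just this sum carried over every tropical cyclone in the season, which is how a year with a few long-lived, intense storms can rank above a year with many weak ones.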
So while there aren’t necessarily more Atlantic hurricanes than before, those that form appear to be getting stronger, with more Category 4 and 5 events.
NASA Research Points to an Increase in Extreme Storms Over Earth’s Tropical Oceans
What does NASA research have to say about extreme storms? One NASA study from late 2018 supports the notion that global warming is causing the number of extreme storms to increase, at least over Earth’s tropical oceans (between 30 degrees North and South of the equator).
A team led by JPL’s Hartmut Aumann, AIRS project scientist from 1993 to 2012, analyzed 15 years of AIRS data, looking for correlations between average SSTs and the formation of extreme storms. They defined extreme storms as those producing at least 0.12 inches (3 millimeters) of rain per hour over a certain-sized area. They found that extreme storms formed when SSTs were hotter than 82 degrees Fahrenheit (28 degrees Celsius). The team also saw that for every 1.8 degrees Fahrenheit (1 degree Celsius) that SST increased, the number of extreme storms went up by about 21 percent. Based on current climate model projections, the researchers concluded that extreme storms may increase 60 percent by the year 2100.
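The study’s two headline numbers can be checked against each other. If each 1 degree Celsius of SST warming compounds the storm count by about 21 percent, a roughly 60 percent rise by 2100 implies about 2.5 degrees Celsius of tropical SST warming. The compounding assumption in this back-of-envelope sketch is ours, not spelled out in the study:

```python
import math

# If extreme-storm counts grow ~21% per 1 deg C of SST warming and that
# growth compounds, the warming implied by a 60% total increase is:
rate = 0.21   # fractional increase per deg C (from the AIRS study)
total = 0.60  # projected total increase by 2100 (from the AIRS study)

implied_warming = math.log(1 + total) / math.log(1 + rate)
print(f"implied SST warming: {implied_warming:.1f} deg C")
```

That implied 2.5 degrees Celsius is broadly consistent with end-of-century tropical warming in mid-to-high emissions scenarios, which is presumably why the researchers’ projection lands near 60 percent.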
Thanks to weather satellites, scientists have identified possible correlations between the extremely cold clouds seen in thermal infrared satellite images (called deep convective clouds) and extreme storms observed on the ground under certain conditions, especially over the tropical oceans. When these towering clouds reach the top of Earth’s lowest atmospheric layer, the troposphere, they produce torrential rain and hail.
AIRS can’t measure precipitation directly from space, but it can measure the temperature of clouds with extraordinary accuracy and stability. Its data can also be correlated with other climate variables such as SSTs, for which scientists maintain long data records.
To determine the number of extreme storms, Aumann’s team plotted the number of deep convective clouds each day against measurements of sea surface temperature. They found that the number of these clouds correlated with increases in sea surface temperature.
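The shape of that analysis, daily storm counts regressed against daily sea surface temperature, can be sketched with synthetic data. Every number below is invented for illustration; only the structure (count a quantity each day, pair it with an SST value, then compute a correlation and slope) mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the AIRS analysis: a daily tropical-mean SST
# (deg C) and a daily count of deep convective clouds that rises with
# SST, plus noise. All values are invented for illustration.
sst = rng.uniform(26.0, 30.0, size=365)
clouds = np.maximum(0, 40 * (sst - 26.0) + rng.normal(0, 10, size=365)).round()

# Correlation, and a least-squares slope (extra clouds per deg C of SST):
r = np.corrcoef(sst, clouds)[0, 1]
slope, intercept = np.polyfit(sst, clouds, 1)
print(f"correlation r = {r:.2f}, slope = {slope:.0f} clouds per deg C")
```

A strong positive correlation in such a plot is what lets the cloud count serve as a proxy for extreme-storm frequency, even though AIRS never measures rainfall directly.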
The results of this study reflect a long line of AIRS research and three previously published papers. The researchers say large uncertainties and speculations remain regarding how extreme storms may change under future climate scenarios, including the possibility that a warming climate may result in fewer but more intense storms. But the results of this study point to an intriguing direction for further research.
What Lies Ahead?
Aumann is confident future studies will reveal additional insights into how severe storms detected as individual deep convective clouds coalesce to form tropical storms and hurricanes. He notes that if you look at these clouds over the global ocean, they frequently occur in clusters.
“AIRS sees hurricanes as hundreds of these clusters,” he said. “For example, it saw Hurricane Dorian as a cluster of about 150 deep convective clouds, while Hurricane Katrina contained about 500. If you look at a weather satellite image, you’ll see the severe storms that make up a hurricane are not actually contiguous. In fact, they’re uncannily similar to the stars within the spiral arms of a galaxy. It’s one severe thunderstorm after another, each dumping a quantity of rain on the ground.
“AIRS has 2,400 different frequency channels, so it’s a very rich data set,” he said. “In fact, there’s so much data, our computer capabilities aren’t able to explore most of it. We just need to ask the right questions.”
Periodically, we receive queries asking if Earth is cooling. Although multiple lines of converging scientific evidence show conclusively that our climate is warming, stories sometimes appear in the media calling that into question. New studies are interpreted as contradicting previous research, or data are viewed to be in conflict with established scientific thinking.
Last spring, for example, a number of media outlets and websites reported on a story that looked at data acquired from NASA’s Goddard Institute for Space Studies (GISS) Surface Temperature Analysis (GISTEMP), which estimates changes in global surface temperature. The article discussed a short-term cooling period that showed up in the data in 2017 and 2018 and correctly stated that short-term cooling cycles are “statistical noise compared to the long-term trend.”
Afterward, we received some queries from readers who wanted to know if this finding meant a significant period of global cooling either could be or already was under way.
The answer is no. This story is a great example of why focusing on just a short period of time – say, one, two or even several years – doesn’t tell you what’s really going on with the long-term trends. In fact, it’s likely to be misleading.
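The pitfall is easy to demonstrate with a toy series: a steady warming trend buried in year-to-year noise. Trend lines fit over just a few years swing wildly, while a fit over the full record recovers the underlying warming. All numbers here are synthetic and chosen only to be loosely temperature-like:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy global-temperature anomaly: a steady +0.018 deg C/yr trend plus
# year-to-year variability (synthetic data, for illustration only).
years = np.arange(1980, 2020)
true_trend = 0.018
anomaly = true_trend * (years - years[0]) + rng.normal(0, 0.1, len(years))

# Trend fit over the full 40-year record:
full_slope = np.polyfit(years, anomaly, 1)[0]

# Trend fits over every 5-year window disagree sharply with one another,
# and some may even point downward:
window_slopes = [np.polyfit(years[i:i+5], anomaly[i:i+5], 1)[0]
                 for i in range(len(years) - 5)]

print(f"full-record trend: {full_slope:+.3f} deg C/yr")
print(f"5-yr window trends range from {min(window_slopes):+.3f} "
      f"to {max(window_slopes):+.3f} deg C/yr")
```

The full-record fit lands close to the true trend; the short windows scatter all over it. Cherry-picking one of those windows is exactly the mistake the queries above are built on.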
So, what’s really important to know about studying global temperature trends, anyway?
Well, to begin with, it’s vital to understand that global surface temperatures are a “noisy” signal, meaning they’re always varying to some degree due to constant interactions between the various components of our complex Earth system (e.g., land, ocean, air, ice). The interplay among these components drives our weather and climate.
For example, Earth’s ocean has a much higher capacity to store heat than our atmosphere does. Thus, even relatively small exchanges of heat between the atmosphere and the ocean can result in significant changes in global surface temperatures. In fact, more than 90 percent of the extra heat from global warming is stored in the ocean. Periodically occurring ocean oscillations, such as El Niño and its cold-water counterpart, La Niña, have significant effects on global weather and can affect global temperatures for a year or two as heat is transferred between the ocean and atmosphere.
This means that understanding global temperature trends requires a long-term perspective. An examination of two famous climate records illustrates this point.
You may be familiar with the Keeling Curve (above), a long-term record of global carbon dioxide concentrations. It’s not a straight line: The curve jiggles up and down every year due to the seasonal cycling of carbon dioxide. But the long-term trend is clearly up, especially in recent decades. As countries around the world rapidly develop and gross domestic products increase, human-produced emissions of carbon dioxide are accelerating.
During fall and winter in the Northern Hemisphere, when trees and plants begin to lose their leaves and decay, carbon dioxide is released into the atmosphere, mixing with emissions from human sources. This, combined with fewer trees and plants removing carbon dioxide from the atmosphere, allows concentrations to climb in winter, reaching a peak by early spring. During spring and summer in the Northern Hemisphere, plants absorb a substantial amount of carbon dioxide through photosynthesis.
Similarly, the above graph of long-term independent global temperature records maintained by NASA, NOAA and the UK’s Climatic Research Unit doesn’t show perfectly straight lines, either. There are ups and downs, and depending on when you start and stop, it’s easy to find numerous periods spanning multiple years where no warming occurred or when global temperatures even decreased. But the long-term trend is clearly up. To learn more about the relationship between carbon dioxide and other greenhouse gases and climate change, visit NASA’s Global Climate Change website.
Growing Confidence in Earth Temperature Measurements
Scientists continue to grow increasingly confident that measurements of Earth’s long-term temperature rise in recent decades are accurate. For example, an assessment published earlier this year¹ of NASA’s GISTEMP record of global temperatures found that the estimate is accurate to within less than one-tenth of a degree Fahrenheit in recent decades. The authors concluded that Earth’s approximately 1 degree Celsius (2 degrees Fahrenheit) global temperature increase since 1880 can’t be explained by any uncertainty or data error. The recent trends were also validated with data from the Atmospheric Infrared Sounder (AIRS) instrument on NASA’s Aqua satellite.
Global Warming Is 'Global'
What’s perhaps most important to remember about global surface temperature fluctuations is that despite short-term ups and downs, the evidence shows that our planet is steadily accumulating heat. Scientists assessing global warming study Earth’s entire heat content, not just what happens in one part of the atmosphere or one component of the Earth system. And what they have found is that the balance of energy in the Earth system is out of whack: Our lower atmosphere is warming, the ocean is accumulating more energy, land surfaces are absorbing energy, and Earth’s ice is melting.
A study by Church et al. (2011) found that since 1970, Earth’s heat content has risen at a rate of 6 x 10²¹ joules a year. That’s the equivalent of taking the energy output of about 190,000 nuclear power plants and dumping it into the ocean every year.
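That equivalence is simple arithmetic and worth checking. At a representative output of about 1 gigawatt per nuclear plant (the 1 GW figure is our assumption, not part of the study), 6 x 10²¹ joules per year works out to roughly 190,000 plant-years of energy:

```python
# Back-of-envelope check of the "190,000 nuclear power plants" comparison.
# Assumes a representative plant output of 1 gigawatt (our assumption).
heat_gain_per_year = 6e21                 # joules/year (Church et al., 2011)
plant_power = 1e9                         # watts (1 GW)
seconds_per_year = 365.25 * 24 * 3600     # ~3.156e7 s

plant_energy_per_year = plant_power * seconds_per_year  # ~3.16e16 J/year
equivalent_plants = heat_gain_per_year / plant_energy_per_year
print(f"{equivalent_plants:,.0f} plants")
```

The result lands right around 190,000, confirming the comparison in the text.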
Despite short-term decreases in global temperature, the long-term trend shows that Earth continues to warm.
- Lenssen, N., G. Schmidt, J. Hansen, M. Menne, A. Persin, R. Ruedy, and D. Zyss, 2019: Improvements in the GISTEMP uncertainty model. J. Geophys. Res. Atmos., early view, doi:10.1029/2018JD029522.