In loving memory of Larry Corp.
Goddard’s LiDAR, Hyperspectral, & Thermal Imager (G-LiHT) is an airborne instrument designed to map the composition of forested landscapes.
The G-LiHT instrument has a number of sensors, each serving a specific purpose. Two LiDAR sensors produce a series of LiDAR-derived forest structure metrics, including a canopy height model, a surface model, and a digital terrain model. These models allow us to measure tree height and biomass volume.
Additionally, there are two cameras: one visible and one near-infrared (NIR). The visible and NIR bands acquired by the two cameras are paired to produce 4-band imagery. The 3-centimeter resolution photos taken by these cameras are aligned to build orthomosaics, which allow us to visually observe and identify changes in forest composition.
G-LiHT also has a hyperspectral sensor that acquires spectral information at a coarser spatial resolution. These data can be used to identify vegetation composition, measure photosynthetic function, and calculate vegetation indices at fine spectral resolution and 1-meter spatial resolution using radiometrically calibrated surface reflectance data.
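To give a sense of what "calculating vegetation indices from surface reflectance" means in practice, here is a minimal sketch (not G-LiHT's actual processing code) of the widely used Normalized Difference Vegetation Index (NDVI), computed per pixel from red and near-infrared reflectance. The band values below are illustrative, not real G-LiHT measurements.

```python
import numpy as np

# Hypothetical surface reflectance values (0-1) for three pixels;
# these numbers are made up for illustration.
red = np.array([0.08, 0.10, 0.30])
nir = np.array([0.45, 0.50, 0.35])

# NDVI = (NIR - Red) / (NIR + Red)
# Healthy, dense vegetation reflects strongly in the NIR and absorbs
# red light, so its NDVI approaches 1; bare ground sits near 0.
ndvi = (nir - red) / (nir + red)
```

The same elementwise arithmetic applies unchanged when `red` and `nir` are full 2D raster bands instead of small test arrays.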
The thermal sensor measures radiant surface temperature, which allows us to create 3D temperature profiles derived from structure-from-motion. Thermal data provide information on the functional aspects of forest canopies. Because photosynthetic function is related to evapotranspiration, hotter canopies indicate greater stress relative to surrounding canopies.
The G-LiHT airborne mission supports multiple groups including the U.S. Forest Service (USFS), the USFS Geospatial Technology and Applications Center (GTAC), and the University of Alaska Anchorage.
The USFS is creating a forest inventory for the state of Alaska, and G-LiHT measurements collected over Forest Inventory and Analysis (FIA) plots are a cost-effective method of forest inventory. G-LiHT data will also help improve regional estimates of aboveground forest biomass and terrestrial ecosystem carbon stocks. GTAC uses G-LiHT measurements for algorithm development and will use G-LiHT data acquired over, and between, FIA- and GTAC-measured ground plots to map forest characteristics on federally managed lands, including forest type, biomass, vegetation structure, tree and shrub cover, and more. Data will also be used to guide future inventory efforts in coastal Alaska using methods developed for interior Alaska.
This field campaign also acquired repeat data over Fairbanks, Alaska, to measure changes in permafrost.
G-LiHT image data was reacquired over spruce beetle monitoring transects stretching from the Kenai Peninsula in the south to Denali National Park in the north. These transects were last measured on the ground and with G-LiHT in 2018, during the peak of a spruce beetle outbreak, and changes in vegetation structure and spectral reflectance will be used to evaluate the long-term mortality and growth of these forests.
Our Alaskan field campaign started with an integrative test flight in June. Our team of three loaded up G-LiHT into a vehicle much too small and drove to Dynamic Aviation in Bridgewater, Virginia. We spent the first day installing the instrument into a 1960s King Air A90.
The second day was all about flying. We needed to make sure G-LiHT didn’t interfere with any of the aircraft’s systems. Additionally, the functional test flight over Harrisonburg, Virginia, allowed us to verify that G-LiHT was functioning properly. We flew in a grid pattern over the city which allowed us to geospatially align the data products from all of G-LiHT’s sensors.
The integrative test flight was a success. We installed G-LiHT properly with no issues and obtained the information we needed. Once we received the thumbs up to proceed with our campaign, the pilots loaded up the plane with supplies and headed out to Kodiak, where we would meet them the following week.
Our plan for the field campaign was to arrive in Kodiak, Alaska, on July 6 and stay until the end of the month. We chose Kodiak as our hub because it was conveniently located near our flight lines. Unfortunately, despite the ideal location, poor weather prevented us from flying for the first three days of the campaign.
Once we were finally able to get in the air, we collected data over the forests near Iliamna.
Most of our days consisted of our team meeting in the hotel for breakfast at 8 a.m., discussing weather and flight plans for the day, and then driving to the airport to prepare the plane and G-LiHT for flying. Depending on how many flight lines we were able to complete, we often stopped in King Salmon or Iliamna to refuel the plane and then went back out to fly more lines before returning to Kodiak.
Our group was interested in measuring the effects of forest fires on vegetation in the Dillingham region. There were several burned areas to the west of the Nuyakuk River and east of Cook Inlet.
Toward the end of the campaign, we decided to transit to Fairbanks because the weather over the rest of our flight lines didn’t look promising. If there were clouds below the plane at 1,100 feet, they would obstruct the instrument’s view and cast shadows on our data, so we had to closely monitor the weather every morning. We were also unable to fly in rain or smoke, as either would adversely affect the LiDAR sensors’ data returns.
One geological feature we saw extensively in the southwest was the oxbow lake. Also called cut-off lakes, these form when a meandering river erodes the necks of its bends until two stretches of the channel join, creating a new, straighter channel and essentially “cutting off” the curved section, which becomes an oxbow lake. Once the lake has fully dried out, it becomes a meander scar. We noted how the vegetation growing back within the oxbow lakes and meander scars differs from surrounding vegetation patterns.
We had only planned to spend one night in Fairbanks, then transit back to Kodiak the following day. However, the weather had other plans for us. We ended up having to fly to Anchorage the following day because of extremely low cloud ceilings in Kodiak that made it too dangerous to land there. It worked out in the end, and the team was able to see more of beautiful Alaska and collect data over Anchorage and the Chugach region. It just goes to show how quickly things can change during a field campaign.
We collected data in the Campbell Creek region west of Anchorage. The data include visible and near-infrared photos which were composited into 4-band high-resolution orthomosaics and used to visually observe and identify changes in forest composition.
In addition to the high-resolution orthomosaics produced from G-LiHT’s near-infrared and visible cameras, LiDAR data were processed to create various 1-meter-resolution forest structure metrics, including a Digital Terrain Model (DTM), a Digital Surface Model (DSM), and a Canopy Height Model (CHM). These metrics are used to measure tree height and biomass volume. The CHM raster below was created by subtracting the DTM from the DSM.
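The CHM arithmetic described above can be sketched in a few lines. This is a toy illustration with made-up elevation values, not the actual G-LiHT pipeline; real processing would read georeferenced rasters, but the core operation is the same per-pixel subtraction.

```python
import numpy as np

# Toy 2x2 rasters of elevations in meters (illustrative values only).
dtm = np.array([[100.0, 101.0],
                [102.0, 103.0]])   # bare-earth terrain elevation
dsm = np.array([[112.0, 100.5],
                [110.5, 118.0]])   # top-of-canopy surface elevation

# Canopy height = surface minus terrain; clip small negative values
# (sensor noise over bare ground) up to zero.
chm = np.clip(dsm - dtm, 0.0, None)
```

Clipping matters because over bare or open ground the DSM and DTM can differ by a few centimeters of noise in either direction, and a canopy height below zero is not physically meaningful.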
After collecting data in Anchorage and the Chugach region of Alaska, the team flew back to Kodiak and finished data acquisition in the southwest.
And of course it wouldn’t be Alaska without some wildlife. The day before leaving Kodiak, I got to see not just one bear—but a family of four! Cars were honking to scare the bears out of the road, but luckily I had enough time to snap a picture before the bears ran off into the woods. It was the perfect end to an exciting field campaign.
It’s been six years since the CYGNSS constellation was launched. Over that time, it has grown from a two-year mission measuring winds in major ocean storms into a mission with a broad and expanding variety of goals and objectives. They range from how ocean surface heat flux affects mesoscale convection and precipitation to how wetlands hidden under dense vegetation generate methane in the atmosphere, from how the suppression of ocean surface roughness helps track pollutant abundance in the Great Pacific Garbage Patch to how moist soil under heavy vegetation helps pinpoint locust breeding grounds in East Africa. Along with these scientific achievements, CYGNSS engineering has also demonstrated what is possible with a constellation of small, low-cost satellites.
As our seventh year in orbit begins, there is both good news about the future and (possibly) bad news about the present. First, the bad news. One of the eight satellites, FM06, was last contacted on 26 November 2022. Many attempts have been made since then, but without a response. There are still some last recovery commands and procedures to try, but it is possible that we have lost FM06. The other seven FMs are all healthy, functioning nominally and producing science data as usual. It is worth remembering that the spacecraft were designed for 2 years of operation on orbit, and every day since then has been a welcome gift. I am extremely grateful to the engineers and technicians at Southwest Research Institute and the University of Michigan Space Physics Research Lab who did such a great job designing and building the CYGNSS spacecraft as reliably as they did. Let’s hope the current constellation continues to operate well into the future.
And finally, the good news is the continued progress on multiple fronts with new missions that build on the CYGNSS legacy. Spire Global continues to launch new cubesats with GNSS-R capabilities of increasing complexity and sophistication. The Taiwanese space agency NSPO will be launching its TRITON GNSS-R satellite next year, and the European Space Agency will launch HydroGNSS the year after. And a new startup company, Muon Space, has licensed a next-generation version of the CYGNSS instrument from U-Michigan and will launch the first of its constellation of smallsats next year.
The CYGNSS team will continue to operate its constellation, improve the quality of its science data products, and develop new products and applications for them, with the knowledge that what we develop now will continue to have a bright future with the missions yet to come. Happy Birthday, CYGNSS!
Alex Haughton is a graduate student in the Astrophysical and Planetary Sciences department at University of Colorado Boulder studying ultraviolet instrumentation with sounding rockets. His team has traveled to Equatorial Launch Australia’s Arnhem Space Center near Nhulunbuy, Australia to launch the Dual-channel Extreme Ultraviolet Continuum Experiment (DEUCE) Sounding Rocket and observe the stars Alpha Centauri A & B in extreme ultraviolet wavelengths.
The light of a waxing gibbous moon illuminates the eucalyptus forests of East Arnhem Land. A mild breeze rustles the trees, and clouds pass overhead, first obscuring and then revealing Alpha Centauri A & B, our targets. A Huntsman spider we named “Jeremy,” recently displaced from its position guarding the detonator switch, scuttles along the ground near where the DEUCE payload stands vertical, perhaps moments away from being launched into space.
30,000 feet above this scene, a weather balloon floats, taking wind measurements and beaming them back down to the Range Control Center. The countdown is approaching T-minus two minutes, and range safety officer Brittany Empson concentrates on the wind readings. While the overall velocity of the winds has been low enough to continue, the variability of the winds this evening has been too high to launch – the tail end of a storm over Indonesia just clipping the Gove Peninsula where the launch range stands. Unless the variability goes down, Brittany will have to call for a hold, resetting the countdown to T-minus three minutes. Five seconds before she must make the call, the variability number drops into the green.
“RSO Check Item 140,” she announces. She continues watching the data come in – if it goes red anytime in the next two minutes, the count must be reset. Next to her, campaign manager Max King begins polling various people to “Report GO Status.” There’s a rhythm to it, exactly as you imagine from the movies:
“ACS?”
“Go.”
“CUF?”
“Go.”
“NFORSe?”
“Go.”
“PTM?”
“Go.”
And so on and so forth. A couple hundred meters away from the Range Control Center in the Command Uplink Facility (CUF), where the science team sits, chills run down my spine, and I do my best not to get emotional. Our team has been building to this moment for at least the past year, and more personally this night represents a dream come true for me. I’ve watched the two previous payloads launch, and while both of them were awesome, this time I’m in the room, and I have things to do. It’s real.
Dr. Brian Fleming, the principal investigator (the one who proposed the project and won funding to make it happen), announces “Go” for us. Also in the room are Emily Witt, the senior graduate student on the launch, Alex Sico, our technician, and Chris “Hox” Hoxworth, who runs the uplink equipment in the CUF and whose calming voice has guided many a graduate student through this stressful situation over the years. With 30 seconds left in the countdown, we scurry outside the CUF; our next responsibilities are one minute after launch.
We can’t see the rocket from our vantage point, as it is hidden behind trees, but we know in which direction it lies. The speakers on the range count out the last ten seconds, and the night seems to hold its breath in anticipation: “Ten, nine, eight, seven, six, five, four, three, two, one…”
The light comes first, a bright flare revealing the trees and the trucks, trailers and radar dishes, and then just as the top of the payload crests the trees the resounding wall of sound crashes into our ears.
BOOM.
The rocket flies up, up, and away, the first stage burning through its fuel in a mere six seconds before silence engulfs the night again and only an echo, perhaps real, perhaps imaginary, of that first guttural roar is left bouncing through my head. Emily and Brian quickly dash back in to get to their stations, but I linger a few seconds longer and catch the second stage of the rocket igniting before following them.
I check the screen I am assigned to watch: the numbers look not just good but great – the temperatures and voltages are as they should be, and even the vacuum gauge is reading surprisingly low. By 77 seconds into the flight the vacuum gauge finally starts increasing slightly before the shutter door opens and it drops back to zero. At this point the payload has separated from the two rocket motors and the Attitude Control System (ACS) begins pointing us towards our star. It appears on the touchscreen Emily uses to steer, and she leans into her screen, eager to move it to the correct place. She can’t quite yet, though: She has to wait for the ACS engineer, Brittany Barrett, to give her the go-ahead. Excruciating seconds pass.
“ACS on target and ready for uplinks,” reports Ms. Barrett.
Emily quickly begins steering the telescope’s slit to align with the star, fingers flying over the screen: Target, Set, Send. Data is now coming in on the detector screen, and Brian begins reading out count numbers: more is better.
“132… 214… 189… 488… 603… 488… 603,” he reports. On the screen Emily has the star roughly aligned with the slit, but our pointing is a bit bouncy, so she is consistently nudging it back into line. Finally, a few minutes in, it largely settles where it is supposed to. On the data screen we watch as photons hit the detector, concentrating on the area where there should be spectral lines, the indication that we are getting data from the star and that the data will have interesting science results.
“There!” exclaims Brian, his finger indicating a region with more counts. “That’s a line. We have data!” There is still some tension in the room: Emily continues to nudge the star to get the best results, and Brian continues hunting for other obvious lines, but relief is starting to set in. We are getting what we came here for.
Seven minutes and six seconds after takeoff the shutter door closes. The payload has reached its apogee of 258 kilometers and is falling back down to Earth. It will deploy a parachute and hopefully land gently (some landings are gentler than others). We leave the CUF and head for the RCC for handshakes, hugs, and general celebration. Somewhere, Jeremy the Huntsman spider crawls away, seeking a hunting location unoccupied by space goers. The work isn’t done, but the scary part is. Success.
This is it. Launch night.
Sounding rocket scientists are the space cowboys of NASA. Satellite missions like the James Webb Space Telescope (JWST) do incredible science but are much more expensive and take much longer (almost $10 billion and 30 years) to build and launch into space. One of the main reasons for that is that JWST is designed to not fail. It will be in space taking data for over ten years, and therefore a huge amount of testing and redundancy is needed to ensure reliability and longevity.
Sounding rockets, on the other hand, take on a much larger amount of risk. I haven’t yet attended a failed sounding rocket launch, but stories percolate throughout the community. Once a rocket went off track, so the self-destruct system (yes, this is a real thing in sounding rocket launches at White Sands Missile Range) was activated. Another time lightning struck a payload as it was sitting on the ground and it flew horizontally through a wall. Less dramatically, it’s possible for the telescope to go out of focus on launch, or for the star-tracking system to fail, leaving us unable to find the target in the five minutes allotted to us.
That’s right, five minutes. A sounding rocket flight takes only 15 to 20 minutes, of which about five are used to collect science data. Once that window opens up, the graduate student uses a computer-aided guidance system to steer the intended target into the telescope. The time it takes to do that is seconds shaved off of the time collecting data. If a student takes 30 seconds, that’s a tenth of the potential data lost. So the pressure to perform is real.
In return for more risk and shorter flight times, sounding rockets provide a fast and cheap route to prove the viability of cutting-edge instrumentation. A sounding rocket program costs about $1 million a year to run and launches a rocket about once a year. When you put a new instrument on a project like JWST, you want proof that the technology will work in space as advertised. Sure, there are vacuum chambers and radiation chambers and other laboratory equipment on the ground that attempt to mimic the conditions of space, but none of them are as good a test as going there. Sounding rockets go there and use the latest and greatest to do new science. If the instrument (or rocket) fails, we can rebuild and try again at no great cost. If it succeeds, the instrument has been proven to work in space and can now be used on bigger but lower-risk projects like JWST.
So as we prepare to launch the Dual-channel Extreme Ultraviolet Continuum Experiment (DEUCE) tonight to observe Alpha Centauri A & B, there is a real chance that things go wrong. Everyone involved has done this before—it’s a levelheaded crew steering this operation. But the jitters are real. Catch you on the flip side.
We learned in school that plants take in CO2 and water and use light to drive photosynthesis to grow. But what you may not know is that as part of the process of photosynthesis plants also emit light, called chlorophyll fluorescence. The fluoresced light provides information about the rate of photosynthesis and plant responses to stress. Although the fluoresced light is very dim, we can use sensitive instruments to measure that fluorescence, and this can be done even with satellite instruments in space.
Arctic tundra is the coldest ecological community. It circles the high latitudes of the Northern Hemisphere, a region experiencing strong climate changes that affect the growth of tundra plants. Much of the tundra is remote and very hard to reach, so it is difficult for us to know just how it is responding to climate change. Satellites flying overhead can provide information about the tundra across the entire region, even in all of those difficult-to-reach places. In our project, “Clarifying linkages between canopy SIF and physiological function for high latitude vegetation,” we want to learn how to use the fluorescence signal to describe the functioning of the tundra ecosystem so that we can understand how the diverse tundra vegetation is responding to climate change and make the best use of satellite images of this region. Our project is part of the NASA Terrestrial Ecology program’s Arctic-Boreal Vulnerability Experiment (ABoVE), a large-scale field study in Alaska and western Canada whose overall goals are to make use of NASA technology to gain a better understanding of ecosystems at high latitudes, their responses to environmental change, and the effects of those changes.
This took our team—from the University of Maryland Baltimore County, NASA’s Goddard Space Flight Center, and the University of Texas El Paso—north to Utqiaġvik (formerly Barrow), Alaska, on the shores of the Arctic Ocean. Utqiaġvik, the northernmost city in the United States, is on the lands of the Iñupiat people. There are no roads to Utqiaġvik, so all of our equipment had to be shipped up by air.
The goal of our June campaign was to install automated sensors to capture the springtime green-up of the tundra. Even though it is June, after the official first day of summer, spring is just starting in Utqiaġvik. There is ice in the ocean (Figure 1), the snow is just melting off the land, and the tundra is brown. Our instruments include the FLoX (Fluorescence Box), which measures the reflected light and solar-induced chlorophyll fluorescence of patches of the tundra (Figures 2, 3 and 4), and the monitoring PAM (MoniPAM), whose probes illuminate small patches of leaves or moss with controlled pulses of light to measure fluorescence and photosynthetic processes at the leaf level (Figure 5). These instruments automatically measure the fluorescence throughout the day, to observe the effects of varying light levels and temperatures, and through the course of the growing season as the tundra plants grow. The FLoX gives us measurements that are similar to the kinds of data we can get from satellites. Making these measurements on the ground lets us know exactly what we are looking at, which helps us better understand what the data mean.
Our instruments are deployed at existing flux tower sites. Flux towers measure water, heat, and carbon dioxide exchange between the ground and the atmosphere as well as provide weather data. The flux towers were about a mile from the nearest road, so all of our equipment had to be backpacked into the sites (Figures 6 and 7). It is a soggy hike in because this time of year the tundra is very wet, since the snow has just melted and the soil is still frozen keeping the water from seeping into the ground (Figure 8).
One site is the Department of Energy Next Generation Ecosystem Experiment (NGEE) Arctic flux tower and here we are measuring a drier tundra site (Figures 2 and 4). The other site is the National Science Foundation’s National Ecological Observatory Network (NEON) flux tower (Figure 3), which is a wetter site. We use the flux data to determine photosynthesis rates to compare with our measurements of fluorescence to develop approaches for relating remotely sensed optical measurements to tundra ecosystem productivity.
The trip wasn’t all work: Petya Campbell and students were able to attend a Nalukataq, the Iñupiat whaling festival, and see the traditional blanket toss that throws the blanket dancer high into the air (Figure 9).
We will return in August at the time of peak growth of the tundra to collect further measurements of fluorescence and productivity to add to the seasonal descriptions of fluorescence from these automated sensors.