Air Pollution Cuts Three Years Off Lifespans In Northern China

There are currently an estimated 4.5 billion people around the world exposed to levels of particulate air pollution that are at least twice what the World Health Organization considers safe. Yet the impact of sustained exposure to pollution on a person’s life expectancy has largely remained a vexingly unanswered question—until now.

A study published Sept. 11 in the Proceedings of the National Academy of Sciences finds that a Chinese policy is unintentionally causing people in northern China to live 3.1 years less than people in the south, due to air pollution concentrations that are 46 percent higher. These findings imply that every additional 10 micrograms per cubic meter of particulate matter pollution reduces life expectancy by 0.6 years. The elevated mortality is entirely due to an increase in cardiorespiratory deaths, indicating that air pollution is the cause of reduced life expectancies in the north.

“These results greatly strengthen the case that long-term exposure to particulate air pollution causes substantial reductions in life expectancy. They indicate that particulates are the greatest current environmental risk to human health, with the impact on life expectancy in many parts of the world similar to the effects of every man, woman and child smoking cigarettes for several decades,” said study co-author Michael Greenstone, director of the Energy Policy Institute at the University of Chicago and the Milton Friedman Professor in Economics, the College and the Harris School. “The histories of the United States, parts of Europe, Japan and a handful of other countries teach us that air pollution can be reduced, but it requires robust policy and enforcement.”

The study exploits China’s Huai River policy, which provided free coal to power boilers for winter heating to people living north of the river and provided almost no resources toward heating south of the river. The policy’s partial provision of heating was implemented because China did not have enough resources to provide free coal nationwide. Additionally, since migration was greatly restricted, people exposed to pollution were generally not able to migrate to less polluted areas. Together, the discrete change in policy at the river’s edge and the migration restrictions provide the basis for a powerful natural experiment that offers an opportunity to isolate the impact of sustained exposure to particulate air pollution from other factors that affect health.

“Unveiling this important information helps build the case for policies that ultimately serve to improve the lives of the Chinese people and the lives of those globally who suffer from high levels of air pollution,” said study co-author Maigeng Zhou, deputy director of the National Center for Chronic and Noncommunicable Disease Control and Prevention in the Chinese Center for Disease Control and Prevention.

Overall, the study provides solutions to several challenges that have plagued previous research. In particular, prior studies rely on research designs that may be unlikely to isolate the causal effects of air pollution; measure the effect of pollution exposure over relatively short periods (e.g., weekly or annually), failing to shed light on the effect of sustained exposure; examine settings with much lower pollution concentrations than those currently faced by billions of people in countries including China and India, leaving questions about their applicability unanswered; and measure effects on mortality rates while leaving the full loss of life expectancy unquantified.

“The study’s unique design provides solutions to several challenges that have been difficult to solve,” said co-author Maoyong Fan, an associate professor at Ball State University. “The Huai River policy also provides a research design that can be used to explore a variety of other questions about the long-run consequences of exposure to high levels of pollution.”

The study follows on an earlier study, conducted by some of the same researchers, which also utilized the unique Huai River design. Despite using data from two separate time periods, both studies revealed the same basic relationship between pollution and life expectancy. However, the new study’s more recent data covers a population eight times greater than the previous one. It also provides direct evidence on smaller pollution particles that are more often the subject of environmental regulations.

“This new study provides an important opportunity to assess the validity of our previous findings. The striking finding is that both studies produced remarkably similar results, increasing our confidence that we have uncovered the causal relationship between particulate air pollution and life expectancy,” said Avraham Ebenstein, a lecturer in the Department of Environmental Economics and Management at Hebrew University of Jerusalem and an author of both studies.

Since the earlier paper, China has increased its efforts to confront its air pollution challenge. China is switching its primary source of heating from coal-fired boilers to gas-fired or electric units, and it has shut down many polluting plants. The consequence is that particulate air pollution in some of China’s most polluted cities, such as Beijing, has improved significantly.

“Our findings show that these changes will bring about significant health benefits for the Chinese people in the long run,” said co-author Guojun He, an assistant professor at The Hong Kong University of Science and Technology. “If all of China were brought into compliance with its Class I standard for PM10 (40 micrograms per cubic meter), more than 3.7 billion life-years would be saved.”

Introducing the Air Quality-Life Index

Importantly, the results from this paper can be generalized to quantify the number of years that air pollution reduces lifespans around the globe—not just in China. Specifically, Greenstone and his colleagues at EPIC used the finding that an additional 10 micrograms per cubic meter of PM10 reduces life expectancy by 0.6 years to develop a new pollution index, the Air Quality-Life Index. The index allows users to better understand the impact of air pollution on their lives by calculating how much longer they would live if the pollution in the air they breathe were brought into compliance with national or WHO standards. It also serves as an important complement to the frequently used Air Quality Index, which is a complicated function of air pollution concentrations and does not map directly to long-term human health.
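The index's core arithmetic follows directly from that coefficient. A minimal sketch of the calculation, assuming the study's linear relationship with no lower threshold (the function name is illustrative, not the official AQLI implementation):

```python
# Life expectancy gained if PM10 were reduced to a given standard,
# using the study's estimate of 0.6 years of life expectancy lost
# per additional 10 micrograms per cubic meter of PM10.
YEARS_PER_10_UG = 0.6  # coefficient from the PNAS study

def life_years_gained(current_pm10, standard_pm10):
    """Extra life expectancy (years per person) if PM10 fell to the standard."""
    excess = max(current_pm10 - standard_pm10, 0.0)  # no credit below the standard
    return YEARS_PER_10_UG * excess / 10.0

# Example: a city at 150 micrograms per cubic meter brought down to a
# 20 microgram-per-cubic-meter guideline
print(life_years_gained(150.0, 20.0))  # ≈ 7.8 years
```

The same linear rule underlies the study's headline numbers: the north's 46 percent higher concentrations translate into roughly three years of lost life expectancy.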

“The AQLI uses the critical data and information gathered from our China research and applies it to every country, allowing the billions of people around the world who are exposed to high air pollution levels to estimate how much longer they would live if they breathed cleaner air,” said Greenstone.

Ancient Tree Reveals Cause Of Spike In Arctic Temperature

A kauri tree preserved in a New Zealand peat swamp for 30,000 years has revealed a new mechanism that may explain how temperatures in the Northern Hemisphere spiked several degrees centigrade in just a few decades during the last global ice age.

Unexpectedly, according to new research led by scientists from UNSW Sydney and published today in Nature Communications, it looks like the origin of this warming may lie half-a-world away, in Antarctica.

Rapid warming spikes of this kind during glacial periods, called Dansgaard-Oeschger events, are well known to climate researchers. They are linked to a phenomenon known as the “bipolar seesaw”, where increasing temperatures in the Arctic happen at the same time as cooling over the Antarctic, and vice versa.

Until now, these divergences in temperature at the opposite ends of the Earth were believed to have been driven by changes in the North Atlantic, causing deep ocean currents, often referred to as the ocean “conveyor belt”, to shut down. This led to warming in the Northern Hemisphere and cooling in the south.

But the study, which examines a specific Dansgaard-Oeschger event that occurred around 30,000 years ago, suggests Antarctica plays a role too.

The paper describes how the researchers used a detailed sequence of radiocarbon dates from an ancient New Zealand kauri tree to precisely align ice, marine and sediment records across a period of greatly changing climate.

“Intriguingly, we found that the spike in temperature preserved in the Greenland ice core corresponded with a 400-year-long surface cooling period in the Southern Ocean and a major retreat of Antarctic ice,” said lead author and UNSW scientist Professor Chris Turney.

“As we looked more closely for the cause of this opposite response we found that there were no changes to the global ocean circulation during the Antarctic cooling event that could explain the warming in the North Atlantic. There had to be another cause.”

A clue to what might be going on if the oceans weren’t involved appeared in lake sediments from the Atherton Tableland, Queensland. The sediments showed a simultaneous collapse of rain-bearing trade winds over tropical northeast Australia.

It was a curious change, so the researchers turned to climate models to see if these climate events might somehow be linked.

They started by modelling the release of large volumes of freshwater into the Southern Ocean, exactly as would happen with rapid ice retreat around the Antarctic.

Consistent with the data, they found that there was cooling in the Southern Ocean but no change in the global ocean circulation.

They also found that the freshwater pulse caused rapid warming in the tropical Pacific. This in turn led to changes to the atmospheric circulation that went on to trigger sharply higher temperatures over the North Atlantic and the collapse of rain-bearing winds over tropical Australia.

Essentially, the model showed the formation of a 20,000 km long “atmospheric bridge” that linked melting ice in Antarctica to rapid atmospheric warming in the North Atlantic.

“Our study shows just how important Antarctica’s ice is to the climate of the rest of the world and reveals how rapid melting of the ice here can affect us all. This is something we need to be acutely aware of in a warming world,” Professor Turney said.

It also showed how deeply the climate was linked across great distances, said fellow author and climate modeller Dr Steven Phipps of the University of Tasmania.

“Our research has revealed yet another remarkable example of the interconnections that are so much a part of our climate system,” Dr Phipps said.

“By combining records of past events with climate modelling, we see how a change in one region can have major climatic impacts at the opposite ends of the Earth.”

Climate Lessons from California

Stanford, Calif. — California faces serious risks from climate change. Some are already being felt, like the severe heat this summer and recent episodes of extremely low snowpack in the mountains, which the state depends on for much of its water. Those are among the key messages in a new climate science report now under review in the White House. The good news is that California has been working hard to catch up with the climate change that has already happened, and to get ahead of what is still to come.

The past five years have painted a clear picture of what is in store for California, according to numerous scientific studies that underpin the new assessment: Rising temperatures will bring more frequent and severe hot spells, intensifying heat stress; more precipitation will fall as rain rather than snow, increasing storm water runoff; snow that does fall will melt earlier in the year, leaving less for the warm, dry season; and more moisture will be drawn out of soils and vegetation, increasing stress on crops and ecosystems. All of this will lead to more frequent and severe water deficits, punctuated by wet periods with increasing flood risk.

Add rising sea levels, more extensive flooding during storm surges and the acidification of the coastal ocean, and California faces a phalanx of climate-related dangers to human health, agriculture, industry, economic productivity, and terrestrial and marine ecosystems.

As the new report makes clear, California is not the only state facing such risks. However, California has been particularly ambitious in its efforts to reduce greenhouse gas emissions and to build resilience in the face of climate change uncertainty. The state’s hard work over the past two decades has yielded several lessons for cities, states and countries that face intensifying climate-related stresses.

The first is that it is possible to reduce greenhouse gas emissions while also enjoying a thriving economy. Since 2001, California’s economy has grown, while its greenhouse gas emissions have fallen. The state recently renewed its landmark cap-and-trade program, which limits total statewide emissions while allowing a marketplace to determine the price polluters must pay. The goal is to reduce greenhouse gas emissions to 40 percent below 1990 levels by 2030.

This goal is ambitious and provides a powerful example for those seeking to simultaneously create jobs, stimulate innovation and reduce emissions. But the reduction still won’t be enough to stabilize the climate. That will require bringing global greenhouse gas emissions effectively down to zero, so still greater reductions will be needed to meet the United Nations targets.

The second lesson is that adapting to climate change requires understanding how the climate is changing. California has mandated regular scientific assessments of historical changes and possible future trajectories. This scientific process has yielded deep insights about the nature of what’s happening in California. And those insights have provided a foundation for decision making, like incorporating trends in temperature, snowpack and runoff into managing the state’s crucial groundwater reserves, and the planning and operation of its infrastructure.

In contrast to the obfuscation and denial about climate science by the Trump administration and much of the Republican congressional caucus, California has invested heavily in understanding climate change and in finding “climate-smart” solutions that can create jobs, improve energy efficiency and decrease emissions, while also building resilience to the climate change that has happened and to a range of possible future outcomes.

The third lesson is that, despite all of the progress, we need to work harder to ensure equity and justice for all residents in the face of a changing climate. It is well documented that poverty increases vulnerability to climate-related stresses. For example, during severe heat events, those who cannot afford air-conditioning or who must labor outdoors are considerably more vulnerable than those who have access to indoor air-conditioning. Likewise, during the recent drought, thousands of Californians suffered without running water for months, highlighting the severe inequality and associated vulnerability in the state. The government has sharpened its focus on ensuring that revenues from the cap-and-trade program benefit disadvantaged communities, but environmental justice remains a critical concern.

The United States recently officially informed the United Nations that it plans to withdraw from the Paris climate agreement, as President Trump vowed to do. His rejection of climate science and the international community’s efforts to address the intensifying risks of global warming stands in stark contrast to the extensive scientific evidence that climate change is now being felt across America.

In response to President Trump’s abdication of international climate leadership, many states, cities and corporations are searching for ways to fill the void. California offers lessons of what has worked, and what is still left to be done.

Collaboration meets innovation in transformative Stanford environmental projects

What do bird flight mechanics and renewable energy technology have to do with each other? By combining ongoing Stanford research on both, researchers hope to cut down on the number of birds and bats that collide with wind turbines’ spinning blades.

A Stanford grant combines recent research on bird visual flight control and vertical-turbine technology to lessen wind turbines’ effects on bird populations. (Image credit: MarkoGrothe/Pixabay)

This and nine other interdisciplinary projects focused on developing environmental solutions will receive funding from the Stanford Woods Institute for the Environment’s 2017 Environmental Venture Projects (EVP) and Realizing Environmental Innovation Program (REIP) grants. Teams from across campus will collaborate on research aimed at developing innovations ranging from coral-safe sunscreen to a smartphone app that motivates pro-environmental behavior change.

“This year’s awardees represent an inspiring range of transformative approaches to cross-cutting environmental issues, geographies and disciplines,” said Nicole Ardoin, co-chair of the selection committee and an associate professor with a joint appointment in the Graduate School of Education and the Stanford Woods Institute.

Since the EVP program began in 2004 and the REIP program in 2015, the Stanford Woods Institute has awarded more than $13 million in grants to 84 research teams representing all seven of Stanford’s academic schools and 47 departments. Working in more than 28 countries, these projects have garnered more than $51 million in follow-on funding, enabling researchers to build on and advance their initial findings.

Environmental Venture Projects

EVP grants support early-stage, high-risk research projects that identify transformative solutions. The projects selected for 2017 will each receive grants ranging from $29,909 to $200,000 over the next two years (lead principal investigators listed first):

Coral-Safe Sunscreen: William Mitch (Civil and Environmental Engineering) and John Pringle (Genetics). Snorkeling visits to coral reefs increase demand for preserving them. However, some sunscreens used by snorkelers and others have been associated with severe declines in coral reefs, while other sunscreens are marketed explicitly as “coral-safe.” There is little evidence to justify either of these claims. This project will characterize the chemical and biological mechanisms by which sunscreens may harm corals in order to guide the development and marketing of effective sunscreens that are not toxic to corals.

Menu Messaging to Reduce Meat Consumption: Greg Walton (Psychology), Neil Malhotra (Graduate School of Business) and Thomas Robinson (Pediatrics). How can we curb the current norm of environmentally unsustainable levels of meat consumption in developed countries? Research has shown that learning about the decline in meat consumption can lead people to order fewer dishes containing meat. Cooperating with chain restaurants, this project will test the effectiveness of incorporating norm-based messaging into restaurant menus and web-based meal ordering platforms for promoting consumption of plant-based dishes among large numbers of people.

Water and Energy Connections: Ram Rajagopal (Civil and Environmental Engineering) and Bruce Cain (Political Science). Both the energy and water sectors face increasing demands. As smart water meters replace analog meters, what can water managers and end-users learn from energy smart metering? What demand responses in one system correlate to responses in the other? What management and behavioral strategies can reduce use of energy and water? Using data analytics, social science, modeling and policy expertise, this project will apply electricity smart meter data methods to water smart meter data; explore relationships between household water and electricity consumption; and pilot a water/energy feedback experiment.

Respiratory Disease Solution: Catherine Gorle (Civil and Environmental Engineering) and Steve Luby (Infectious Diseases and Geographic Medicine). Respiratory diseases are a leading cause of child death globally, killing approximately 1.3 million children per year. Poor indoor air quality is a major cause of these infections and there are indications that improving ventilation could reduce respiratory illnesses. This project will develop and validate a computational framework for predicting ventilation rates in a variety of low-income household layouts and ventilation designs. The framework will provide essential information for analyzing results of an initial randomized control trial to evaluate the impact of ventilation interventions in homes in Bangladesh, and it will support the formulation of global ventilation recommendations.

Plant Life Performance: Helen Paris (Theater and Performance Studies), Leslie Hill (Theater and Performance Studies) and Seung Yon Rhee (Plant Biology, Carnegie Institution for Science). This project will culminate in the composition of a site-specific performance for public gardens and landscapes that will educate audiences about conservation, and examine the environmental impacts of our relationship with plants. The researchers will highlight plant life particularly vulnerable to extinction, and create a performance piece in which audiences are led on an immersive and intimate journey of music, plant sounds and lyrical text exploring the world of plants and the role they play in our lives.

Natural Flood Mitigation: Jack Baker (Civil and Environmental Engineering) and Gretchen Daily (Biology). Deforestation and unplanned development increase the risk of flood damage to lives and property, particularly in dense urban and coastal areas. Decision-makers around the world want to incorporate into planning information about the conditions in which natural ecosystems and land management can help mitigate flood risks. However, there is currently no tool to perform a relevant, rapid assessment. This project will couple an existing global flood risk model and a damage estimation model to assess ecosystem flood mitigation service and value.

Environmental Behavior Change App: James Landay (Computer Science) and Alia Crum (Psychology). Solutions to global environmental and health challenges, such as obesity and climate change, will require significant behavior changes. This project aims to motivate pro-environmental and healthy behavior change by visualizing behavioral goals and their progress on smartphones. Researchers will study the effect of an app they developed that combines sustainability and fitness behavior tracking with multi-chapter narratives. The app’s continually visible display will push users to achieve their sustainability and fitness goals.

Tobacco Labeling Assessment: Judith Prochaska (School of Medicine) and Eric Lambin (School of Earth, Energy & Environmental Sciences). Despite declines in the prevalence of cigarette smoking in the U.S., the number of smokers has remained relatively stable at around 40 million due in part to population growth. Another factor: Many tobacco companies attempt to decrease negative perceptions by promoting their corporate social responsibility and emphasizing that their brands are “environmentally friendly.” This project will use a randomized experimental design to examine the effect of pro-environment product labeling on adults’ tobacco-related perceptions and to identify effective public health counterstrategies.

Realizing Environmental Innovation Program

REIP is designed to help projects that demonstrate promising solution approaches move from the discovery phase of research to the next stages of solution validation and translation. The projects selected for 2017 will each receive $200,000 grants over the next two years (lead principal investigators listed first):

Bird-Safe Wind Turbines: David Lentink (Mechanical Engineering) and John Dabiri (Civil and Environmental Engineering). Despite the potential contribution of wind energy to emissions reductions, wind turbines have significant ecological impacts through the killing of birds and bats that collide with spinning blades. As wind energy parks expand around the globe, this ecological impact will grow proportionally. Many wind energy parks overlap with important bird corridors recognized by the Audubon Society. This project will address this sustainability roadblock by combining recent Stanford research on bird visual flight control and vertical-turbine technology.

Open Space Management Model: Nicole Ardoin (Graduate School of Education) and Deborah Gordon (Biology). This project will evaluate a solutions-focused open space management practicum course the researchers have piloted at Stanford. The practicum models how universities and land trusts might create on-the-ground conservation impact by engaging students and land managers in research to produce conservation solutions. Once evaluated at a local and regional scale, the researchers will expand the work nationally through networked partners, such as the Land Trust Alliance. They aim to develop a digital interactive atlas and other tools; expand to other conservation agencies; enhance research opportunities for students; expand, apply and share evaluative research; and develop model projects and tools to share nationally.

Manmade and natural earthquakes share shaking potential

New research shows manmade and naturally occurring earthquakes in the central U.S. share the same characteristics, information that will help scientists predict and mitigate damage from future earthquakes.

A magnitude 5.6 earthquake likely induced by injection into deep disposal wells in the Wilzetta North field caused house damage in central Oklahoma on Nov. 6, 2011. Research conducted by Stanford scientists shows human-induced and naturally occurring earthquakes in the central U.S. share the same shaking potential and can thus cause similar damage. Photo credit: Brian Sherrod, USGS

Whether an earthquake occurs naturally or as a result of unconventional oil and gas recovery, the destructive power is the same, according to a new study appearing in Science Advances Aug. 2. The research concludes that human-induced and naturally occurring earthquakes in the central U.S. share the same shaking potential and can thus cause similar damage.

The finding contradicts previous observations suggesting that induced earthquakes exhibit weaker shaking than natural ones. The work could help scientists make predictions about future earthquakes and mitigate their potential damage.

“People have been debating the strength of induced earthquakes for decades – our study resolves this question,” said co-author William Ellsworth, a professor in the Geophysics Department at Stanford’s School of Earth, Energy & Environmental Sciences and co-director of the Stanford Center for Induced and Triggered Seismicity (SCITS). “Now we can begin to reduce our uncertainty about how hard induced earthquakes shake the ground, and that should lead to more accurate estimates of the risks these earthquakes pose to society going forward.”

Induced quakes

Earthquakes in the central U.S. have increased over the past 10 years due to the expansion of unconventional oil and gas operations that discard wastewater by injecting it into the ground. About 3 million people in Oklahoma and southern Kansas live with an increased risk of experiencing induced earthquakes.

“The stress that is released by the earthquakes is there already – by injecting water, you’re just speeding up the process,” said co-author Gregory Beroza, the Wayne Loel Professor in geophysics at Stanford Earth and co-director of SCITS. “This research sort of simplifies things, and shows that we can use our understanding of all earthquakes for more effective mitigation.”

Oklahoma experienced its largest recorded earthquake in 2016, a year in which three earthquakes greater than magnitude 5.0 caused significant damage to the area. Since the beginning of 2017, the number of earthquakes of magnitude 3.0 and greater has fallen, according to the Oklahoma Geological Survey. That drop is partly due to new regulations limiting wastewater injection that came out of research into induced earthquakes.

The main figure shows the cumulative number of magnitude 3 or greater earthquakes in the central U.S. over time, and the inset shows the number per year. The dramatic rise over the last decade is thought to be caused by fluid injection related to unconventional oil and gas operations. Research has led to new regulations and decreased the potential risk in Oklahoma in 2016 and 2017. Image credit: Justin Rubinstein, USGS

Stress drop

To gauge the destructive power of an earthquake, researchers measured its stress drop: the difference between the stress on a fault before and after an earthquake, which reflects the force driving the slip. The team analyzed seismic data from 39 manmade and natural earthquakes ranging from magnitude 3.3 to 5.8 in the central U.S. and eastern North America. After accounting for factors such as the type of fault slip and earthquake depth, the results show that the stress drops of induced and natural earthquakes in the central U.S. share the same characteristics.

A second finding of the research shows that most earthquakes in the eastern U.S. and Canada exhibit stronger shaking potential because they occur on what are known as reverse faults. These types of earthquakes are typically associated with mountain building and tend to exhibit more shaking than those that occur in the central U.S. and California. Although the risk for naturally occurring earthquakes is low, the large populations and fragile infrastructure in this region make it vulnerable when earthquakes do occur.

The team also analyzed how deep the earthquakes occur underground and concluded that as quakes occur deeper, the rocks become stronger and the stress drop, or force behind the earthquakes, becomes larger.
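
Stress drop is not observed directly; seismologists commonly infer it from an earthquake's seismic moment and a source radius estimated from the corner frequency of its spectrum. A hedged sketch using the standard Brune (1970) circular-source relation, which is a textbook approach and not necessarily the paper's exact estimation procedure:

```python
def brune_stress_drop(moment_nm, corner_freq_hz, beta_mps=3500.0):
    """Estimate stress drop (Pa) from seismic moment (N*m) and corner
    frequency (Hz), assuming a Brune circular source and shear-wave
    speed beta (m/s). Source radius: r = 0.37 * beta / fc;
    stress drop: (7/16) * M0 / r**3.
    """
    r = 0.37 * beta_mps / corner_freq_hz  # source radius in meters
    return (7.0 / 16.0) * moment_nm / r**3

# Example: a magnitude ~3.3 event (M0 ~ 1e14 N*m) with a 2 Hz corner frequency
print(brune_stress_drop(1e14, 2.0) / 1e6)  # stress drop in MPa, ≈ 0.16
```

Because the inferred radius enters cubed, comparisons like those in the study depend on carefully controlling for depth and fault type, as the authors describe.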

“Both of these conclusions give us new predictive tools to be able to forecast what the ground motions might be in future earthquakes,” Ellsworth said. “The depth of the quake is also going to be important, and that needs to be considered as people begin to revise these ground-motion models that describe how strong the shaking will be.”

The scientists said that the types of rocks being exploited by unconventional oil and gas recovery in the U.S. and Canada can be found all over the world, making the results of this study widely applicable.

“As we can learn better practices, we can help ensure that the hazards induced earthquakes pose can be reduced in other parts of the world as well,” Ellsworth said.

The study’s lead author is Yihe Huang, a former postdoctoral researcher at Stanford who is now an assistant professor at the University of Michigan. The study was supported by the Stanford Center for Induced and Triggered Seismicity.

New tool developed by Stanford engineers helps parched regions plan how to replenish aquifers

Stanford engineers have developed a software tool called AquaCharge that enables planners to devise the most cost-effective ways to reuse precious water. Image credit: iStock/tuachanwatthana

The federal government reports that 40 states expect water shortages by 2024 and water worries already plague some cities across the United States. Underground aquifers that were over-tapped for years now cry out to be replenished. The problem is that the two main strategies for increasing water supplies – collecting stormwater runoff and recycling treated wastewater – are usually separate processes that can create costly and underused infrastructure.

Now two Stanford environmental engineers have developed a computational planning tool called AquaCharge that helps urban water utilities look at their local circumstances and understand how they could combine these two water supply strategies into an integrated, efficient and cost-effective system that replenishes aquifers.

This planning tool and hybrid approach are so innovative that the American Society of Civil Engineers recently honored Jonathan Bradshaw, a graduate student in civil and environmental engineering, and Richard Luthy, a professor of civil and environmental engineering, for developing AquaCharge.

“The ideas of recycling waste water and capturing stormwater are not new,” said Luthy. “What’s new here is to think about how to combine what had been separate systems into a single approach to recharge groundwater.”

Cost vs. need

Neither strategy for increasing water supplies is without drawbacks. A number of utilities in California collect stormwater, such as the rainfall that pours down mountainsides during the wet season, and channel it into big “spreading basins,” which are essentially leaky ponds that are porous enough for water to percolate back down to an aquifer. Aquifers serve as natural storage banks, holding water for future use instead of letting it wash out to the ocean.

Although this approach is effective, spreading basins require a large amount of land that is often underutilized. That’s because engineers typically designed the basins to be big enough to capture and process large volumes of water during the stormy season. As a consequence of this design, the basins remain largely idle through the dry months. By some estimates, Los Angeles’ spreading basins on average percolate only 12 percent of their theoretical annual capacity. Not all districts have access to land that can lie idle so much of the year.

Wastewater recycling poses a different set of challenges. Some utilities treat wastewater to the point that it can be used safely for agricultural irrigation or certain industrial purposes, such as circulating through the cooling towers of a power plant. Such uses reduce the burden on aquifers or other water sources.

However, regulations require that this recycled water be conveyed in a pipeline separate from drinking water pipes. Although many cities have a large potential to produce recycled water, the high cost of such separate piping systems means that only a small fraction of this potential actually gets developed. And so most treated wastewater flows back into the sea, or into rivers and streams.

A hybrid system

With these trade-offs, regions may decide that the land requirements or piping costs drive them toward one or the other system. However, other communities have developed hybrid approaches combining the two strategies.

Orange County, California, which has become a leader in groundwater replenishment, purifies its wastewater so that it is clean enough to drink – then pumps this highly purified recycled water into spreading basins to recharge the region’s underground aquifer. This is analogous to stormwater capture in the sense that the water percolates back into underground storage banks, except that the water source is purified wastewater rather than rainfall or snowmelt.

Inspired by water reuse leaders like Orange County, the Stanford researchers created AquaCharge to assist other local authorities in comparing the tradeoffs between different designs in order to find the most cost-effective system in their region.

The software looks at factors such as the availability of spreading basins and stormwater supplies, the potential to produce recycled water and options for installing recycled water pipelines.
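The trade-off AquaCharge weighs can be sketched, in drastically simplified form, as a least-cost supply problem. Everything below is hypothetical: the cost figures, capacities, and the cheapest-first model are illustrative assumptions, not AquaCharge’s actual formulation.

```python
def cheapest_mix(demand_af, options):
    """options: list of (name, unit_cost_per_acre_foot, capacity_acre_feet).
    Fills annual recharge demand from the cheapest source first, which is
    optimal only because costs here are assumed linear in volume."""
    plan, remaining, total_cost = [], demand_af, 0.0
    for name, unit_cost, capacity in sorted(options, key=lambda o: o[1]):
        take = min(remaining, capacity)
        if take > 0:
            plan.append((name, take))
            total_cost += take * unit_cost
            remaining -= take
        if remaining == 0:
            break
    assert remaining == 0, "demand exceeds available supply"
    return plan, total_cost

# Hypothetical inputs: stormwater is cheap but limited by seasonal runoff
# and basin capacity; recycled water is steady but carries treatment and
# pipeline costs.
options = [
    ("stormwater capture", 250.0, 6000.0),   # $/acre-foot, acre-feet/yr
    ("recycled water",     900.0, 10000.0),
]
plan, cost = cheapest_mix(10000.0, options)
print(plan)
print(f"${cost:,.0f}/yr")
```

A real planning run would also account for seasonal timing, pipeline routing, and basin sizing jointly rather than a simple per-unit cost, which is exactly the complexity a dedicated tool exists to handle.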

“Our method not only allows you to think about a new kind of hybrid water replenishment system,” Bradshaw said. “It also helps determine what sort of system will meet a city’s goals at the lowest cost.”

Luthy said AquaCharge could greatly improve the use and reuse of water. California, for instance, currently recycles about 15 percent of its available treated wastewater effluent. State water planners would like to double or triple that amount by 2030.

By allowing communities to make complex calculations that reveal costs and benefits of reuse strategies, Luthy says, “AquaCharge could help the state meet that goal.”

Supervolcanoes: A key to America’s electric future?

Stanford researchers show that lake sediments preserved within ancient supervolcanoes can host large lithium-rich clay deposits. A domestic source of lithium would help meet the rising demand for this valuable metal, which is critical for modern technology.

Researchers detail a new method for locating lithium in lake deposits from ancient supervolcanoes, which appear as large holes in the ground that often fill with water to form a lake, such as Crater Lake in Oregon, pictured here. Photo Credit: Lindsay Snow, Shutterstock

Most of the lithium used to make the lithium-ion batteries that power modern electronics comes from Australia and Chile. But Stanford scientists say there are large deposits in sources right here in America: supervolcanoes.

In a study published today in Nature Communications, scientists detail a new method for locating lithium in supervolcanic lake deposits. The findings represent an important step toward diversifying the supply of this valuable silvery-white metal, since lithium is an energy-critical strategic resource, said study co-author Gail Mahood, a professor of geological sciences at Stanford’s School of Earth, Energy & Environmental Sciences.

“We’re going to have to use electric vehicles and large storage batteries to decrease our carbon footprint,” Mahood said. “It’s important to identify lithium resources in the U.S. so that our supply does not rely on single companies or countries in a way that makes us subject to economic or political manipulation.”

Supervolcanoes can produce massive eruptions of hundreds to thousands of cubic kilometers of magma – up to 10,000 times more than a typical eruption from a Hawaiian volcano. They also produce vast quantities of pumice and volcanic ash that are spread over wide areas. They appear as huge holes in the ground, known as calderas, rather than the cone-like shape typically associated with volcanoes, because the enormous loss of magma causes the roof of the magma chamber to collapse following eruption.

The resulting hole often fills with water to form a lake – Oregon’s Crater Lake is a prime example. Over tens of thousands of years, rainfall and hot springs leach out lithium from the volcanic deposits. The lithium accumulates, along with sediments, in the caldera lake, where it becomes concentrated in a clay called hectorite.

Exploring supervolcanoes for lithium would diversify its global supply. Major lithium deposits are currently mined from brine deposits in high-altitude salt flats in Chile and pegmatite deposits in Australia. The supervolcanoes pose little risk of eruption because they are ancient.

“The caldera is the ideal depositional basin for all this lithium,” said lead author Thomas Benson, a recent PhD graduate at Stanford Earth, who began working on the study in 2012.

Since its discovery in the 1800s, lithium has largely been used in psychiatric treatments and nuclear weapons. Beginning in the 2000s, lithium became the major component of lithium-ion batteries, which today provide portable power for everything from cellphones and laptops to electric cars. Volvo Cars recently announced its commitment to only produce new models of its vehicles as hybrids or battery-powered options beginning in 2019, a sign that demand for lithium-ion batteries will continue to increase.

“We’ve had a gold rush, so we know how, why and where gold occurs, but we never had a lithium rush,” Benson said. “The demand for lithium has outpaced the scientific understanding of the resource, so it’s essential for the fundamental science behind these resources to catch up.”

Working backward

To identify which supervolcanoes offer the best sources of lithium, researchers measured the original concentration of lithium in the magma. Because lithium is a volatile element that easily shifts from solid to liquid to vapor, it is very difficult to measure directly and original concentrations are poorly known.

So, the researchers analyzed tiny bits of magma trapped in crystals during growth within the magma chamber. These “melt inclusions,” completely encapsulated within the crystals, survive the supereruption and remain intact throughout the weathering process. As such, melt inclusions record the original concentrations of lithium and other elements in the magma. Researchers sliced through the host crystals to expose these preserved magma blebs, which are 10 to 100 microns in diameter, then analyzed them with the Sensitive High Resolution Ion Microprobe in the SHRIMP-RG Laboratory at Stanford Earth.

“Understanding how lithium is transported in magmas and what causes a volcanic center to become enriched in lithium has never really systematically been done before,” Benson said.

Recent PhD graduate Thomas Benson examines a debris flow atop caldera lake sediments in a Mid-Miocene caldera near the Nevada-Oregon border. Photo Credit: Maxine Luckett, 2013

The team analyzed samples from a range of tectonic settings, including the Kings Valley deposit in the McDermitt volcanic field located on the Nevada-Oregon border, which erupted 16.5 to 15.5 million years ago and is known to be rich in lithium. They compared results from this volcanic center with samples from the High Rock caldera complex in Nevada, Sierra la Primavera in Mexico, Pantelleria in the Strait of Sicily, Yellowstone in Wyoming and Hideaway Park in Colorado, and determined that lithium concentrations varied widely as a function of the tectonic setting of the supervolcano.

“If you have a lot of magma erupting, it doesn’t have to have as much lithium in it to produce something that is worthy of economic interest as we previously thought,” Mahood said. “You don’t need extraordinarily high concentrations of lithium in the magma to form lithium deposits and reserves.”

Improving identification

In addition to exploring for lithium, the researchers analyzed other trace elements to determine their correlations with lithium concentrations. As a result, they discovered a previously unknown correlation that will now enable geologists to identify candidate supervolcanoes for lithium deposits in a much easier way than measuring lithium directly in melt inclusions. The trace elements can be used as a proxy for original lithium concentration. For example, greater abundance of easily analyzed rubidium in the bulk deposits indicates more lithium, whereas high concentrations of zirconium indicate less lithium.

“We can essentially use the zirconium content to determine the lithium content within about 100 parts per million,” Benson said. “Now that we have a way to easily find more of these lithium deposits, it shows that this fundamental geological work can help solve societal problems – that’s really exciting.”
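As a toy illustration of how a trace-element proxy of this kind works, one can fit a simple linear zirconium–lithium relationship. The data points and coefficients below are invented for illustration; they are not the study’s calibration, only a sketch of the inverse Zr–Li screening idea.

```python
import numpy as np

# Hypothetical paired measurements: bulk zirconium vs. melt-inclusion lithium.
zr_ppm = np.array([100.0, 200.0, 400.0, 600.0, 800.0])
li_ppm = np.array([1500.0, 1200.0, 700.0, 350.0, 120.0])

# Linear proxy model: estimate Li from the easily measured Zr content.
slope, intercept = np.polyfit(zr_ppm, li_ppm, 1)

def predict_li(zr):
    return slope * zr + intercept

# Residuals show how tightly the proxy brackets the true lithium values.
resid = li_ppm - predict_li(zr_ppm)
print(f"Li ≈ {slope:.2f}*Zr + {intercept:.0f} ppm")
print(f"max residual: {abs(resid).max():.0f} ppm")
```

With a calibration like this in hand, a geologist could screen candidate calderas from routine bulk-rock Zr analyses and reserve the laborious melt-inclusion work for the most promising targets.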

Co-authors of the paper, “Lithium enrichment in intracontinental rhyolite magmas leads to Li deposits in caldera basins,” include Matthew Coble, a research and development scientist and engineer at Stanford University, and James Rytuba of the U.S. Geological Survey. The research was partially supported by a U.S. Department of Defense NDSEG Fellowship.

Mobile-Phone Data Helps Researchers Study Exposure To Urban Pollution

What’s the best way to measure human exposure to urban pollution? Typically, cities do so by studying air-quality levels in fixed places. New York City, for example, has an extensive monitoring network that measures air quality in 155 locations.

But now a study led by MIT researchers, focused on New York City, suggests that using mobile-phone data to track people’s movement provides an even deeper picture of exposure to pollution in urban settings.

The study, based on data from 2013, broke New York City into 71 districts and found that in 68 of them, exposure levels to particulate matter (PM) were significantly different when the daily movement of 8.5 million people was accounted for.

Specifically, the flow of people into parts of midtown Manhattan, and some parts of Brooklyn and Queens close to Manhattan, appeared to increase aggregate exposure to PM in those areas. Meanwhile, the daytime movement of people away from Staten Island actually lowered overall exposure levels in that borough.
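The core idea, weighting each district’s PM level by the people actually present rather than by home population, can be sketched with two hypothetical districts. The names, concentrations, and mobility fractions below are made up for illustration and are not the study’s data.

```python
# Static vs. mobility-weighted population exposure to particulate matter (PM).
pm = {"midtown": 14.0, "staten_island": 8.0}            # ug/m^3, hypothetical
home_pop = {"midtown": 100_000, "staten_island": 500_000}

# Daytime location fractions: residents of the row district found in each
# column district during working hours (hypothetical mobility matrix).
daytime = {
    "midtown":       {"midtown": 0.9, "staten_island": 0.1},
    "staten_island": {"midtown": 0.4, "staten_island": 0.6},
}

def static_exposure(district):
    # Person-weighted exposure assuming everyone stays at their home location.
    return home_pop[district] * pm[district]

def dynamic_pop(district):
    # People actually present in a district during the day.
    return sum(pop * daytime[src][district] for src, pop in home_pop.items())

for d in pm:
    print(d, static_exposure(d), dynamic_pop(d) * pm[d])
```

In this toy setup, commuting into the high-PM district raises its aggregate exposure above the static estimate, while the low-PM district’s aggregate exposure falls, mirroring the Manhattan/Staten Island pattern the study reports.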

“The traditional way to look at pollution is to have a few measurement stations and use those to look at pollution levels,” says Carlo Ratti, a professor of the practice in MIT’s Department of Urban Studies and Planning, and director of MIT’s Senseable City Lab, where the study was conducted. “But that’s sensitive to where the [measuring] stations are. If you want to quantify exposure, you also need to know where people are.”

The researchers believe the method in the study can be applied broadly and create new levels of detail in an important realm of urban and environmental analysis.

“Up to now, much of our understanding of the impact of air pollution on population health has been based on the relationship between air quality and mortality and/or morbidity rates, in a population which is assumed to be at their home location all the time,” says Marguerite Nyhan, a researcher at Harvard’s T.H. Chan School of Public Health, who led the study as a postdoctoral researcher at the Senseable City Lab. “Accounting for the movements of people will improve our understanding of this relationship. The findings will be important for future population health assessments.”

The paper, “Exposure Track: The Impact of Mobile-Device-Based Mobility Patterns on Quantifying Population Exposure to Air Pollution,” is published this month in the journal Environmental Science & Technology.

The co-authors are Nyhan; Ratti; Rex Britter, a visiting scientist at the Senseable City Lab, who helped direct the project with Ratti; Sebastian Grauwin, a former researcher at the Senseable City Lab; Bruce Misstear and Aonghus McNabola, both engineering professors at Trinity College Dublin; Francine Laden, a professor in Harvard’s T.H. Chan School of Public Health; and Steven R.H. Barrett, the Leonardo-Finmeccanica Associate Professor in MIT’s Department of Aeronautics and Astronautics. Nyhan is the corresponding author.

To conduct the study, the researchers examined 121 days of data from April through July 2013, using many types of wireless devices from a variety of providers, and blending the phone data with pollution information from the New York City Community Air Survey.

The result, Ratti notes, is effectively “two different maps” representing exposure to PM, one showing the exposure that a static, home-based population would have, and the other showing the actual exposure levels given the dynamics of urban mobility.

By analyzing the issue in this form, the researchers believe they have demonstrated a new way for city leaders, health officials, and urban planners to obtain data on pollution levels and analyze their policy options.

“It becomes an interesting tool if you are a mayor and you want to take action,” Ratti says. “Your goal is to minimize exposure. And exposure is a key determinant of human health.”

The study, the researchers suggest, also underscores the significance of analyzing transportation systems in cities. After all, while some of the PM pollution may come from fixed industrial sources, some of it also comes from automobiles. Studies like this one could help planners identify key locations for low emissions zones, congestion charging, and other tools cities have begun using in an attempt to reduce aggregate exposure among people.

“You’ve got this interplay between moving sources of pollution and the movement of people,” Ratti observes.

The technology firm Ericsson provided data for use in the study and provides support for the Senseable City Lab.

Stanford Engineers Develop A Plastic Clothing Material That Cools The Skin

Stanford engineers have developed a low-cost, plastic-based textile that, if woven into clothing, could cool your body far more efficiently than is possible with the natural or synthetic fabrics in clothes we wear today.

Describing their work in Science, the researchers suggest that this new family of fabrics could become the basis for garments that keep people cool in hot climates without air conditioning.

“If you can cool the person rather than the building where they work or live, that will save energy,” said Yi Cui, an associate professor of materials science and engineering at Stanford and of photon science at SLAC National Accelerator Laboratory.

This new material works by allowing the body to discharge heat in two ways, which together would make the wearer feel nearly 4 degrees Fahrenheit cooler than if they wore cotton clothing.

The material cools by letting perspiration evaporate through the material, something ordinary fabrics already do. But the Stanford material provides a second, revolutionary cooling mechanism: allowing heat that the body emits as infrared radiation to pass through the plastic textile.

All objects, including our bodies, throw off heat in the form of infrared radiation, an invisible and benign wavelength of light. Blankets warm us by trapping infrared heat emissions close to the body. This thermal radiation escaping from our bodies is what makes us visible in the dark through night-vision goggles.

“Forty to 60 percent of our body heat is dissipated as infrared radiation when we are sitting in an office,” said Shanhui Fan, a professor of electrical engineering who specializes in photonics, which is the study of visible and invisible light. “But until now there has been little or no research on designing the thermal radiation characteristics of textiles.”

Super-powered kitchen wrap

To develop their cooling textile, the Stanford researchers blended nanotechnology, photonics and chemistry to give polyethylene – the clear, clingy plastic we use as kitchen wrap – a number of characteristics desirable in clothing material: It allows thermal radiation, air and water vapor to pass right through, and it is opaque to visible light.

The easiest attribute was allowing infrared radiation to pass through the material, because this is a characteristic of ordinary polyethylene food wrap. Of course, kitchen plastic is impervious to water and is see-through as well, rendering it useless as clothing.

The Stanford researchers tackled these deficiencies one at a time.

First, they found a variant of polyethylene commonly used in battery making that has a specific nanostructure that is opaque to visible light yet is transparent to infrared radiation, which could let body heat escape. This provided a base material that was opaque to visible light for the sake of modesty but thermally transparent for purposes of energy efficiency.

They then modified the industrial polyethylene by treating it with benign chemicals to enable water vapor molecules to evaporate through nanopores in the plastic, said postdoctoral scholar and team member Po-Chun Hsu, allowing the plastic to breathe like a natural fiber.

Making clothes

That success gave the researchers a single-sheet material that met their three basic criteria for a cooling fabric. To make this thin material more fabric-like, they created a three-ply version: two sheets of treated polyethylene separated by a cotton mesh for strength and thickness.

To test the cooling potential of their three-ply construct versus a cotton fabric of comparable thickness, they placed a small swatch of each material on a surface that was as warm as bare skin and measured how much heat each material trapped.

“Wearing anything traps some heat and makes the skin warmer,” Fan said. “If dissipating thermal radiation were our only concern, then it would be best to wear nothing.”

The comparison showed that the cotton fabric made the skin surface 3.6 F warmer than their cooling textile. The researchers said this difference means that a person dressed in their new material might feel less inclined to turn on a fan or air conditioner.

The researchers are continuing their work on several fronts, including adding more colors, textures and cloth-like characteristics to their material. Adapting a material already mass produced for the battery industry could make it easier to create products.

“If you want to make a textile, you have to be able to make huge volumes inexpensively,” Cui said.

Fan believes that this research opens up new avenues of inquiry to cool or heat things, passively, without the use of outside energy, by tuning materials to dissipate or trap infrared radiation.

“In hindsight, some of what we’ve done looks very simple, but it’s because few have really been looking at engineering the radiation characteristics of textiles,” he said.

A First For Direct-Drive Fusion

Scientists at the University of Rochester have taken a significant step forward in laser fusion research.

Experiments using the OMEGA laser at the University’s Laboratory for Laser Energetics (LLE) have created conditions capable of producing a fusion yield five times higher than the current record laser-fusion energy yield, provided the conditions produced at LLE are reproduced and scaled up at the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory in California.

The findings are the result of multiple experiments conducted by LLE scientists Sean Regan, Valeri Goncharov, and collaborators, whose paper was published in Physical Review Letters. Arijit Bose, a doctoral student in physics at Rochester working with Riccardo Betti, a professor of engineering and physics, interpreted those findings in a paper published as Rapid Communications in the journal Physical Review E (R).

Bose reports that the conditions at LLE would produce over 100 kilojoules (kJ) of fusion energy if replicated on the NIF. While that may seem like a tiny flicker in the world’s ever-expanding demand for energy, the new work represents an important advance in a long-standing national research initiative to develop fusion as an energy source. The 100 kJ is roughly the energy a 100-watt light bulb puts out in about 17 minutes, but in a fusion experiment at NIF that energy would be released in less than a billionth of a second, bringing the fuel a step closer to ignition conditions.
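The light-bulb comparison is simple arithmetic: the same energy divided by two very different timescales.

```python
# Back-of-the-envelope check of the 100 kJ comparison.
fusion_yield_j = 100e3    # projected fusion energy, joules
bulb_power_w = 100.0      # a 100-watt light bulb

# Time for the bulb to emit the same total energy.
seconds = fusion_yield_j / bulb_power_w
minutes = seconds / 60.0
print(f"{minutes:.1f} minutes")   # ~17 minutes

# Released over less than a nanosecond in an implosion, the same energy
# implies an enormous instantaneous power.
power_w = fusion_yield_j / 1e-9
print(f"{power_w:.1e} W")
```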

“We have compressed thermonuclear fuel to about half the pressure required to ignite it. This is the result of a team effort involving many LLE scientists and engineers,” said Regan, the leader of the LLE experimental group.

If ignited, thermonuclear fuel would unleash copious amounts of fusion energy, much greater than the input energy to the fuel.

“In laser fusion, an ignited target is like a miniature star of about a 10th of a millimeter, which produces the energy equivalent of a few gallons of gasoline over a fraction of a billionth of a second. We are not there yet, but we are making progress,” said Betti, the Robert L. McCrory Professor at the Laboratory for Laser Energetics.

In terms of proximity to the conditions required to ignite the fuel, the two recent LLE papers report that OMEGA experiments match the current NIF record when extrapolated to NIF energies. Igniting a target is the main goal of the laser fusion effort in the United States.

As part of their work, researchers carefully targeted the LLE’s 60 laser beams to strike a millimeter-sized pellet of fuel – an approach known as the direct-drive method of inertial confinement fusion (ICF).

The results indicate that the direct-drive approach used by LLE, home to the most prolific laser in the world (in terms of number of experiments, publications, and diversity of users), is a promising path to fusion and a viable alternative to other methods, including that used at NIF. There, researchers are working to achieve fusion by using 192 laser beams in an approach known as indirect drive, in which the laser light is first converted into x-rays in a gold enclosure called a hohlraum. While not yet achieving ignition, scientists at LLNL and colleagues in the ICF community have made significant progress in understanding the physics and developing innovative approaches to indirect-drive fusion.

“We’ve shown that the direct-drive method is on par with other work being done in advancing nuclear fusion research,” said Bose.

“Arijit’s work is very thorough and convincing. While much work remains to be done, this result shows significant progress in the direct-drive approach,” says Betti.

Research at both LLE and NIF is based on inertial confinement, in which nuclear fusion reactions take place by heating and compressing – or imploding – a target containing a fuel made of deuterium and tritium (DT). The objective is to have the atoms collide with enough energy that the nuclei fuse to form a helium nucleus and a free neutron, releasing significant energy in the process.

In both methods being explored at LLE and NIF, a major challenge is creating a self-sustaining burn that would ignite all the fuel in the target shells. As a result, it’s important that enough heat is created when helium nuclei are initially formed to keep the process going. The helium nuclei are called alpha particles, and the heat produced is referred to as alpha heating.

E. Michael Campbell, deputy director of LLE and part of the research team, said the results were made possible because of a number of improvements in the direct-method approach.

One involved the aiming of the 60 laser beams, which now strike the target more uniformly.

“It’s like squeezing a balloon with your hands; there are always parts that pop out where your hands aren’t,” said Campbell. “If it were possible to squeeze a balloon from every spot on the surface, there would be a great deal more pressure inside. And that’s what happens when the lasers strike a target more symmetrically.”

“If we can improve the uniformity of the way we compress our targets, we will likely get very close to the conditions that would extrapolate to ignition on NIF. This is what we will be focusing on in the near future,” says Goncharov, the new director of the LLE theory division.

Two other enhancements were made at LLE: the quality of the target shell was improved to make it more easily compressed, and the diagnostics for measuring what’s taking place within the shell have gotten better. Researchers are now able to capture x-ray images of the target’s implosion with frame times of 40 trillionths of a second, giving them information on how to more precisely adjust the lasers and understand the physics.

“What we’ve done is show the advantages of a direct-drive laser in the nuclear fusion process,” said Campbell. “And that should lead to additional research opportunities, as well as continued progress in the field.”

Bose says the next step is to develop theoretical estimates of what is taking place in the target shell as it’s being hit by the laser. That information will help scientists make further enhancements.