Engineered geothermal systems have wide potential as a renewable energy source
Toni Feder
Citation: Physics Today 71, 9, 22 (2018); doi: 10.1063/PT.3.4017
View online: https://doi.org/10.1063/PT.3.4017
View Table of Contents: http://physicstoday.scitation.org/toc/pto/71/9
Published by the American Institute of Physics
ISSUES & EVENTS

Engineered geothermal systems have wide potential as a renewable energy source

A test site in Utah will focus on tackling technical barriers.
What will it take to put geothermal
energy to use on a large scale? Iceland uses it nearly exclusively for
heat and hot water and for about a fifth
of its electricity (see related story on page
26). Many countries have geothermal
projects. But the vast stores of heat deep
beneath Earth’s surface remain largely
untapped. “If we can unlock the technologies to make extracting heat in the
subsurface technically and commercially
viable on a large scale, the promise is
huge,” says Bridget Ayling, director of
the Great Basin Center for Geothermal
Energy at the University of Nevada,
Reno. That’s why, she adds, “despite
only incremental gains over the last
40 years, the geothermal community
continues to pursue engineered geothermal systems,” or EGS, also known
as enhanced geothermal systems.
In conventional geothermal systems,
heat is harvested from hot water in deep
rocks that have natural permeability. In
EGS, by contrast, permeability has to be
engineered, usually by injecting cold
water into rocks to open existing fractures and create new ones. The water
flows through the fractures and absorbs
heat from hot rocks before being retrieved. The hot water produced can be
used to generate electricity or heat before
it is reinjected. With sufficiently deep
drilling, the EGS method could be implemented almost anywhere and could be a
widespread source of renewable energy.
But that requires technical advances, social acceptance, and cost reduction.
Aiming to tackle the technical hurdles
to realizing EGS, the US Department
of Energy in 2014 created the Frontier
Observatory for Research in Geothermal
Energy (FORGE) initiative. This past
June DOE selected the University of
Utah from among five candidates as the
steward of a dedicated site for that purpose. The Utah team, with its EGS site
located 300 kilometers south of Salt Lake
City, will receive $140 million over five
years.
In a press release announcing the
FORGE award, Secretary of Energy Rick
Perry said, “Enhanced geothermal systems are the future of geothermal energy,
and critical investments in EGS will help
advance American leadership in clean
energy innovation. Funding efforts toward the next frontier in geothermal
energy technologies will help diversify
the United States’ domestic energy portfolio, enhance our energy access, and increase our energy security.”
A test laboratory
CORE SAMPLES are studied to learn where and how to create fractures for heat reservoirs. This mostly granitic core is about 10 centimeters in diameter. (Photo: Clay Jones, Energy and Geoscience Institute, University of Utah.)
22 PHYSICS TODAY | SEPTEMBER 2018
A single vertical well, 2297 meters deep,
was drilled at the FORGE site last year.
With the new award, additional wells
will be drilled and fractures will be stimulated to create a reservoir to test and
study the full EGS process. That includes
looking at fracture predictability, monitoring seismicity, and studying rock
characteristics relating to permeability,
geochemistry, and more.
About half the FORGE funding will
go toward drilling, infrastructure, and
seismic and other maintenance. The rest will be awarded for R&D through peer-reviewed competitions to be overseen by the Utah team.

THIS UTAH SITE was selected by the Department of Energy for dedicated research on enhanced geothermal systems. The rig in the distance was used to drill a vertical well for preliminary scientific measurements. The site, about 300 kilometers south of Salt Lake City, is collocated with solar and wind-energy production. (Photo: Rick Allis, Utah Geological Survey.)
In the US, EGS projects have often
piggybacked on conventional geothermal sites that were already producing
electricity. “The plant operators don’t
want their production to be negatively
impacted. That can be scientifically limiting,” Ayling says. “FORGE is fairly
unique in that it is independent. The best
domestic and international researchers
can test and develop innovative, cutting-edge subsurface technologies. I am
hopeful that this will give us the opportunity to be bold and brave and to try
things that, so far, we have not been able
to try.”
Creating a reservoir where injected
water can be heated is among the toughest challenges in EGS. “We want to create
permeability by opening the existing
fractures,” explains FORGE principal
investigator Joseph Moore. “That’s why
one emphasis is on understanding the
stress field in the rocks.”
For adequate heat exchange, an EGS
reservoir requires a large network of
small fractures, a millimeter wide or less.
“You need an effective radiator for water
to percolate and circulate through,” says
Moore. “You have to avoid short circuiting,” in which the water flows out, via a
dominant fracture, too fast to be adequately warmed.
To maximize well productivity, the
researchers want to create multiple fracture networks in a single reservoir,
which can be several kilometers across.
One area of intensive research is isolation—stopping fractures from forming
at a given part of a borehole and then
stimulating them at successively deeper
locations. The stopping can be done, for
example, with a fibrous material that is
solid when injected and breaks down
and becomes soluble at high temperatures, says Susan Petty, whose Seattle-based company AltaRock Energy holds
the patent for that method of isolation.
(See also the interview with her at
http://physicstoday.org/petty.)
The EGS community is increasingly
embracing horizontal drilling, an approach adopted from the oil and gas industry. (See the articles in PHYSICS TODAY
by Michael Marder, Tadeusz Patzek, and
Scott Tinker, July 2016, page 46, and by
Brian Clark and Robert Kleinberg, April
2002, page 48.) Fractures often form vertically, so a reservoir with wellbores approaching horizontal—perpendicular to
the fractures—could be effective, says
John McLennan, a FORGE coprincipal
investigator. Drilling a deviated well at
great depth is tricky, and the change in
inclination needs to be gradual to accommodate the well casing, steel insertions
that maintain the well’s integrity. So far,
geothermal wells up to about 40° from
vertical are common, but McLennan and
others hope to get closer to horizontal.
“It opens up new opportunities for success,” he says.
Another focus is understanding how
the rocks interact with the injected water.
Will clays or other minerals form that
eventually block flow pathways? Will
cold injected water shrink the rocks so
the fractures widen? Ayling’s specialty
is reservoir characterization, including
fluid chemistry, hydrothermal alterations, and physical properties such as
rock strength and permeability. For
EGS, she says, it’s important to evaluate
the probable rock types and conditions
before deep drilling. “We still don’t
know what the best sites are for developing large-scale EGS. It’s a knowledge
gap.”
THE NOW-DECOMMISSIONED HABANERO PILOT PLANT in South Australia’s Cooper Basin produced geothermal power under conditions of extreme heat and pressure. The U-shaped loops allow the pipes to withstand thermal expansion and contraction when hot geothermal fluids flow through, a design that prevents damage to the wellhead. Heat exchangers and cooling towers are visible in the background. (Photo: Bridget Ayling.)

To get to useful temperatures—the DOE’s goal is 175 °C to 225 °C—requires going several kilometers deep, into hard rocks. Drill bits wear faster in those environments than in shallower shales and
other sedimentary rocks encountered by
the oil and gas industry. Electronics and
other materials for characterization and
ongoing monitoring have to withstand
higher temperatures. So while tools and
know-how are transferable from the oil
and gas industry, the EGS approach also
requires different—and sometimes innovative—instruments. “There are materials from the nuclear industry that survive high temperatures,” notes Petty.
“But we haven’t yet applied them in
drilling.”
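The “several kilometers” figure follows from the geothermal gradient. As a rough sketch (the gradients below are illustrative assumptions, not values from the article: about 30 °C/km is a typical continental average, and EGS prospects like the Utah site sit in considerably hotter crust):

```python
# Ballpark depth needed to reach DOE's 175-225 C target for an assumed
# linear, conductive geothermal gradient.
T_SURFACE = 15.0  # C, assumed mean surface temperature

def depth_km(target_c, gradient_c_per_km):
    """Depth (km) at which target_c is reached for a given linear gradient."""
    return (target_c - T_SURFACE) / gradient_c_per_km

# ~30 C/km: a typical continental average; ~60 C/km: a hot EGS prospect.
for gradient in (30.0, 60.0):
    lo, hi = depth_km(175, gradient), depth_km(225, gradient)
    print(f"{gradient:.0f} C/km: {lo:.1f}-{hi:.1f} km")
```

Even in unusually hot crust, the target window sits roughly 3 km down; at average gradients it is closer to 5-7 km.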
Australia’s Habanero site (see photo above) gave the EGS community experience working at great depth, high temperatures, and very high pressures.
During a five-month test run in 2013 that produced 1 MW of electric power,
Ayling injected chemical tracers there.
“Tracer testing is one of the few direct
methods where you can definitively
prove your wells are connected,” she
says. “You can also calculate your average fluid-flow velocity along with other
reservoir parameters.” The Habanero
project “was expensive, but it proved
EGS could be done under extremely
challenging conditions.” The economics
of scaling up to a larger 50 MW power
plant did not stack up, she adds, and the
site was decommissioned in 2016.
“The ability to create multiple-fracture
sites is still a challenge,” says McLennan.
So are steering the drill bits, cutting costs
by drilling faster, working underground
at high temperatures, and making sure
subsequent wells intersect the engineered fractures optimally to retrieve the
heated water. “That is the purpose of
FORGE—to provide a test laboratory.”
Seismicity risk and social acceptance
Studies on EGS go back to the late 1970s,
when Los Alamos National Laboratory
first looked into harvesting energy
from hot, dry rocks. Budget cuts ended
that project around 1990, but a smattering of efforts carried on around the
globe. Two EGS plants in the Alsace region of France produce energy commercially: Soultz-sous-Forêts, which began
as a scientific project, has since 1986
produced about 1.7 MW of electricity for
the French grid. And 10 kilometers
away, a plant in Rittershoffen produces
24 MW of thermal energy from high-temperature geothermal water.
Besides technical and financial challenges, seismicity can also derail EGS. In
Basel, Switzerland, for example, a project
was shuttered about a decade ago after a
magnitude 3.4 earthquake, attributed by
some to EGS, damaged buildings. And
last fall a magnitude 5.5 earthquake,
which was followed by a smaller tremor,
put a halt to an EGS project in South
Korea. In that case, says Ernest Majer of
Lawrence Berkeley National Laboratory,
it’s not clear that the EGS project caused
the earthquakes, but the earthquakes
definitely cast a shadow over international EGS efforts.
Some seismicity is unavoidable during the creation of an EGS reservoir. But
Lauren Boyd, EGS program manager at
DOE, notes that unlike fracking in the oil
and gas industry, which involves injecting more liquid than is removed, the
water injected in EGS is taken out and
reinjected in a closed loop, which reduces the seismicity risk.
Brian Carey, an engineer at GNS Science’s Wairakei Research Centre in Taupo,
New Zealand, and the International Energy Agency’s executive secretary for geothermal technical collaboration, stresses
that the sustainability of EGS depends on
community acceptance. “That is probably one of the most significant aspects.”
Turn on the heat
Interest in EGS in the US and elsewhere
was rekindled by a 2006 study, The Future
of Geothermal Energy: Impact of Enhanced
Geothermal Systems (EGS) on the United
States in the 21st Century. Commissioned
by DOE and written by an MIT-led interdisciplinary panel, the report estimated
total US EGS resources of 13 million exajoules in crystalline rock formations at
depths of 3 km to 10 km. That’s roughly
equivalent to the energy in 2 quadrillion
barrels of oil, according to study chair
Jefferson Tester, now at Cornell University. Extracting a small fraction could satisfy “a major portion of the US’s primary
energy needs,” he says.
Tester says the 18-member panel was
restricted to electricity in its consideration of EGS. But EGS has more to offer.
“We lose about 90% of its energy value
by converting the geothermal heat to
electricity instead of using the heat directly,” he says.
Other factors are also converging to
raise hopes for EGS. Science has advanced
in recent years: The geophysics and geochemistry of the subsurface are better
understood, numerical modeling is more
accurate, and new tools and know-how
from the oil and gas industry can be applied. At a time when climate change is
recognized as among the greatest threats
to the planet, the prospect of low carbon
emissions enhances the approach’s appeal. And EGS could be a steady source
to complement the fluctuating power
supplied by the wind and Sun.
Peter Meier, CEO of Geo-Energie
Suisse, notes that in Switzerland, “We have
no sea, and therefore not a lot of wind.
Solar is not enough, and hydropower is
almost to capacity.” Last year the country
decided to phase out its nuclear reactors.
“That is why we are motivated to do
EGS,” he says. If it works, his company
predicts that injection–production well pairs could provide up to 5 MW of electricity, given the local rock conditions. The
company is reassessing the project’s seismicity risk and hopes to get the green
light to start construction late next year
or in early 2020.
One key attraction of EGS is its wide
applicability. Conventional geothermal
energy relies on natural convective flow.
For the US, that’s in the western states.
“Successful EGS will extend the geographical applicability to other parts of
the country,” says McLennan. “In many
parts of the country, you can find heat,
but you don’t have water or fractures.
EGS adds both.”
Take the Northeast, says Tester. “Imagine using EGS to take heat out of the ground and use it for heating homes.” That, of
course, would require distribution systems. “You have to be reasonably close
to the source—transporting hot water
or steam long distances is technically
possible but not economically attractive.”
The country needs to transform its infrastructure in any case, says Tester, so
why not be creative. “A lot of geothermal
failures have been because of a lack of
continuity in support and patience.” The
capital investment in EGS is high, but
systems should be designed to work for
at least two decades. And, he adds, if the
US doesn’t pursue EGS seriously, “we
are going to fall behind the rest of the
world.”
Toni Feder
Drilling to Earth’s mantle
Susumu Umino, Kenneth Nealson, and Bernard Wood
Citation: Physics Today 66(8), 36 (2013); doi: 10.1063/PT.3.2082
View online: http://dx.doi.org/10.1063/PT.3.2082
View Table of Contents: http://scitation.aip.org/content/aip/magazine/physicstoday/66/8?ver=pdfcov
Published by the AIP Publishing
Drilling to Earth’s mantle

Susumu Umino, Kenneth Nealson, and Bernard Wood
Half a century after the first efforts
to drill through oceanic crust failed,
geoscientists are ready to try again.
Susumu Umino is a petrologist at Kanazawa University in Japan,
Ken Nealson is a microbiologist at the University of Southern California
in Los Angeles, and Bernard Wood is a geochemist at the University of
Oxford in the UK. All are research professors.
In 1909, Croatian seismologist Andrija Mohorovičić
made a bold prediction—that Earth consists of
distinct layers of rock above its core. On analyzing an earthquake that had struck earlier that
year, he noticed that seismic waves below a
depth of about 56 km travel a few kilometers per
second faster than those above that depth. The
abrupt change in speed marks what is now known
as the Mohorovičić discontinuity, or Moho—the
boundary between Earth’s crust and upper mantle,
where a fundamental change in the rocks’ composition is thought to occur. Below the continents, the
Moho’s depth can vary from 25 km to 60 km. But underneath the oceans the crust is much thinner, and
the Moho lies tantalizingly close to the sea floor—
typically just 6 km below it.
In 1957 a group of American geoscientists led
by Harry Hess, a founder of the theory of plate tectonics, proposed an ocean drilling program dubbed
Project Mohole designed to sample a section of crust
and shallow mantle to understand its composition,
structure, and evolution. Four years later, between
March and April 1961, the team successfully recovered a 14-m-long core of hard, oceanic crust, or
“basement,” off the coast of Guadalupe, Mexico,
below 3600 m of water and 170 m of sediment.1
That demonstration, though, was the extent of
the original project’s achievement. Ocean drilling
for petroleum was in its infancy at the time. Dynamic positioning, a technology to stabilize the position of a ship during the drilling process, didn’t yet
exist, and Project Mohole lost its funding in 1966.
The Apollo program, begun at almost the same time
as Mohole, recovered lunar samples within a
decade, yet despite a half century of undersea
drilling, no one has yet managed to reach the Moho.
Nonetheless, Project Mohole led to the establishment of an international collaboration in scientific ocean drilling that has continued for decades.
The collaboration is currently known as the Integrated Ocean Drilling Program (IODP); after October 2013 it will be known as the International Ocean
Discovery Program. And with the recent development of the Japanese vessel Chikyu, briefly described in box 1, the aspirations of generations of
Earth scientists to drill completely through the
oceanic crust, through the Moho, and into the upper
mantle are now technically feasible. In April 2012
IODP endorsed a plan that makes mantle drilling a
high-priority goal for the next decade.2 Surveys of
the three most promising drilling sites (see box 2)
are now underway.
Mohole to mantle
Much of the crust–mantle dynamics is understood.
The thin rocky ocean crust that covers almost two-thirds of Earth’s surface forms out of magma, partially melted mantle rock extruded at the mid-ocean
ridge, the world’s longest volcanic chain. Along that
chain, which wraps around the planet like the seam
of a baseball for more than 65 000 km, the tectonic
plates are continually renewed by the solidified
magma as they gradually move apart. The rate at
which they spread depends on a plate’s location and
varies from less than 1 cm/yr to as much as 17 cm/yr.
Although only a fifth of the plates in today’s mid-ocean ridge spread quickly—at more than 8 cm/yr—
more than half of today’s sea floor, and the great majority of crust subducted back into the mantle
during the past 200 million years, was produced at
those fast-spreading parts of the ridge.
As a plate moves away from the ridge, it cools
predictably with age, as outlined in figure 1. In addition, seawater enters the crust and uppermost mantle
through deep fractures, where it is heated and becomes a reactive fluid that hydrates surrounding
rock and exchanges materials with it before returning
to the ocean. While chemically altered by the fluid,
the crust and uppermost mantle may also become a
habitat for microorganisms. Elsewhere, subducting
plates drag seawater into the mantle, where the water
reduces rock viscosity and melting temperature (see
the article by Marc Hirschmann and David Kohlstedt
in PHYSICS TODAY, March 2012, page 40).
Like water, carbon is an essential material for
life, and both play critical roles in Earth’s environment. But our knowledge of the contribution of the
mantle, the largest reservoir of both components, to
the global water and carbon budgets remains totally
unconstrained in the absence of representative samples. “Representative” is the operative word. Indeed,
one can argue that samples from Earth’s mantle are
not rare. But they reach the surface heavily altered
from their original, pristine state underground.
Pieces of mantle may be entrained in the upward flow of buoyant magma and brought to the
surface during volcanic eruptions or spliced into
the continental crust during the collisions of continents. Those mantle rocks may have lost part of
their original composition through melting. They
may have become part of tectonically uplifted outcroppings that now lie exposed on the sea floor or
one of Earth’s continents; extremely large sections
of displaced crust and mantle known as ophiolite
exist in a few places around the world and are
heavily studied, partly because they’re so easily accessible. Those slices through ancient Moho are
thought to preserve a record of the melting and
crystallization process similar to what occurs at the
mid-ocean ridge (see PHYSICS TODAY, January 2005,
page 21), but because of their violent history, no
one knows where or how deep underground they
came from.
A few kilograms of fresh mantle from beneath
an intact, tectonically quiet region of oceanic crust
would provide a wealth of new information, comparable to the treasure trove obtained from the
Apollo lunar samples, on Earth’s dynamics and evolution. That’s a central motivation driving the new
Mohole to the Mantle (M2M) project.
Because of the relatively uniform architecture
of fast-spreading plates, understanding the genesis
Box 1. The challenge and the Chikyu
The deepest hole ever drilled, on the Kola Peninsula in Russia,
reaches 12.3 km underground. But under the sea floor, a more
challenging environment in which to drill, few holes exceed a
kilometer; to date, the record is 2.1 km at the Integrated
Ocean Drilling Program (IODP) site 504. The technology required to go deeper is feasible.
A hole more than a few kilometers deep tends to collapse
during drilling, under the huge load from Earth’s crust around
it; the pressure difference between crustal rock and the water
column on a hole 3 km deep is 45 MPa. To prevent or forestall
the collapse, engineers circulate drilling mud through what’s
known as a riser system that connects the borehole with an
onboard pump. One steel outer pipe surrounds an inner one,
the drill “string” through which a “core” is recovered. The mud
and drill shavings are pumped up to the ship in the space between the two pipes. With a density between seawater and
the rock surrounding the hole, the circulating mud reinforces
the walls of the hole and acts as a lubricating fluid.
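The 45 MPa figure quoted above is the lithostatic-minus-hydrostatic difference over the depth of the hole. A quick check (the densities below are assumed round numbers, not values from the box):

```python
# Rough check of the ~45 MPa pressure difference for a hole 3 km below the
# sea floor: pressure from the surrounding crustal rock minus pressure from
# a water column over the same depth interval.
RHO_ROCK = 2500.0   # kg/m^3, assumed typical crustal rock density
RHO_WATER = 1000.0  # kg/m^3
G = 9.8             # m/s^2

def pressure_difference_mpa(depth_m, rho_rock=RHO_ROCK, rho_water=RHO_WATER):
    """Lithostatic minus hydrostatic pressure at depth_m, in MPa."""
    return (rho_rock - rho_water) * G * depth_m / 1e6

print(pressure_difference_mpa(3000.0))  # ~44 MPa, consistent with the box
```

Drilling mud, with a density between those of seawater and rock, closes part of that gap, which is why circulating it through the riser keeps the hole open.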
The deep-sea drilling vessel Chikyu (“Earth” in Japanese),
shown here, was designed to reach down to about 7.5 km
under the sea floor and was launched in 2002. Although it’s
currently equipped with a riser system that can operate at
ocean depths of 2.5 km, work is under way to develop the
technology required to drill in ocean depths exceeding 4 km,
where the crust is coolest. (For more on IODP’s drilling activities,
see the report on page 22.)
Other aspects of the project also require further engineering. Strategies are needed for coring and logging rock samples
at temperatures near 250 °C, the hottest temperature at which
drilling equipment remains durable (coring refers to retrieving
the cylindrical rock samples, and logging refers to the recording of information, both as drilling proceeds and after the core
has been retrieved). Drill bits that can effectively cut hard, abrasive, hot rock at the end of high-tensile-strength drill strings
will also be essential. So will low-weight drilling mud, casing,
and cementing materials that work in hot conditions.
When sampling the rock retrieved from the Moho, researchers will no doubt be confronted with unprecedented challenges, such as precisely discerning what distinguishes living
and dead cells in anaerobic, high-pressure, high-temperature
regions and discriminating microflora from contaminants. Both
require creative and innovative solutions.
Box 2. Drilling site selection
[Map: prospective drilling sites—the original Project Mohole site, ODP site 1256, and the North Arch of Hawaii—plotted by latitude and longitude over Pacific water depth.]

Ideally, researchers would prefer to drill in the shallowest water that overlies the coldest oceanic crust. But those are conflicting requirements: In shallow water, close to the mid-ocean ridge, the crust is young and hot; farther from the ridge, it’s older and colder, but under deep water (see figure 1). Prospective drilling sites for the Mohole to the Mantle project are limited to three candidates, all in the Pacific Ocean: the original late-1950s drilling site off the coast of southern California; Ocean Drilling Program (ODP) site 1256 on the Cocos plate off the coast of Mexico; and a site just north of Hawaii.
Crust that is older than 20 million years should meet the temperature requirement, with crust cooler than
about 250 °C at Mohorovičić-discontinuity depths. The area on the Cocos plate is advantageous in that it sits
in the shallowest water and over the thinnest crust. But that crust is also warmer than 250 °C at the Moho;
major obstacles posed by the high temperature include thermal shock on the rock to be drilled and the durability of logging tools. The Baja site contains older crust, 25 million to 35 million years old, and is cooler than
250 °C at the Moho, but that site lacks modern seismic data needed to evaluate Moho characteristics. Finally,
the site off Hawaii is oldest and thus coldest, but it’s also in the deepest water. In addition, the influence of arc
volcanism and the Hawaiian plume on the crust and mantle there remains uncertain (see the article by Eugene
Humphreys and Brandon Schmandt, PHYSICS TODAY, August 2011, page 34).
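The age–temperature tradeoff in the box can be sketched with the half-space cooling model behind the article’s figure 1, using the boundary conditions quoted there (thermal diffusivity 6 × 10⁻⁷ m² s⁻¹, initial mantle temperature 1340 °C, sea-floor temperature 0 °C); the error-function solution itself is the standard thermal-diffusion result, assumed here rather than stated in the text:

```python
# Half-space cooling sketch: T(z, t) = T_mantle * erf(z / (2*sqrt(kappa*t))),
# with the boundary conditions quoted in the article's figure 1 caption.
from math import erf, sqrt

KAPPA = 6e-7              # m^2/s, thermal diffusivity (from the article)
T_MANTLE = 1340.0         # C, initial mantle temperature (from the article)
SECONDS_PER_MYR = 3.156e13

def crust_temperature(depth_m, age_myr):
    """Temperature (C) at depth_m below the sea floor for crust age_myr old."""
    t = age_myr * SECONDS_PER_MYR
    return T_MANTLE * erf(depth_m / (2.0 * sqrt(KAPPA * t)))

# Temperature near the presumed Moho depth of 6 km, for crust of various ages:
for age in (10, 20, 35):
    print(f"{age} Myr: {crust_temperature(6000, age):.0f} C")
```

Crust about 20 million years old comes out a bit below the 250 °C threshold at 6 km, while 10-million-year-old crust is well above it, in line with the box’s age requirement.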
and evolution of crust and mantle at one site can be
extrapolated with some confidence to much of
Earth’s surface. Earth scientists have well-developed
theoretical models of ways in which magma accretes
along ridges and becomes crystallized rock, and
such models can be tested using samples recovered
from cored sections of ocean basement.
Besides testing the models of crustal accretion
and melt movement, the drilled cores will be used
to resolve the geometry and intensity of the circulation of hot water in underground fractures and subduction zones and to document the limits and activities of the deep microbial biosphere. After the hole
is drilled, the several-kilometer-long tube of crust
and mantle removed, and the painstaking process
of logging the data completed, the hole will be used
for additional experiments. Sensors and other subsea equipment will be installed in the borehole to
monitor physical stresses, fluid movement, temperature, and pressure.
The mantle’s makeup
Earth comprises distinct chemical reservoirs. The
deepest is the core, which, constituting just under a
third of the planet’s mass, is metallic and thought to
consist mostly of iron. Above it resides the voluminous mantle, which contains some two-thirds of
Earth’s mass in three principal layers of rocky, silicate materials. The overlying continental and
oceanic crusts, by comparison, contain less than 1%
of Earth’s mass and are themselves composed of solidified magma. A better grasp of the mantle’s composition would enable researchers to place better
constraints on melting processes. It would also help
determine the nature of Earth’s asteroidal building
blocks and the time scales and physical conditions
at which they accreted to form the planet.
Current estimates of the chemical composition of
the silicate part of Earth come from mantle samples
delivered to the surface. Those “peridotites,” which
contain about 75% olivine [(Mg,Fe)₂SiO₄] and lesser
amounts of other silicates and oxides, vary in composition by virtue of being tectonically altered, partially
melted, or possibly infiltrated with melt from other
types of rock. Mantle peridotite typically begins to
melt at about 1300 °C and is completely molten by
1700 °C, depending on pressure. At mid-ocean ridges,
about 15% of the mantle melts and produces the volcanic rocks of the ocean floor. The residual mantle left
behind is depleted of “incompatible” elements—
those elements in the rock that melt most easily.
It’s long been recognized that silicate Earth’s
composition can be approximated as some combination of residual peridotite and the silicate melts
that form crustal rock.3 Ocean drilling offers a way
to improve our understanding of the relationship
between those two components. Currently, the best
estimate of “primitive” unmelted mantle comes
from peridotites that show the least evidence of having undergone partial melting.4 Such mantle pieces
and others whose unmelted compositions are theoretically reconstructed appear to contain certain elements in identical abundance ratios to so-called
chondritic meteorites and the Sun.
The elements of concern are the lithophilic (rock-loving) elements such as calcium, aluminum, titanium, uranium, and scandium, which remain concentrated in the silicate part of Earth. By contrast, the mantle is depleted in siderophilic (iron-loving) elements such as nickel, cobalt, platinum, and gold, which were swept into the core as the planet differentiated. A direct demonstration that the mantle contains precisely chondritic ratios of some elements and deviates from that model ratio in others is likely to have powerful implications for our understanding of planetary accretion and differentiation and the timing of those events in Earth's geological history. (See the article by Bernard Wood, PHYSICS TODAY, December 2011, page 40.)
Currently geologists have no viable alternative to the chondritic-Earth assumption. But in 2006, Maud Boyet and Richard Carlson suggested that the assumption breaks down, at least in the case of the samarium–neodymium abundance ratio in the primitive mantle.5 Their thesis, it's been argued,6 may indicate that an early-formed crust was lost to space by the bombardment of meteors early in Earth's history. If our current chondritic-ratio models are crude and wrong in detail, a central question becomes, Exactly how wrong? Pristine samples recovered by ocean drilling should help answer that question.
Physics Today, August 2013, page 38; www.physicstoday.org
Meaning of the Moho
Because the vast majority of Earth's deep interior is inaccessible by direct sampling, our understanding of its layered structure is indirect, based largely on seismic models that are calibrated using high-temperature and high-pressure experiments on mantle-borne materials.
Fast-spreading oceanic crust is seismically divided into three layers, shown in figure 2, each of which varies in the speed Vp at which compressional
Figure 1. Oceanic crust forms at the mid-ocean ridge and slowly cools as it ages and moves away from the ridge. A simple thermal diffusion model predicts the temperature of the crust and upper mantle as a function of age and depth below the sea floor. Applying appropriate boundary conditions—a thermal diffusivity of 6 × 10⁻⁷ m² s⁻¹, an initial mantle melting temperature of 1340 °C, and a sea-floor temperature of 0 °C—yields the isotherms shown in the plot. The orange bars indicate the approximate temperature at each of the prospective drilling regions—A (the Cocos plate), B (off Baja California), and C (off Hawaii), discussed in box 2—near the presumed Mohorovičić-discontinuity depth of 6 km.
waves propagate through it. Seismic waves travel at
less than 3 km/s through the top layer, which is commonly interpreted as fine-grained sediment. In the
second and third layers, respectively, the wave
speed increases up to 6.7 km/s and then levels off
around 7 km/s. In those two layers, the waves pass
from lavas and sheeted dikes—essentially vertical
intrusions of solidified magma, or basalt—and into
“gabbro,” a coarse-grained and crystalline form of
basalt rich in low-density aluminum.
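The model behind the figure 1 isotherms is the standard half-space cooling solution, T(z, t) = Tₘ · erf(z / 2√(κt)). Here is a minimal sketch using the caption's stated parameters; the plate ages below are illustrative choices, not values from the article:

```python
# Half-space cooling model for oceanic lithosphere, using the thermal
# diffusivity, initial mantle temperature, and sea-floor temperature
# quoted in the figure 1 caption.
from math import erf, sqrt

KAPPA = 6e-7          # thermal diffusivity, m^2/s
T_MANTLE = 1340.0     # initial mantle temperature, deg C
T_SEAFLOOR = 0.0      # sea-floor temperature, deg C
SECONDS_PER_MYR = 3.156e13

def temperature(depth_m: float, age_myr: float) -> float:
    """Temperature (deg C) at depth_m below the sea floor in crust age_myr old."""
    t = age_myr * SECONDS_PER_MYR
    return T_SEAFLOOR + (T_MANTLE - T_SEAFLOOR) * erf(depth_m / (2.0 * sqrt(KAPPA * t)))

# Temperature near the presumed 6 km Moho depth for a few illustrative plate ages:
for age in (10, 25, 70):
    print(f"{age:3d}-Myr-old crust at 6 km depth: ~{temperature(6000, age):.0f} deg C")
```

With these numbers the 6 km horizon cools from roughly 320 °C in 10-Myr-old crust to about 125 °C by 70 Myr, the kind of trade-off between crustal age and downhole temperature that site selection must weigh.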
Previous drilling expeditions to the hole dubbed
1256D, at one of the prospective sites for the M2M
project (see box 2), have reached more than 1.5 km
Figure 2. When a seismic wave encounters an interface between materials with different acoustic impedances, some of the wave’s energy is reflected. At left, a seismic reflection image shows the layered structure of
crust beneath the Pacific Ocean. At its right are compression-wave velocity profiles Vp of the seismic waves
propagating through the layers, with zero depth referenced to the bottom of the first, sedimentary layer. The
sharp, strong reflection below the third layer marks the Mohorovičić discontinuity, or Moho, the boundary
between Earth’s crust and its lithosphere, or upper mantle. In the plot, the gray region outlines the range of
seismic velocity profiles measured through 29-million-year-old crust of partially melted mantle rock extruded
from Earth’s mid-ocean ridge. The green and red profiles are the estimated wave velocities around holes at
drilling sites 1256 (described in box 2) and 504 from ocean-bottom seismometers. (Data from ref. 9; image
courtesy of Christopher Smith-Duque.)
below the Pacific Ocean floor, through the first two
layers.7 Of the more than 2000 exploratory holes dug
to date, that hole is also the only one that reaches
the boundary between the sheeted dikes and gabbro.8 (Another hole, dubbed 504B, at 2.1 km the
deepest oceanic hole ever made, remains too shallow for that transition because of the greater crustal
thickness where it’s located.) Judging from recovered samples, the seismic velocity gradient occurs
in the sheeted dikes and appears to be caused by
changes in the density of cracks they contain rather
than changes in rock type or grain size.
Just below the third layer is the Moho, marked
by a sharp velocity transition from about 7 km/s to
8 km/s that occurs within 500 m. It is the outermost
seismic boundary separating crust and mantle, and
its geological meaning will remain a matter of conjecture until core samples are collected. But because
the seismic transition is sharp, the discontinuity is
thought to represent the geological contact between
gabbro and the peridotites, which are more dense,
being rich in magnesium and poor in aluminum.
In other places the Moho may be a more diffuse,
complex transition zone with numerous seismic reflecting layers,9 as seen in parts of the Oman ophiolite.10 Recent seismic data in the western Pacific reveal a particularly high compression-wave velocity
Figure 3. Microbial cells, whose number is determined by fluorescent imaging, decrease in concentration with depth in sea-floor
sediment. Open circles represent cells found below the South
Pacific Gyre, where cell numbers are very low even at the sea-floor
surface thanks to the nutrient-poor nature of the gyre. Open
triangles represent cells found at the edges of the gyre, where
more organic-compound-enriched sediments exist on the sea
floor; filled circles represent cells found in sediments elsewhere on
the Pacific sea floor. The cellular concentration decreases roughly
linearly to very low levels—about a factor of 10 or more for each
order of magnitude increase in depth below the sediment surface.
(Adapted from ref. 11.)
of 8.6 km/s and strong anisotropy with respect to the direction of the waves immediately below a sharply imaged Moho. Those data, from Shuichi Kodaira
(Japan Agency for Marine-Earth Science and Technology) and colleagues, seem to suggest that the reflectors sit in a preferred orientation relative to the
mid-ocean ridge axis; that orientation is produced
by inhomogeneous stresses or shearing. The orientation could be detected in drill cores and would
help answer a fundamental question in geodynamics: whether the upwelling path of hot deep mantle
material is best modeled as a passive, plate-driven
flow or an active, buoyancy-driven flow.
A biological perspective
To a biologist, the M2M project presents several interesting issues: defining the limits of life, identifying
signatures of present or past life, and learning what
new types of metabolism may be required for organisms to grow and survive. Let’s take each issue in turn.
The limits of life. What are the extremes of temperature, pressure, and nutrient depletion to which
earthly life can adapt? As one proceeds downward
from sea-floor sediment toward the Moho, when do
the conditions become so extreme that life is no
longer detectable in recovered drill-core samples?
Current evidence from a number of different laboratories indicates that microbial life, as tough as it is,
disappears rather rapidly with sediment depth:11
Few recognizable biological cells remain intact at
depths of 2 km or more below the sea floor. (For a
2009 plot of the available data, see figure 3.)
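The depth trend in figure 3—a factor of 10 or more drop in cell count per order of magnitude of depth—is a power law with a slope near -1 on a log-log plot. A toy illustration of that scaling; the surface concentration used here is an assumed round number, not a measurement from reference 11:

```python
# Power-law decline of microbial cell counts with sediment depth:
# cells(z) ~ n_ref * (z / z_ref)**slope, with slope ~ -1 as in figure 3.
def cell_concentration(depth_m: float, n_ref: float = 1e8,
                       z_ref: float = 1.0, slope: float = -1.0) -> float:
    """Cells per cm^3 at depth_m, normalized to n_ref cells/cm^3 at z_ref meters."""
    return n_ref * (depth_m / z_ref) ** slope

# Each factor-of-10 increase in depth cuts the count by a factor of 10:
for z in (1, 10, 100, 1000):
    print(f"{z:5d} m: ~{cell_concentration(z):.0e} cells/cm^3")
```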
Evidence from nearly 40 years of studying thermophilic microbes indicates that they are capable of
growth up to about 115 °C, a temperature that corresponds to depths of 4 km to 5 km below the sea
floor.12 Temperatures at which cellular life could
survive may be much hotter than that, though. Fifty
years ago no one would have believed that life could
survive, let alone thrive, at 100 °C, the temperature
of many thermophile communities along the edges
of hot vents in the deep ocean. What’s more, although details provided by drill-core samples cannot help but be specific to Earth’s crust, the findings
may shed light on life-detection missions on other
planets or other extreme environments on Earth.
The search for biomolecules. Retrieved samples should offer ample information about conditions in which biomolecules, both organic and inorganic, survive or are destroyed past the point of
recognition. Life is surprisingly resilient, and ways
that it or its chemistry survives extreme conditions
belowground will be of great interest. That may be
especially true for inorganic metal isotopes—one of
our best tools for the study of ancient life—which
may offer new clues to early life far below where organic molecules are stable.
New types of metabolism. What kinds of energy sources, both organic and inorganic, are available in deep sediment and crust? It’s generally
thought that subsurface metabolism is predominantly lithotrophic—powered by inorganic energy
sources, typically hydrogen in the subsurface (for a
discussion, see reference 13). Actually, almost any
chemical capable of serving as a source of electrons
can, in principle, be used by life, and thus may have
somehow influenced or driven the origin and evolution of life. And understanding where deep subsurface biota reside should provide insight into
where some of the earliest life on our planet may
have been harbored. As with the search for life beyond Earth, one can’t be sure that life in the deep
unknown will conform to our definition of life at the
surface, but it should prove fascinating to ply so-called non-Earth-centric methods that can detect life
even if it turns out to be fundamentally different
from forms with which we’re familiar.14
Almost nothing can be said with certainty other
than at some depth belowground, conditions become so severe that carbon–carbon bonds become
unstable and life as we know it disappears. And although it’s not clear what will be learned, we’ll almost certainly gain a new appreciation of many aspects of life above the surface through an analysis
of what’s below it.
The optimism of Harry Hess is worth applying
to the M2M project more generally. As he said at a
US National Academy of Sciences meeting in 1958:
Perhaps it is true that we won’t find out
as much about the Earth’s interior from
one hole as we hope. To those who raise
that objection I say, If there is not a first
hole, there cannot be a second or a tenth
or a hundredth hole. We must make a
beginning.
Fortunately, because of the petroleum industry’s efforts to mine oil offshore, there exist a well-established
infrastructure and history of hole drilling. The scientific quest is to go deeper and bring up what will
hopefully be treasure troves.
References
1. For a personal account, see J. Steinbeck, Life, 14 April
1961, p. 110.
2. M. Bickle et al., Illuminating Earth’s Past, Present, and Future: The Science Plan for the International Ocean Discovery
Program 2013–2023, Integrated Ocean Drilling Program
Management International, Washington, DC (2011).
3. A. E. Ringwood, Geochim. Cosmochim. Acta 15, 257 (1959).
4. S. R. Hart, A. Zindler, Chem. Geol. 57, 247 (1986); W. F.
McDonough, S. S. Sun, Chem. Geol. 120, 223 (1995).
5. M. Boyet, R. W. Carlson, Earth Planet. Sci. Lett. 250,
254 (2006).
6. I. H. Campbell, H. St. C. O’Neill, Nature 483, 553 (2012).
7. D. A. H. Teagle et al., Proc. Integrated Ocean Drilling
Program, vol. 335, IODP Management International,
Tokyo (2012).
8. D. S. Wilson et al., Science 312, 1016 (2006).
9. M. R. Nedimović et al., Nature 436, 1149 (2005).
10. N. Akizawa, S. Arai, A. Tamura, Contrib. Mineral. Petrol.
164, 601 (2012).
11. S. D’Hondt et al., Proc. Natl. Acad. Sci. USA 106, 11651
(2009).
12. M. T. Madigan et al., Brock Biology of Microorganisms,
13th ed., Benjamin Cummings, San Francisco (2012).
13. K. J. Edwards, K. Becker, F. Colwell, Annu. Rev. Earth
Planet. Sci. 40, 551 (2012).
14. P. G. Conrad, K. H. Nealson, Astrobiology 1, 15 (2001). ■
Physics Today
Super fracking
Donald L. Turcotte, Eldridge M. Moores, and John B. Rundle
Citation: Physics Today 67(8), 34 (2014); doi: 10.1063/PT.3.2480
View online: http://dx.doi.org/10.1063/PT.3.2480
View Table of Contents: http://scitation.aip.org/content/aip/magazine/physicstoday/67/8?ver=pdfcov
Published by AIP Publishing
Super fracking
Donald L. Turcotte, Eldridge M. Moores, and John B. Rundle
Fractures in siltstone and black shale in the Utica shale, near Fort Plain, New York. (Photograph by Michael C. Rygel.)
Injecting large volumes of low-viscosity water helps energy producers extract oil and
gas from shales that tightly confine those fossil fuels. But the technique also confronts
technical and environmental issues.
After rising steadily for decades, US annual production of natural gas peaked at
22.7 trillion cubic feet (Tcf) in 1973. For a
decade thereafter, production generally
declined as gas reservoirs became depleted. It picked up for a while after that but really
took off in 2005; by 2012 natural gas production had
risen to 25.3 Tcf.
The rapid increase in the availability of natural
gas strongly influenced gas pricing. On 1 January
2000 the wellhead price was $2.60 per thousand cf.
By 1 January 2006 the price had increased to $8.00,
but by New Year’s of 2012 it was down to $2.89. The
impressive gas production increases and price decreases over the past decade or so are primarily due
to a variety of hydraulic fracturing, or fracking, in
Donald Turcotte and Eldridge Moores are Distinguished
Professors Emeriti and John Rundle is a Distinguished
Professor, all in the department of Earth and planetary
sciences at the University of California, Davis. Rundle also
holds appointments in the physics department there and
at the Santa Fe Institute in Santa Fe, New Mexico.
which large volumes of low-viscosity water are
pumped into low-permeability (“tight”) shale formations. We call that type of hydraulic fracturing “super
fracking” to distinguish it from long-established hydraulic fracturing with low volumes of high-viscosity
water.1,2
An important consequence of the drop in natural gas prices over the past several years has been
the substitution of natural gas for coal in electric
power generation plants. As a result, carbon dioxide
emissions from power plants have been reduced by
about a factor of two. That said, we do not wish to
minimize the environmental concerns associated
with high-volume fracking. The box on page 36
spells out some of the issues.
Traditional fracking has been in use for more
than 50 years. Super fracking, which, like the traditional kind, is used for oil as well as gas production,
is a relative newcomer; it arrived on the scene about
30 years ago and became economically viable
around 1997, with profound consequences, as the
natural gas numbers cited above show. Although
our focus will be on the high-volume variant, we
Figure 1. Traditional and high-volume fracking.
(a) In traditional fracking treatments, a high-viscosity
fluid creates a single hydraulic fracture through which oil
or gas (or both) migrates to the production well. (b) In
high-volume fracking, or super fracking, large volumes
of a low-viscosity liquid create a wide distribution of
hydraulic fractures. Fossil fuels can then migrate through
the fracture network to the production well. The sketch
here shows the result of a sequence of four high-volume
fracking injections. Such sequential injections would not
be possible without directional drilling, which creates a
horizontal production well in the target stratum.
will also have a few words to say about traditional
fracking. But first we turn to an examination of the
shales that house oil and gas.
Fossil fuels’ underground home
Just as sandstones are a rock equivalent of sand,
shales are a rock equivalent of mud. They can extend horizontally for more than a thousand kilometers and have a porosity of 2–20%. The shales that
are a main source for hydrocarbons are known as
black shales because of their color and organic content. Their pores are typically filled with 2–18% by
weight of carbon in organic compounds. A representative grain in a shale is less than 4 μm wide; surface-tension forces due to those fine grains strongly
restrict fluid flow.
Black shales form when large volumes of organic matter are deposited in muds beneath the sea.
If the organic carbon is to be preserved, the deposition and subsequent burial must occur under anoxic
conditions. That is one reason why some 90% of the
world’s oil originated in well-defined periods encompassing 200 million out of the past 545 million years.
The largest known region currently forming organic-rich clays—future black shales—is the Black Sea.
The environment in which sediments are deposited has a thermal gradient of something like
30 °C/km. At sufficient depth, time and heat produce oil from the organic material. That oil is located in a window 2–4 km below the surface where
temperatures range from about 60 °C to about 120 °C.
At depths of 3–6 km and associated higher temperatures of around 90–180 °C, the oil breaks down to
produce gas.3
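The quoted oil and gas windows follow from simple linear-gradient arithmetic. A sketch using the ~30 °C/km gradient from the text and a reference surface temperature near 0 °C (an assumption that reproduces the article's depth ranges; the article does not state a surface temperature):

```python
# Linear geothermal gradient: temperature rises ~30 deg C per km of burial.
GRADIENT_C_PER_KM = 30.0
T_SURFACE_C = 0.0  # assumed reference surface temperature

def temperature_at(depth_km: float) -> float:
    """Temperature (deg C) at a given burial depth (km)."""
    return T_SURFACE_C + GRADIENT_C_PER_KM * depth_km

def depth_for(temp_c: float) -> float:
    """Burial depth (km) at which a given temperature (deg C) is reached."""
    return (temp_c - T_SURFACE_C) / GRADIENT_C_PER_KM

print(f"oil window (60-120 deg C): {depth_for(60):.0f}-{depth_for(120):.0f} km")
print(f"gas window (90-180 deg C): {depth_for(90):.0f}-{depth_for(180):.0f} km")
```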
Sedimentary organic material can form oil and
gas only under anoxic conditions. Thus the deposition and burial of the organics must occur in
an environment with restricted water circulation—
otherwise, the water would oxidize the carbon in
the sediment. As noted above, the fine grains that
form shale enforce that restriction via surface-tension forces.
Natural fracking
Oil and gas formation in black shales increases fluid
pressure; the resulting hydraulic forces yield a network of fractures.4 For that natural fracking to come
about, the pore pressure must be about 85% of the
pressure generated by the weight of the overlying
rock.5 The main factors responsible for natural fractures and their orientations include tectonic activity
and the structure and mineralogy of the shale.
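The 85% threshold can be put in rough numbers. A sketch assuming a typical overburden density of 2500 kg/m³, which is an illustrative value, not one given in the article:

```python
# Lithostatic (overburden) pressure P = rho * g * z, and the ~85% pore-pressure
# threshold for natural hydraulic fracturing cited in the text.
RHO_OVERBURDEN = 2500.0  # kg/m^3, assumed typical sedimentary-rock density
G = 9.81                 # m/s^2

def lithostatic_pressure_mpa(depth_m: float) -> float:
    """Overburden pressure in MPa at a given depth below the surface."""
    return RHO_OVERBURDEN * G * depth_m / 1e6

def natural_frack_threshold_mpa(depth_m: float, fraction: float = 0.85) -> float:
    """Approximate pore pressure needed for natural fracking at a given depth."""
    return fraction * lithostatic_pressure_mpa(depth_m)

depth = 3000.0  # m, a representative black-shale burial depth
print(f"overburden at {depth/1000:.0f} km: ~{lithostatic_pressure_mpa(depth):.0f} MPa")
print(f"pore-pressure threshold: ~{natural_frack_threshold_mpa(depth):.0f} MPa")
```

With those assumed numbers, oil and gas generation must drive the pore pressure to roughly 60 MPa at 3 km before a natural fracture network forms.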
One consequence of natural fracking is a pervasive set of fractures, such as those shown in the
opening image. Although the granular permeability
in shales is low, it is sufficient to permit oil and gas
to flow to the closely spaced fractures, which provide pathways for vertical migration. The upward
movement reduces fluid pressure and takes the fossil fuels from their source in the black shale to reservoirs that can be exploited for production, or even
to the surface as oil and gas seeps.
An excellent example of the results of natural
fracking processes can be seen in the Monterey shale
in California, the source rock for major oil fields
in the Los Angeles, Ventura, Santa Maria, and San
Joaquin sedimentary basins.6 The northern Santa
Barbara Channel, separating the Santa Barbara coast
from California’s Channel Islands, is one of the
largest hydrocarbon seepage areas in the world.7 Oil
and gas leak upward through natural fractures and
tectonic faults in the Monterey shale. The most intense area of natural seepage is about 15 km west of
Santa Barbara at the Coal Oil Point seep field, where
the resulting oil slicks can be as much as 10 km long.
Centuries ago the earliest Spanish settlers and English explorers recorded the existence of beach tars in
the region.
In some cases, natural fracking has enabled the
direct extraction of fossil fuels from tight shale
reservoirs. More often, natural fractures and faults
allow the migration of oil and gas to high-porosity
reservoirs. Once trapped there, the oil and gas can
be extracted with traditional production wells.
However, the fraction of the oil and gas that is
recovered from the production reservoir is low, typically 20–30%.
Energy producers have tried several methods
to enhance recovery. One process involves flooding
the production reservoir: Water or another fluid introduced at so-called injection wells drives the oil
and gas to the production wells. A second process
is hydraulic fracturing. As illustrated in figure 1, the
technique involves the high-pressure injection of
water so as to create fractures in the production
This article www.physicstoday.org
is copyrighted as indicated in the article. Reuse of AIP content is subject to the terms at: http://scitation.aip.org/termsconditions.
Downloaded
August 2014 Physics Today
35 to IP:
130.156.1.80 On: Sat, 17 Oct 2015 18:45:17
Environmental concerns
Oil and gas production utilizing high-volume fracking
has several associated severe environmental problems.1 Those include the following:
‣ The need for large volumes of water. In some areas,
fracking significantly reduces the water available for
other purposes.
‣ Contaminated water. The water injected during
fracking is subsequently returned to the wellhead
adulterated by additives and natural contamination
such as radiogenic isotopes from the rock. In many
cases, injection wells return that water to a sedimentary layer. Such wastewater disposal creates a number
of environmental concerns, including leakage and induced seismicity.
‣ Leakage of methane gas into the atmosphere. Wells
in North Dakota’s Bakken shale, for example, produce
gas in addition to oil. At present, the site doesn’t have
enough pipeline to use all the gas extracted, so workers burn off significant quantities of it. That practice,
called flaring, is clearly undesirable in terms of air pollution and greenhouse gas production and as a waste
of a natural resource. Oil producers on the North Slope
in Alaska must reinject gas that cannot be used. Ongoing efforts may lead to a federal requirement for reinjection in North Dakota and other localities.
‣ Leakage of methane gas or other fluids into shallow aquifers. Documented leakage into shallow layers,
including aquifers, appears to be associated with the
well casing itself or with the cementing of the well casing to the rock.1 Leaks of fracking fluid from shale into
groundwater are unlikely because the high-volume
fracking injections generally occur at depths of a few
kilometers—well below groundwater aquifers, which
are no deeper than 300 m. However, fracking fluids,
flowback waters, and drilling muds have occasionally
been spilled on the ground.
‣ Triggering of damaging earthquakes. As discussed
in the main text, high-volume fracking generates numerous small earthquakes, and the possibility of a
large earthquake cannot be ruled out. However, the
largest earthquake attributed to high-volume fracking
had a magnitude of 3.6, which is too small to do surface damage. On the other hand, some larger earthquakes, including a magnitude-5.7 quake that struck
Oklahoma in 2011, have been attributed to wastewater injection.16
The documented and potential problems associated with super fracking call for regulation by state
and federal agencies—and some regulations are
already in place. Any regulatory framework, though,
must distinguish between traditional and high-volume
fracking because the environmental problems discussed in this box are not associated with traditional,
low-volume fracking.
reservoir and facilitate migration to the production
well. Traditional low-volume fracking enhances
production from high-permeability reservoirs.
High-volume super fracking is the method of choice
for extracting oil and gas from tight shale reservoirs.
Volume and viscosity
Traditional fracking generally requires 75–1000 m³ of water whose viscosity has been increased by the addition of guar gum or hydroxyethyl cellulose. The objective, as shown in figure 1a, is to create a single large fracture, or perhaps a few of them, through which oil and gas can flow to the production well. A large volume of injected sand or other “proppant” helps keep the fractures open. Energy producers now routinely apply traditional fracking to granular reservoirs, such as sandstones, that have permeabilities of 0.001–0.1 darcy. (The darcy is a measure of fluid flux corrected for the viscosity of the fluid and the pressure gradient driving the flow.) Indeed, analysts Carl Montgomery and Michael Smith estimate that some 80% of the producing wells in the US have been treated with traditional fracking.8
The natural permeability of the rock allows oil and gas to migrate to the single open fracture and subsequently make their way to the production well. However, traditional fracking does not successfully increase oil and gas production from tight shale reservoirs in which few fractures exist or in which the natural fractures have over time been sealed by deposition of silica or carbonates.
In tight shale formations, the granular permeability is between 10⁻⁹ darcy and 10⁻⁷ darcy, a good six orders of magnitude or so lower than usual for sandstone reservoirs. Super fracking, with its large volumes of water and high flow rates, was developed to extract oil and gas from them. Additives, usually polyacrylamides, decrease the viscosity of the water; the treated fluid is generally called slickwater. Typically super fracking uses 100 times as much water as traditional fracking. The objective of high-volume fracking is to create many open fractures relatively close together—so-called distributed damage.9 Those fractures allow oil and gas to migrate out of the rock and to the production well. Many of them are reactivated natural fractures that had been previously sealed.
As illustrated in figure 1b, high-volume fracking involves drilling the production well vertically until it reaches the target stratum, which includes the production reservoir. Then directional drilling extends the well horizontally into that target stratum, typically for a distance of 1–2 km. Plugs, called packers in the industry, block off a section of the well, and explosives perforate the well casing. It is desirable to target reservoirs that are 3–5 km deep to ensure that the overlying material can generate enough pressure to drive out the oil and gas.
The slickwater, injected at high pressure through the blocked-off, perforated well, creates distributed hydrofractures. At the end of the fracking injection, the fluid pressure drops and a fraction of the injected fluid flows back out of the well. Then production begins.
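The parenthetical definition of the darcy above corresponds to Darcy's law, q = (k/μ) dP/dx. A sketch of why the quoted permeability gap matters; the pressure gradient and water viscosity below are assumed illustrative values, not figures from the article:

```python
# Darcy's law: volumetric flux q = (k / mu) * dP/dx.
DARCY_IN_M2 = 9.87e-13  # one darcy expressed in m^2
MU_WATER = 1e-3         # Pa*s, viscosity of water (assumed)
DP_DX = 1e4             # Pa/m, assumed driving pressure gradient

def darcy_flux(permeability_darcy: float) -> float:
    """Volumetric flux (m/s) through rock of the given permeability."""
    k = permeability_darcy * DARCY_IN_M2
    return (k / MU_WATER) * DP_DX

sandstone = darcy_flux(0.01)  # midrange granular reservoir, 0.001-0.1 darcy
shale = darcy_flux(1e-8)      # midrange tight shale, 1e-9 to 1e-7 darcy
print(f"sandstone flux: {sandstone:.1e} m/s; tight shale: {shale:.1e} m/s")
print(f"ratio: ~{sandstone / shale:.0e}")
```

Because flux scales linearly with permeability, the six-orders-of-magnitude permeability gap translates directly into a roughly million-fold difference in flow rate, whatever gradient and viscosity one assumes.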
Small earthquakes
High-volume fracking creates a distribution of microseismic events that documents the complex fracture network generated by the fracking. Nowadays
something like 10% of production wells are accompanied by one or more vertical monitoring wells
that have seismometers distributed along their
lengths. Those seismometers can locate microseismic events in real time, and the data they provide
can help optimize injection rates.
Figure 2 shows a typical example, from the Barnett shale in Texas.10 The first two of four injections
produced relatively narrow clusters of seismicity,
whereas the third and fourth injections produced
much broader clusters that indicate a less localized
fracture network. A possible explanation for the difference focuses on the role of preexisting natural
fractures: The narrow clusters may result from injections into the closely spaced natural fractures,
whereas the broad clusters may reflect an extensive
new fracturing network needed to access natural
fractures.
Typically, the microearthquakes accompanying
super fracking would register in the −3 to −2 range
on the Gutenberg–Richter scale, much too small to
be felt at the surface. But the magnitude distribution
of the microearthquakes satisfies the same scaling
as tectonic earthquakes: The logarithm of the number of earthquakes with magnitude greater than m
varies linearly with m. Thus the possibility of a
larger earthquake cannot be ruled out. However, for
the microseismicity associated with high-volume
fracking, the b value (negative of the slope) is in the
1.5–2.5 range, whereas for tectonic earthquakes it’s
0.8–1.2. Extrapolating the linear relation suggests that the probability of a magnitude-4 earthquake arising from super fracking is something like 10⁻¹⁵ to 10⁻⁹, clearly very small.
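The extrapolation sketched above follows from the Gutenberg–Richter relation, log₁₀ N(>m) = a − b·m. A check of the quoted range, assuming the microseismicity is referenced at magnitude −2, the top of the stated −3 to −2 range (the article does not state the reference magnitude):

```python
# Gutenberg-Richter scaling: log10 N(>m) = a - b*m, so the rate of events
# exceeding m_target, relative to those exceeding m_ref, is 10**(-b*(m_target - m_ref)).
def relative_rate(m_ref: float, m_target: float, b: float) -> float:
    """Relative likelihood of exceeding m_target versus exceeding m_ref."""
    return 10.0 ** (-b * (m_target - m_ref))

# b in the 1.5-2.5 range reported for fracking microseismicity, referenced at m = -2:
for b in (1.5, 2.5):
    print(f"b = {b}: magnitude-4 likelihood ~{relative_rate(-2.0, 4.0, b):.0e}")
```

With b between 1.5 and 2.5, the relative rate spans 10⁻⁹ down to 10⁻¹⁵, bracketing the range quoted in the text; the tectonic b values of 0.8–1.2 would give a far less reassuring number.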
We now turn to some specific examples of oil
and gas extraction from tight black shales. We first
consider the Barnett shale in Texas. The Barnett was
the site of the first high-volume fracking injections
of slickwater, a technique primarily developed by
Mitchell Energy beginning in the late 1980s. We next
consider the Bakken shale on the US–Canada border. Unlike the Barnett, the Bakken shale produces
primarily oil. We then consider the Monterey shale
in California and discuss why high-volume fracking
has not been successfully applied there.
In our view, high-volume fracking is successful only in the absence of significant preexisting fracture permeability. That's because significant fracture permeability would provide pathways along which the injected fluid can flow. The result would be a fluid pressure that is too low to create distributed new fractures. We will return to that idea below, in connection with the Barnett and Monterey shales, but we acknowledge that our conclusion is certainly not universally accepted.
Figure 2. Small earthquakes associated with four high-volume frackings of the Barnett shale in Texas. Each tiny “+” symbol on this microseismicity map shows the epicenter of a microearthquake. Collectively, the symbols reveal the distribution of fractures induced by the injected water. The monitoring well is at the origin of the coordinate system shown. The injection well is off to the right; the thin line shows its horizontal extent. (Adapted from ref. 10.)
Barnett and Bakken
The Barnett shale is a black shale that formed during the Lower Carboniferous period, 323 million–340 million years ago. Figure 3a shows the location of
the shale, which is in the Fort Worth basin of Texas.
The organic carbon concentrations in the productive
Barnett shale range from less than 0.5% by weight
to more than 6%, with an average of 4.5%. Production depths range from about 1.5 km to 2.5 km. The
gas-producing stratum has a maximum thickness of
about 300 m, is relatively flat, and has only slight
tectonic deformations.
Most natural hydraulic fractures in the Barnett
shale have been completely sealed by carbonate
deposition.11 The bonding between the carbonate
and shale is weak, so a high-volume fracking injection can open the sealed fractures with relative
ease.12 We suggest that once opened, the natural fractures prevent subsequent high-volume fracking injections from creating distributed fractures. Instead,
the injected slickwater leaks through the natural
fractures without producing further damage.
Until being overtaken by the Marcellus shale in
the Appalachian basin, the Barnett shale was the
largest producer of tight shale gas in the US. Its annual production of 0.5 Tcf of gas is an appreciable
fraction of the total national annual production of
some 25 Tcf. In 2011 the US Department of Energy
estimated13 the accessible gas reserves in the Barnett
shale to be 43 Tcf.
The Bakken shale is a black shale located in the
Williston (also called the Western Canada) basin;
Physics Today, August 2014, p. 37 (www.physicstoday.org)
Super fracking
Figure 3. Show me the shale. The three maps here give the locations of important fossil-fuel-producing shales in the US. (a) The
Barnett shale is in north central Texas. (b) The Bakken shale, in the Williston basin, encompasses regions of the US and Canada.
(c) The Monterey shale extends along much of California. (Maps courtesy of Janice Fong.)
see figure 3b. It formed during the Late Devonian–
Lower Mississippian period 340 million–385 million
years ago. Unlike the Barnett shale, the Bakken has
yielded large amounts of oil. Most of it comes from
North Dakota, which now produces more oil than
any state but Texas.
The Bakken shale is mostly horizontal and has
little tectonic deformation. It consists of two black
shale layers separated by a layer of dolomite (calcium magnesium carbonate) and is the first formation in which high-volume fracking demonstrated
success at effectively extracting oil from a tight shale.
The relative contributions of the black shale layers
and the dolomite layer to production are not clear.
But it is clear that high-volume fracking is essential
for significant oil production at Bakken. The shale
typically has 5% porosity, but the bulk permeability
is very low, typically 4 × 10−9 darcy. Most natural
fractures are tightly sealed, which allows super
fracking to create distributed fractures through
which oil can migrate to production wells. The producing formation typically is 1.5–2.5 km deep and
as much as 40 m thick.
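To put that bulk permeability in physical terms, the back-of-the-envelope sketch below applies Darcy's law. It is our own illustration: the permeability is the 4 × 10−9 darcy figure above, but the oil viscosity and the driving pressure gradient are assumed values, not numbers from the text.

```python
# Darcy's law for flow through porous rock: flux q = (k / mu) * dP/dx.
DARCY = 9.87e-13          # one darcy, in m^2

k = 4e-9 * DARCY          # bulk permeability quoted for the Bakken, m^2
mu = 1.0e-3               # viscosity of a light crude, Pa*s (assumed)
dp_dx = 1.0e4             # driving pressure gradient, Pa/m (assumed)

q = (k / mu) * dp_dx      # superficial flow velocity, m/s
metres_per_century = q * 3600 * 24 * 365 * 100
print(f"{metres_per_century:.2e} m per century")
```

Even under that fairly aggressive gradient, oil would seep only a fraction of a millimetre per century through the unfractured matrix, which is why distributed fractures, not matrix flow, must carry the oil to the well.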
In July 2013 about 6000 producing wells, primarily horizontal, operated in the Bakken shale.
They contributed to an annual oil production rate of
300 million barrels (Mbbl), or 4.8 × 107 m3. Estimates
from DOE of the oil reserves in the Bakken shale are
3.6 billion barrels (Bbbl),13 half again as much as the
total US production of 2.37 Bbbl for the year 2012.
Monterey
The Monterey shale in California is a diverse area of
organic-rich layers of black shale alternating with
silica-rich beds derived principally from rocks and
the shells of diatoms. The Monterey shale, much
younger than the Barnett and Bakken shales,
formed 8 million–17 million years ago during the
Miocene epoch. Due to its relative youth, it has not
had the time to become a tight black shale with
sealed natural fractures. Open natural fractures are
pervasive in the Monterey shale.
Figure 3c indicates the location of the formation, which straddles the San Andreas Fault. It has
evolved in an active tectonic environment,14 and evidence of its extensive tectonic displacements can be
seen in figure 4. The deposition of the black shale
occurred in several sedimentary basins, including
Los Angeles, Ventura, and Santa Maria.
Those basins have yielded large quantities of
oil for more than 100 years. Per acre, the Los Angeles
basin, which includes the Long Beach, Huntington
Beach, and Wilmington oil fields, has been among
the world’s most productive oil regions. The total oil
extracted from California basins has been some
29 Bbbl. Annual production peaked at 394 Mbbl
in 1984 and has decreased steadily to 196 Mbbl in
2012. The Monterey shale is the source of that oil, although much of the oil has been produced from
younger strata into which the fuel migrated.
According to DOE estimates from 2011, the total
recoverable oil in the 48 contiguous states is 24 Bbbl.13
They attribute 15.4 Bbbl to the Monterey shale and
3.6 Bbbl to the Bakken shale, so the Monterey has
great potential for future petroleum production.
However, attempts to use super fracking to extract
oil there have not been successful. In our view, the
culprit is the extensive fracture permeability in
the Monterey shale that has arisen from both natural fracking and tectonic deformation. The well-developed fracture networks in the shale have allowed some oil to migrate and be recovered, but they
also prevent the buildup of the high fluid pressure
required for super fracking to produce distributed
fracture permeability.
Questions remain
In addition to the environmental issues spelled out
in the box on page 36, several technical concerns will
affect the long-term viability of high-volume fracking. One is the efficiency of extraction: What percent
of the oil and gas in the tight shale reservoir is recovered and at what rate? Traditional oil and gas extraction consists of three stages. The primary stage
extracts the oil and gas that flow to the vertical wells
from the reservoir in which they are trapped. Typically, it manages to recover 20–30% of the oil and
gas in the formation. The secondary stage usually
Figure 4. The highly fractured Monterey shale,
exposed near the Hayward fault in Berkeley,
California. Dark bands are black shale. Tectonic
movements have brought this sedimentary rock
to the surface and rotated the beds from horizontal
to vertical. (Photograph by Eldridge Moores.)
involves flooding the target reservoir with water,
carbon dioxide, or nitrogen. Those fluids, introduced
at injection wells, drive the oil and gas to production
wells. Usually 40–50% of the oil and gas is recovered in stages one and two. The tertiary phase can
involve steam injection to soften viscous oil, acid
leaching to dissolve rock in the formation, or low-volume fracking. All told, the three stages typically
collect 60–65% of the available oil and gas.
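As a quick sanity check, the stage-by-stage figures compound as follows (we take midpoints of the quoted ranges; the incremental breakdown is our own arithmetic, not from the text):

```python
# Midpoints of the recovery ranges quoted for traditional extraction.
after_primary = 0.25       # primary stage recovers 20-30%
after_secondary = 0.45     # stages one and two together recover 40-50%
after_tertiary = 0.625     # all three stages recover 60-65%

# Incremental contribution of each later stage.
secondary_gain = after_secondary - after_primary     # about 20 points
tertiary_gain = after_tertiary - after_secondary     # about 17.5 points
print(f"secondary adds {secondary_gain:.0%}, tertiary adds {tertiary_gain:.1%}")
```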
High-volume fracking of tight shale reestablishes the natural fracture permeability and also
produces new fractures. But the process depends on
the pressure naturally generated by the weight of
the overlying rock to drive the fluid to the production well. In its reliance on natural fractures and
pressure, high-volume fracking is similar to the primary stage of traditional extraction.
Energy consultant George King has estimated
that prior to 2006, less than 10% of trapped gas was
recovered from tight shale formations1 but that subsequent technological advances have increased the
fraction to as much as 45%. Unfortunately, the production rate declines with time. Typically, 65% of
the total production from a super-fracking well is
generated in the first year and 80% in the first two
years.1 That decline in production is considerably
greater than in traditional oil and gas wells and
requires that many high-volume fracking wells be
drilled to maintain production.
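King's cumulative figures translate into year-over-year numbers with a few lines of arithmetic (a sketch of our own, using only the fractions quoted above):

```python
def yearly_fractions(cumulative):
    """Convert cumulative output fractions into per-year fractions.

    cumulative[i] is the fraction of a well's total output produced
    by the end of year i + 1.
    """
    prev = 0.0
    yearly = []
    for c in cumulative:
        yearly.append(c - prev)
        prev = c
    return yearly

# 65% of total output in the first year, 80% within the first two.
years = yearly_fractions([0.65, 0.80])
drop = 1.0 - years[1] / years[0]   # fractional decline, year 1 to year 2
print(f"year 2 produces {years[1]:.0%}; a {drop:.0%} drop from year 1")
```

A well that loses more than three-quarters of its output after its first year must be replaced continually by new drilling to hold production steady.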
Another important question is whether super
fracking can be modified so that it is effective in extracting oil and gas from black shale reservoirs, such
as the Monterey shale, that have open natural fractures. Would it be technically feasible, for example,
to inject cement to seal the natural fractures before
carrying out high-volume fracking? And can it be
done in an environmentally responsible manner?
The use of high-volume fracking to extract large
quantities of fossil fuels is a relatively recent development. As a result, energy producers lack scientific
studies on which to base technological developments and assess environmental implications. The
physical processes associated with high-volume
fluid injection are poorly understood. Among the
issues that will require detailed study are contamination, fluid leakage, and induced seismicity. We,
along with our colleague J. Quinn Norris, are among
the few who have attempted to model super fracking;
our study is based on a type of graph-theory analysis called invasion percolation from a point source.15
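For readers unfamiliar with the technique, the sketch below is a generic invasion-percolation model grown from a point source. It is our own illustrative code, not the model of ref. 15, which treats fluid-pressure-driven bond invasion rather than random-strength sites: each lattice site is assigned a random strength, and every step fractures the weakest site on the perimeter of the invaded cluster.

```python
import heapq
import random

def invasion_percolation(n=41, steps=300, seed=1):
    """Invade an n-by-n lattice from its centre site.

    Every site gets a random strength, standing in for the local
    resistance of the rock to fracturing; each step fractures the
    weakest site on the perimeter of the invaded cluster.
    """
    rng = random.Random(seed)
    strength = [[rng.random() for _ in range(n)] for _ in range(n)]
    start = (n // 2, n // 2)
    invaded = {start}
    frontier = []  # min-heap of (strength, site) over the cluster perimeter

    def push_neighbours(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and (ni, nj) not in invaded:
                heapq.heappush(frontier, (strength[ni][nj], (ni, nj)))

    push_neighbours(*start)
    for _ in range(steps):
        site = None
        while frontier:
            s, candidate = heapq.heappop(frontier)
            if candidate not in invaded:
                site = candidate
                break
        if site is None:
            break  # nothing left to invade
        invaded.add(site)
        push_neighbours(*site)
    return invaded
```

Grown over many steps, the invaded cluster takes on the ragged, branching geometry characteristic of percolation, qualitatively reminiscent of the microseismicity cloud in figure 2.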
High-volume fracking is such a successful tool
for economically extracting oil and gas that its use
will probably continue to expand for a long while.
We emphasize, however, that tight shale oil and gas
are nonrenewable sources of energy. Getting the
most out of the shales that confine fossil fuels will
buy some time for humankind to further develop renewable sources such as wind and solar, but it will
not erase the need to ultimately transition to them.
We thank J. Quinn Norris for his contributions to this article and acknowledge Chris Barton, Scott Hector, and David
Osleger for insightful comments and reviews of an earlier
version.
References
1. G. E. King, “Hydraulic fracturing 101,” paper presented at the Society of Petroleum Engineers Hydraulic Fracturing Technology Conference, 6–8 February 2012, available at http://fracfocus.org/sites/default/files/publications/hydraulic_fracturing_101.pdf.
2. J. B. Curtis, Am. Assoc. Pet. Geol. Bull. 86, 1921 (2002).
3. J. M. Hunt, Petroleum Geochemistry and Geology, 2nd
ed., W. H. Freeman, New York (1996).
4. D. T. Secor, Am. J. Sci. 263, 633 (1965); J. E. Olson, S. E.
Laubach, R. H. Lander, Am. Assoc. Pet. Geol. Bull. 93,
1535 (2009).
5. T. Engelder, A. Lacazette, in Rock Joints, N. Barton,
O. Stephansson, eds., A. A. Balkema, Brookfield, VT
(1990), p. 35.
6. R. J. Behl, Geol. Soc. Am. Spec. Pap. 338, 301 (1999).
7. J. S. Hornafius, D. Quigley, B. P. Luyendyk, J. Geophys.
Res. (Oceans) 104, 20703 (1999).
8. C. T. Montgomery, M. B. Smith, J. Pet. Technol. 62(12),
26 (2010).
9. S. Busetti, K. Mish, Z. Reches, Am. Assoc. Pet. Geol.
Bull. 96, 1687 (2012); S. Busetti et al., Am. Assoc. Pet.
Geol. Bull. 96, 1711 (2012).
10. S. Maxwell, Leading Edge 30, 340 (2011).
11. J. F. W. Gale, R. M. Reed, J. Holder, Am. Assoc. Pet.
Geol. Bull. 91, 603 (2007).
12. K. A. Bowker, Am. Assoc. Pet. Geol. Bull. 91, 523 (2007).
13. US Energy Information Administration, Review of Emerging Resources: U.S. Shale Gas and Shale Oil Plays (July 2011), available at http://www.eia.gov/analysis/studies/usshalegas.
14. T. Finkbeiner, C. A. Barton, M. D. Zoback, Am. Assoc.
Pet. Geol. Bull. 81, 1975 (1997).
15. J. Q. Norris, D. L. Turcotte, J. B. Rundle, Phys. Rev. E
89, 022119 (2014).
16. W. L. Ellsworth, Science 341, 142 (2013).
■
Biodiesel is on a roll,
a subsidy-led roll.
Thanks to tax-breaks for retailers and
quota exemptions for farmers, world
production has soared from about
300 000t in 1995 to nearly 7m t in 2006
- an annualised growth of 33%. Over the
coming decade, output is expected to
climb perhaps another ten-fold, which
surely makes it the world's fastest-growing bulk chemical product.
Ironically, biodiesel's rise comes a
century later than originally planned.
When inventor Rudolf Diesel showed
his eponymous compression engine at
the 1900 World's Fair in Paris, France,
it ran on peanut oil. Although he did
not name it a peanut engine, Mr Diesel
did intend his motors to run on vegetable oils, which are only one process
step removed from biodiesel. Despite
his intentions, petroleum derivatives
ended up cornering the motor fuels
markets, and for one very good reason:
even at today's $60-ish per barrel crude
prices, they cost less.
This inconvenient truth has not
stopped governments around the world
from pushing biofuels; indeed it spurs
them on. The latest move was from the
European Commission, which in
January 2007 proposed to peg biofuels'
2020 share of all transport fuels at 10%.
This comes on top of the 2003 EU
Biofuels Directive that aims for a 5.75%
share in 2010. Either target is well
above current levels of about 2%.
Why such interest in biofuels? For
one, many analysts believe that crude's
price party is over: that cheap oil has
permanently disappeared. This ties to
another biofuels hot button: the desire
for greater energy security. Better to
have the local farmer growing at least
some of your fuel instead of counting
on supplies from notoriously unreliable exporters in, say, Russia, the Middle
East or Venezuela. And then there are
the farmers themselves. What better
way to subsidise this numerically small
yet politically mighty group than to
hand them the keys to the nation's
petrol tank?
* The EU has set a target for biofuels to account for 10% of all fuels by 2020
* Biodiesel made from rape oil creates N2O emissions over its entire lifetime
* Growing rapeseed releases large amounts of the potent greenhouse gas N2O
Powerful argument?
These seem to be solid reasons in themselves, yet the most powerful argument
for boosting biofuels, the EU says, is to
combat global warming. As the 10% by
2020 target was announced, European
Commission president Jose Barroso
crowed: 'We can say to the rest of the
world - Europe is taking the lead. You
should join us in fighting climate
change.'
But do biofuels really reduce global
warming? To answer this crucial question, SRI Consulting (SRIC) recently
sized up biodiesel made from rape oil
- Europe's and the world's predominant feedstock - versus petroleum diesel refined from crude oil - and came to answers that, although politically incorrect, are worth considering.

References
1. Well-to-Wheels Analysis of Future Automotive Fuels and Powertrains in the European Context, Well-to-Tank report, version 2b, May 2006. European Council for Automotive R&D, CONCAWE and European Commission Joint Research Centre. Available at http://ies.jrc.ec.europa.eu/WTW
2. L Brown et al, Atmos. Environ., 2002, 36(6), 917.
We inventoried the lifetime greenhouse-gas emissions of RME (rape methyl ester biodiesel) and petroleum
diesel, as commonly produced in
Europe, from production on through
to combustion in an automobile. For
RME, that lifetime starts with growing
rapeseed on a farm, which is then
crushed to extract oil, which is chemically processed into biodiesel, which is
burnt in an engine. For petroleum diesel, the lifetime begins as crude oil in
a well, which is produced, refined and
then also burnt. We used emissions
data for these steps from a variety of
public sources, including SRIC's own
process models of biodiesel synthesis.
The resulting inventories show that
some two-thirds of RME's greenhouse
gas emissions occur during the farming of rapeseed, where cropland emits
N2O that is 200 to 300 times as potent in its global warming potential as CO2 itself. N2O emissions have been researched heavily in recent years, and
they appear to be a function of four
main factors (see box). Fertiliser production and tilling also generate
significant carbon emissions, while
everything else in the life cycle, including electricity generation, accounts for
only about 15% of the CO2 equivalent
(CO2e) total.
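The weighting step behind such an inventory is simple to sketch. In the fragment below, everything numerical is an assumption made for illustration: the stage masses are invented, not SRIC data, and the N2O factor of 300 is simply the top of the 200-to-300 range quoted above.

```python
# Global-warming potentials relative to CO2. The N2O factor is an
# assumption: the top of the 200-300x range quoted in the text.
GWP = {"CO2": 1.0, "N2O": 300.0}

def co2e(emissions):
    """Collapse a per-gas inventory (kg of each gas) into kg CO2e."""
    return sum(mass * GWP[gas] for gas, mass in emissions.items())

# Hypothetical life-cycle stages for an RME-like fuel, in kg of each
# gas per functional unit. Purely illustrative numbers.
stages = {
    "farming":    {"CO2": 40.0, "N2O": 0.5},
    "processing": {"CO2": 50.0},
    "combustion": {"CO2": 45.0},
}
totals = {name: co2e(inv) for name, inv in stages.items()}
farming_share = totals["farming"] / sum(totals.values())
print(f"farming accounts for {farming_share:.0%} of lifetime CO2e")
```

Even a modest mass of N2O, once weighted by its global warming potential, lets the farming stage dominate the lifetime total.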
Petroleum diesel, by contrast, emits
some 85% of its greenhouse gas emissions in the final use stage, from being
burnt in the engine.
Based on the results of our analysis,
it turns out that RME and petroleum
diesel are almost equal on global warming contribution per unit of energy
delivered. If rapeseed is grown on dedicated farmland, which over time is likely to be the case, then the contest is a draw: RME accounts for nearly the same amount of CO2e per kilometre driven. However, if rapeseed is grown on land that otherwise would be set aside temporarily - land that emits significant quantities of N2O whether fallow or planted - RME wins
- emitting about 25% less CO2e per
kilometre driven.
Useful comparison
An even more useful comparison, however, involves comparing greenhouse
gas emissions normalised by land use,
either to grow rapeseed or trees. If petroleum diesel were substituted for biodiesel, land would be freed up to grow
some other crop, including a forest that
would function as a carbon dioxide sink.
What would this do to the greenhouse
gas balance? To answer that question,
we used figures from the well-known
EcoInvent database for the production
of air-dried, sawn hardwood. Plugging
these data into our inventory model gives
a hands-down win by a factor of almost
2:1 for petroleum diesel.
For minimum greenhouse gas emissions, set-aside arable land should
therefore be used as forest and not for
growing biodiesel. To answer the question posed at the start of this article:
no, the trade-off of substituting biofuels for fossil fuels - at least in the case
of RME versus petroleum diesel - does
not give a payoff with respect to global
warming.
Because emissions of N2O (laughing gas) are a hot topic of research and crucially important to the study's conclusions, we looked very carefully at four sensitivities:

* Quantity of nitrogenous fertiliser used: Our figure, again from the LCA Food Database, squares with those estimated by the United Nations' Intergovernmental Panel on Climate Change (IPCC) for rotated crops.
* N2O emissions: These emissions are critical and also subject to ongoing debate by, among others, the European Council for Automotive R&D, the European Commission1 and the IPCC. We have used a figure recently estimated by a group of UK researchers2 that accounts for soil effects as well as whether fields are cultivated or fallow.
Broader picture
But before policymakers rush to ditch
biofuels altogether, they might want
to consider a rather broader picture.
In addition to our study of global
warming, we also compared the two
fuels in terms of a range of other environmental impacts - from eco and
biological toxicity to ozone layer depletion and acidification. To do so, we
weighted the complete emissions inventories of each system, not just
greenhouse gases, by using a commonly used impact assessment method. The answer is equivocal: petroleum
diesel comes out ahead in five categories; biodiesel comes out ahead in the
other five.
* Yield of rapeseed: We used a figure - sourced from the LCA Food Database, published by Denmark's Ministry of Food, Agriculture and Fisheries - that is slightly below the average for Germany, which has the EU's highest yields. Thanks to improved farming methods, yields have nearly doubled from their levels of the 1970s. At the same time, these improvements require more fertiliser.
* Organic farming: Emissions of N2O are substantially lower from organic farms that use less fertiliser, but so are rapeseed yields.
Eric Johnson, editor of Environmental
Impact Assessment Review, is based in
Zurich, Switzerland
Russell Heinen, vice president of SRI
Consulting and manager of its Process
Economics Program, is based in Houston,
Texas, US
Biodiesel and petroleum diesel were compared in ten different environmental impact categories. The figure shows relative impacts, with one fuel set at 100% and the other at its relative level to that. Biodiesel comes out lower (better) in five categories, petroleum diesel in the other five.

[Figure: paired bars for biodiesel and petroleum diesel in each of the ten impact categories: abiotic depletion, human toxicity, fresh water aquatic ecotoxicity, marine aquatic ecotoxicity, terrestrial ecotoxicity, eutrophication, acidification, photochemical oxidation, ozone layer depletion and global warming (GWP100).]
COPYRIGHT INFORMATION
TITLE: The race is on
SOURCE: Chemistry & Industry, no. 8, 23 April 2007
PAGE(S): 22-3
The magazine publisher is the copyright holder of this article and it
is reproduced with permission. Further reproduction of this article in
violation of the copyright is prohibited. To contact the publisher:
http://sci.mond.org/