Silicon Photonics' Last-Meter Problem
Economics and physics still pose challenges to "fiber to the processor" tech
By Anthony F.J. Levi
If you think we’re on the cusp of a technological revolution
today, imagine what it felt like in the mid-1980s. Silicon chips
used transistors with micrometer-size features. Fiber-optic
systems were zipping trillions of bits per second around
the world.
With the combined might of silicon digital logic, optoelectronics, and optical-fiber communication, anything
seemed possible.
Engineers envisioned all of these advances continuing and converging to the point where photonics would
merge with electronics and eventually replace it. Photonics
would move bits not just across countries but inside data
centers, even inside computers themselves. Fiber optics
would move data from chip to chip, they thought. And even
those chips would be photonic: Many expected that someday blazingly fast logic chips would operate using photons
rather than electrons.
It never got that far, of course. Companies and governments
plowed hundreds of millions of dollars into developing new
photonic components and systems that link together racks
of computer servers inside data centers using optical fibers.
And indeed, today, those photonic devices link racks in many
modern data centers. But that is where the photons stop.
Within a rack, individual server boards are still connected
to each other with inexpensive copper wires and high-speed
electronics. And, of course, on the boards themselves, it’s
metal conductors all the way to the processor.
Attempts to push the technology into the servers themselves,
to directly feed the processors with fiber optics, have foundered
on the rocks of economics. Admittedly, there is an Ethernet optical transceiver market of close to US $4 billion per year that’s
set to grow to nearly $4.5 billion and 50 million components
by 2020, according to market research firm LightCounting. But
photonics has never cracked those last few meters between the
data-center computer rack and the processor chip.
Nevertheless, the stupendous potential of the technology
has kept the dream alive. The technical challenges are still
formidable. But new ideas about how data centers could be
designed have, at last, offered a plausible path to a photonic
revolution that could help tame the tides of big data.
Any time you access the Web, stream television, or do
nearly anything in today’s digital world, you are using data
that has flowed through photonic transceiver modules. The
job of these transceivers is to convert signals back and forth
between electrical and optical. These devices live at each
end of the optical fibers that speed data within the data centers of every major cloud service and social media company.
The devices plug into switchgear at the
top of each server rack, where they convert optical signals to electrical ones
for delivery to the group of servers in
that rack. The transceivers also convert
data from those servers to optical signals for transport to other racks or up
through a network of switches and out
to the Internet.
Each photonics transceiver module
has three main kinds of components:
a transmitter containing one or more
optical modulators, a receiver containing one or more photodiodes, and CMOS
logic chips to encode and decode data.
Because ordinary silicon is actually lousy at emitting light, the photons come from a laser that's separate from the silicon chips (though it can be housed in the same package with them). Rather than switch the laser on and off to represent bits, the laser is kept on, and electronic bits are encoded onto the laser light by an optical modulator.

[Inside a Photonics Module — PLUG AND PLAY: A silicon photonics module converts electronic data to photons and back again. Silicon circuits help optical modulators encode electronic data into pulses of several colors of light. The light travels through an optical fiber to another module, where photodetectors turn the light back into electronic bits. These are processed by the silicon circuits and sent on to the appropriate servers. The laser is not shown.]

This modulator, the heart of the transmitter, can take a few forms. A particularly nice and simple one is called the Mach-Zehnder modulator. Here, a narrow silicon waveguide channels the laser's light. The guide then splits in two, only to rejoin a few millimeters later. Ordinarily, this diverging and converging wouldn't affect the light output, because both branches of the waveguide are the same length. When they join up, the light waves are still in phase with each other. However, voltage applied to one of the branches has the effect of changing that branch's index of refraction, effectively slowing down or speeding up the light's wave. Consequently, when light waves from the two branches meet up again, they destructively interfere with each other and the signal is suppressed. So, if you vary a voltage on that branch, what you're actually doing is using an electrical signal to modulate an optical one.
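In textbook terms (a standard interferometer result, not something spelled out in the article), the modulator's action boils down to a single formula. If the applied voltage induces a phase difference between the two branches, the recombined output power is

```latex
P_{\text{out}} = P_{\text{in}}\cos^{2}\!\left(\frac{\Delta\phi}{2}\right),
\qquad \Delta\phi = \pi\,\frac{V}{V_{\pi}}
```

where V_pi is the voltage that delays one branch by half a wavelength: at V = 0 the branches recombine in phase and the light passes (a 1); at V = V_pi they cancel and the output goes dark (a 0).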
The receiver is much simpler; it's basically a photodiode and some supporting circuitry. After traveling through an optical fiber, light signals reach the receiver's germanium or silicon germanium photodiode, which produces a voltage with each pulse of light.

Both the transmitter and receiver are backed up by circuitry that does amplification, packet processing, error correction, buffering, and other tasks to comply with the Gigabit Ethernet standard for optical fiber. How much of this is on the same chip as the photonics, or even in the same package, varies according to the vendor, but most of the electronic logic is separate from the photonics.
With optical components on silicon integrated circuits becoming increasingly available, you might be tempted to think that the integration of photonics directly into processor chips was inevitable. And indeed, for a time it seemed so. [See "Linking With Light," IEEE Spectrum, October 2001.] You see, what had been entirely underestimated, or even ignored, was the growing mismatch between how quickly the minimum size of features on electronic logic chips was shrinking and how limited photonics was in its ability to keep pace. Transistors today are made up of features only a few nanometers in dimension. In 7-nanometer CMOS technology, more than 100 transistors for general-purpose logic can be packed onto every square micrometer of a chip. And that's to say nothing of the maze of complex copper wiring above the transistors. In addition to the billions of transistors on each chip, there are also a dozen or so levels of metal interconnect needed to wire up all those transistors into the registers, multipliers, arithmetic logic units, and more complicated things that make up processor cores and other crucial circuits.

The trouble is that a typical photonic component, such as a modulator, can't be made much smaller than the wavelength of the light it's going to carry, limiting it to about 1 micrometer wide. There is no Moore's Law that can overcome this. It's not a matter of using more and more advanced lithography. It's simply that electrons—having a wavelength on the order of a few nanometers—are skinny, and photons are fat.
[PHOTONICS FAIL: Photonics will never be a real option to transport data from one part of a silicon chip to another. A single optical switch, a ring oscillator in this case, performs the same function as an individual transistor, but it takes up 10,000 times as much area: 100 transistors fit in 1 square micrometer, while 1 optical modulator fits in 100 square micrometers.]

But, still, couldn't chipmakers just integrate the modulator and accept that the chip will have fewer transistors? After all, a chip can now have billions of them. Nope. The massive amount of system function that each square micrometer of silicon electronic chip area can deliver makes it very expensive to replace even relatively few transistors with lower-functioning components such as photonics.

Here's the math. Say there are on average 100 transistors per square micrometer. Then a photonic modulator that occupies a relatively small area of 10 µm by 10 µm is displacing a circuit comprising 10,000 transistors! And recall that a typical photonic modulator acts as a simple switch, turning light on and off. But each individual transistor can act as a switch, turning current on and off. So, roughly speaking, the opportunity cost for this primitive function is 10,000:1 against the photonic component, because there are at least 10,000 electronic switches available to the system designer for every one photonic modulator. No chipmaker is willing to accept such a high price, even in exchange for the measurable improvements in performance and efficiency you might get by integrating the modulators right onto the processor.
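That arithmetic is simple enough to check in a couple of lines. The sketch below merely restates the article's round numbers; it is not a model of any real chip:

```python
# Opportunity cost of one photonic modulator on a logic die,
# using the article's round numbers.
transistors_per_um2 = 100    # 7-nm CMOS general-purpose logic density
modulator_side_um = 10       # a "relatively small" 10 um x 10 um modulator

displaced = transistors_per_um2 * modulator_side_um ** 2
print(f"{displaced:,} transistors displaced")  # 10,000 -- the 10,000:1 cost
```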
The idea of substituting photonics for electronics on chips encounters other snags, too. For example, there are critical on-chip functions, such as memory, for which photonics has no comparable capability. The upshot is that photons are simply incompatible with basic computer chip functions. And even when they are not, integrating a competing photonic function on the same chip as electronics makes no sense.

That's not to say photonics can't get a lot closer to processors, memory, and other key chips than it does now. Today, the market for optical interconnects in the data center focuses on systems called top-of-rack (TOR) switches, into which the photonic transceiver modules are plugged. Here at the top of 2-meter-tall racks that house server chips, memory, and other resources, fiber optics link the TORs to each other via a separate layer of switches. These switches, in turn, connect to yet another set of switches that form the data center's gateway to the Internet.

The faceplate of a typical TOR, where transceiver modules are plugged in, gives a good idea of just how much data is in motion. Each TOR is connected to one transceiver module, which is in turn connected to two optical fibers (one to transmit and one to receive). Thirty-two modules, each with 40-gigabit-per-second data rates in each direction, can be plugged into a TOR's 45-millimeter-high faceplate, allowing for as many as 2.56 terabits per second to flow between the two racks.
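The faceplate numbers multiply out as follows (a back-of-the-envelope restatement of the figures just given):

```python
# Aggregate traffic through a fully loaded TOR faceplate.
modules = 32
gbps_per_direction = 40
directions = 2               # one fiber to transmit, one to receive

total_tbps = modules * gbps_per_direction * directions / 1000
print(total_tbps, "Tb/s")    # 2.56 Tb/s through a 45-mm-high faceplate
```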
[Data-Center Design — TODAY: Photonics slings data through a multitiered network in the data center. The link to the Internet is at the top (core) level. Switchgear moves data via optical fibers to top-of-rack (TOR) switches, which sit atop each rack of servers. Each server rack holds a TOR switch, servers, memory, and power shelves.]
But the flow of data within the rack and inside the servers
themselves is still done using copper wires. That’s unfortunate, because they are becoming an obstacle to the goal of
building faster, more energy-efficient systems. Photonic solutions for this last meter (or two) of interconnect—either to the
server or even to the processor itself—represent possibly the
best opportunity to develop a truly high-volume optical component market. But before that can happen, there are some
serious challenges to overcome in both price and performance.
So-called fiber-to-the-processor schemes are not new. And
there are many lessons from past attempts about cost, reliability, power efficiency, and bandwidth density. About 15 years
ago, for example, I contributed to the design and construction of an experimental transceiver that showed very high
bandwidth. The demonstration sought to link a parallel fiber-optic ribbon, 12 fibers wide, to a processor. Each fiber carried
digital signals generated separately by four vertical-cavity
surface-emitting lasers (VCSELs)—a type of laser diode that
shines out of the surface of a chip and can be produced in
greater density than so-called edge-emitting lasers. The four
VCSELs directly encoded bits by turning light output on and
off, and they each operated at different wavelengths in the
same fiber, quadrupling that fiber’s capacity using what’s
called coarse wavelength-division multiplexing. So, with
each VCSEL streaming out data at 25 Gb/s, the total bandwidth of the system would be 1.2 Tb/s. The industry standard today for the spacing between neighboring fibers in a
12-fiber-wide array is 0.25 mm, giving a bandwidth density
of about 0.4 Tb/s/mm. In other words, in 100 seconds each millimeter could handle as much data as the U.S. Library of Congress's Web Archive team stores in a month.
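Here is how those figures combine, restated as a few lines of arithmetic:

```python
# Bandwidth density of the 12-fiber VCSEL demonstrator.
fibers = 12
wavelengths_per_fiber = 4    # coarse WDM: four VCSELs per fiber
gbps_per_wavelength = 25
fiber_pitch_mm = 0.25        # industry-standard fiber spacing

total_gbps = fibers * wavelengths_per_fiber * gbps_per_wavelength  # 1200
ribbon_width_mm = fibers * fiber_pitch_mm                          # 3.0
print(total_gbps / 1000, "Tb/s total")              # 1.2 Tb/s
print(total_gbps / ribbon_width_mm, "Gb/s per mm")  # 400, i.e., 0.4 Tb/s/mm
```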
[Data-Center Design — TOMORROW: Photonics could facilitate a change in data-center architecture. Rack-scale architecture would make data centers more flexible by physically separating computers from their memory resources and connecting them through an optical network.]
Data rates even higher than this are needed for fiber-to-the-processor applications today, but it was a good start. So why
wasn’t this technology adopted? Part of the answer is that
this system was neither sufficiently reliable nor practical to
manufacture. At the time, it was very difficult to make the
needed 48 VCSELs for the transmitter and guarantee that there
would be no failures over the transmitter lifetime. In fact, an
important lesson was that one laser using many modulators
can be engineered to be much more reliable than 48 lasers.
But today, VCSEL performance has improved to the extent
that transceivers based on this technology could provide effective short-reach data-center solutions. And those fiber ribbons
can be replaced with multicore fiber, which carries the same
amount of data by channeling it into several cores embedded
within the main fiber. Another recent, positive development
is the availability of more complex digital-transmission standards such as PAM4, which boosts data transmission rates
because it encodes bits on four intensities of light rather
than just two. And research efforts, such as MIT's Shine program, are working toward fiber-to-the-processor demonstration systems with bandwidth density about 17 times what
we achieved 15 years ago.
These are all major improvements, but even taken together
they are not enough to enable photonics to take the next big
leap toward the processor. However, I still think this leap can
occur, because of a drive, just now gathering momentum, to
change data-center system architecture.
Today processors, memory, and storage make up what’s
called a server blade, which is housed in a chassis in a rack
in the data center. But it need not be so. Instead of placing
memory with the server chips, memory could sit separately
in the rack or even in a separate rack. This rack-scale architecture (RSA) is thought to use computing resources more
efficiently, especially for social media companies such as
Facebook where the amount of computing and memory
required for specific applications grows over time. It also
simplifies the task of replacing and managing hardware.
Why would such a configuration help enable greater penetration by photonics? Because exactly that kind of reconfigurability and dynamic allocation of resources could be
made possible by a new generation of efficient, inexpensive,
multi‑terabit-per-second optical switch technology.
The main obstacle to the emergence of this data-center remake is the price of components and the cost of their manufacture. Silicon photonics already has one cost advantage, which is that it can leverage existing chip manufacturing, taking advantage of silicon's huge infrastructure and reliability. Nevertheless, silicon and light are not a perfect fit: Apart from their crippling inefficiency at emitting light, silicon components suffer from large optical losses as well. As measured by light in to light out, a typical silicon photonic transceiver experiences greater than a 10-decibel (90 percent) optical loss. This inefficiency does not matter much for short-reach optical interconnects between TOR switches because, at least for now, the silicon's potential cost advantage outweighs that problem.
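The decibel figure converts to a fraction in the usual way (standard decibel arithmetic, not the author's calculation): a loss of L decibels leaves a fraction 10^(-L/10) of the light, so

```latex
\frac{P_{\text{out}}}{P_{\text{in}}} = 10^{-10/10} = 0.1
```

meaning only 10 percent of the photons survive the trip, which is the 90 percent loss quoted above.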
An important cost in a silicon photonics module is the humble, yet critically important, optical connection. This is both the physical link between the optical fiber and the transmitter or receiver chip as well as the link between fibers. Many hundreds of millions of such fiber-to-fiber connectors must be manufactured each year with extreme precision. To understand just how much precision, note that the diameter of a human hair is typically a little less than the 125-µm diameter of a single-mode silica glass fiber used for optical interconnects. The accuracy with which such single-mode fibers must be aligned in a connector is around 100 nm—about one one-thousandth the diameter of a human hair—or the signal will become too degraded. New and innovative ways to manufacture connectors between fibers and from fiber to transceiver are needed to meet growing customer demand for both precision and low component price. However, very few manufacturing techniques are close to being inexpensive enough.
One way to reduce cost is, of course, to make the chips in the optical module cheaper. Though there are other ways to make these chips, a technique called wafer-scale integration could help. Wafer-scale integration means making photonics on one wafer of silicon, electronics on another, and then attaching the wafers. The paired wafers are then diced up into chips designed to be nearly complete modules. (The laser, which is made from a semiconductor other than silicon, remains separate.) This approach cuts manufacturing costs because it allows for assembly and production in parallel.

Another factor in reducing cost is, of course, volume. Suppose the total optical Gigabit Ethernet market is 50 million transceivers per year and each photonic transceiver chip occupies an area of 25 square millimeters. Assuming a foundry uses 200-mm-diameter wafers to make them and that it achieves a 100 percent yield, then the number of wafers needed is 42,000. That might sound like a lot, but that figure actually represents less than two weeks of production in a typical foundry. In reality, any given transceiver manufacturer might capture 25 percent of the market and still support only a few days of production. There needs to be a path to higher volume if costs are really going to fall. The only way to make that happen is to figure out how to use photonics below the TOR switch, all the way to the processors inside the servers.
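The wafer estimate can be reproduced in a few lines. This sketch uses the article's idealized assumptions; a real foundry would lose some dies at the wafer edge and to yield:

```python
import math

# How many 200-mm wafers does a 50-million-unit transceiver market consume?
wafer_diameter_mm = 200
chip_area_mm2 = 25
annual_units = 50_000_000

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2  # ~31,416 mm^2
chips_per_wafer = wafer_area_mm2 // chip_area_mm2        # ~1,256 at 100% yield
wafers_needed = math.ceil(annual_units / chips_per_wafer)

print(wafers_needed)  # ~40,000 -- the same order as the article's 42,000
```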
[PAST PERFECT: We've had the technology to bring optical fiber directly to the processor for more than a decade. The author helped conceive this 0.4-terabit-per-second-per-millimeter demonstrator more than 15 years ago.]

If silicon photonics is ever going to make it big in what are otherwise all-electronic systems, there will have to be compelling technical and business reasons for it. The components must solve an important problem and greatly improve the overall system. They must be small, energy efficient, and super-reliable, and they must move data extraordinarily fast. Today, there is no solution that meets all these requirements, and so electronics will continue to evolve without becoming intimately integrated with photonics. Without significant breakthroughs, fat photons will continue to be excluded from places where skinny electrons dominate system function. However, if photonic components could be reliably manufactured in very high volume and at very low cost, the decades-old vision of fiber-to-the-processor and related architectures could finally become a reality.

We've made a lot of progress in the past 15 years. We have a better understanding of photonic technology and where it can and can't work in the data center. A sustainable multibillion-dollar-per-year commercial market for photonic components has developed. Photonic interconnects have become a critical part of global information infrastructure. However, the insertion of very large numbers of photonic components into the heart of otherwise electronic systems remains just beyond the edge of practicality. Must it always be so? I think not.
↗ POST YOUR COMMENTS at https://spectrum.ieee.org/siliconphotonics0918
Less Fire, More Power
Without the needlelike growths that can short out cells, lithium-ion batteries will be safer
By Weiyang Li & Yi Cui
Lithium-ion batteries have made headlines
for the wrong reason: as a fire hazard. Just this past May, three
apparent battery fires in Tesla cars were reported in the United
States and Switzerland. In the United States alone, a fire in a
lithium-ion battery grounds a flight every 10 days on average,
according to the Federal Aviation Administration. And the same
problem afflicts electronic cigarettes, which have been blowing
up in people’s faces sporadically.
No other drawback has so hobbled the advance of what is by
far the most promising battery technology to emerge in our lifetimes. Lithium-ion batteries store much more energy than previous chemistries could manage, making them crucial to the future
success of phones, drones, cars, even airplanes.
Solving this problem would not only protect lives and property,
it would also make it possible to use larger battery packs with more
closely packed cells. We’d finally be able to fully exploit the great
[ROOT AND BRANCH: Crystalline lithium-metal structures grow out of the anode of a lithium-ion cell in a branching pattern, thus their name, dendrites (from the Greek dendron, meaning "tree"). If they grow too long, they can short out the cell.]
energy-to-weight ratio, or specific energy, that this
technology allows. What’s more, we’d be able to
make progress with the next generation of batteries, the ones that use lithium metal.
The problems of today’s lithium-ion batteries can
be traced largely to dendrites, tiny threadlike structures that form on the surface of an electrode over
repeated cycles of charging and discharging. But
through our work at Dartmouth and Stanford, the
two of us have found that a little chemical tweaking
of the electrolyte can head off the pesky growths.
A lithium-ion battery pack is invariably composed of one or more compartments, or
cells, each of which has two electrodes covered by
an extremely thin polymer film, called a separator,
which prevents their coming into direct contact.
Permeating the porous separator is the electrolyte,
a material—today generally a liquid—that allows lithium ions to move back and forth during charging
and discharging.
The slightest damage to the ultrathin separator
can put the electrodes into direct contact and create an internal short circuit, which can generate
enough heat to make the cell catch fire. The heat of
the fire may then overheat adjacent cells, resulting
in a chain reaction that can easily cause the whole
battery pack to explode.
So it’s the integrity of the cell’s separator that matters most. Of course, every effort must be made during the manufacturing process to prevent damage to
the separator, but even a perfectly fabricated separator can fail in operation if dendrites later damage it.
Dendrites are sharp bits of lithium metal that grow
from the anode. These fibers can spread like kudzu
vines into the electrolyte, pierce the separator and
make their way to the cathode. It’s amazing how
[SHARP DENDRITES FROM TINY IONS GROW — CHARGING: Lithium ions move from the cathode through the separator and on to the anode.]
such tiny little things can cause so much destruction: They
were responsible, for example, for the fires that grounded the
worldwide fleet of Boeing 787s in 2013.
Dendrites tend to grow when the battery is overcharged,
because that’s when the lithium ions migrating into the anode
can no longer find a berth. Normally, the ions slip between the
atomic layers of the anode, a process called intercalation, but
when the space between the layers is all filled up—as can happen during overcharging—there’s nowhere else for the lithium
to go but onto the surface. There they form the seeds of a metallic crystal, which grows with each new charge-discharge cycle.
Solving this problem of dendrite growth matters not just
for today’s generation of lithium-ion batteries but also for
future batteries that will need lithium-metal anodes. That’s
because lithium metal has a high theoretical specific energy
capacity—3,860 milliampere-hours per gram—and a negative
electrochemical potential no other anode material can match.
A higher potential allows for a higher battery voltage, which
is just what’s needed in electric cars and in mobile devices.
Both these qualities make lithium anodes critical to battery
technologies that are still in the lab, like the highly promising lithium-sulfur and lithium-air batteries, which can store
5 to 10 times as much energy by weight as today’s lithium-ion
designs. Those future batteries may not be able to incorporate—
as lithium-ion batteries do—anodes made of graphite, which
has a theoretical capacity of only 372 mAh/g.
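Putting the two capacity figures side by side (simple division, my framing rather than the authors'):

```python
# Theoretical specific capacities, in milliampere-hours per gram.
lithium_metal_mah_g = 3860
graphite_mah_g = 372

print(round(lithium_metal_mah_g / graphite_mah_g, 1))  # ~10.4x advantage
```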
[OVERCHARGING: When no more room remains for ions, any excess ions will begin to accumulate on the surface.]

The formation of lithium dendrites takes place at the meeting point between the anode and the electrolyte, in a layer called the solid electrolyte interphase (SEI). After enough lithium ions move into the anode and accept electrons there, the anode finally expands enough to break the SEI layer. From that point onward, lithium begins to form deposits at the broken part of the SEI. And these deposits seed dendrites.
Later, during discharging, lithium ions are pulled out of
the anode, shrinking it again. The SEI layer collapses, generating more cracks and pinholes from which still more dendrites can begin to shoot out the next time the cell charges.
Also, by exposing so much metallic lithium to the electrolyte,
these cracks enable the two components to react chemically.
As lithium disappears into the resulting chemical product,
the lithium that remains for use in the cell diminishes. That
decline lowers what's called the coulombic efficiency: the ratio of the charge recovered during discharge to the charge delivered during charging in each cycle.
Also, because they are very fragile, dendrites often break
off from the anode, generating “dead lithium” that cannot
be reused, which further lowers the coulombic efficiency
of the cell. To compensate for such losses, today’s batteries must include excess lithium, which adds substantially to
their weight and cost.
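Coulombic efficiency compounds: whatever fraction of the lithium is lost in one cycle is gone for all the cycles that follow. A toy calculation (my illustration, using efficiency figures like those reported later in this article) shows why even single percentage points matter:

```python
# Fraction of usable lithium left after N cycles at a given coulombic
# efficiency (CE), assuming the losses compound multiplicatively.
def retention(ce: float, cycles: int) -> float:
    return ce ** cycles

for ce in (0.80, 0.92, 0.99):
    print(f"CE {ce:.0%}: {retention(ce, 300):.1%} left after 300 cycles")
# CE 80%: ~0.0%; CE 92%: ~0.0%; CE 99%: ~4.9% -- hence the excess lithium
```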
To head off dendrites, we need to shore up the SEI by forming a “super” SEI layer that’s uniform and stable. One way to
achieve this is to modify the anode’s surface by laying down
an artificial SEI layer, as it were. We’ve tried it, and it works.
Unfortunately, this approach greatly complicates the fabrication of lithium-ion cells.
[CRYSTALLIZATION: Accumulating ions form metallic crystals, which damage the solid electrolyte interphase (SEI), where the anode meets the electrolyte.]

[BREAKING AND ENTERING: Dendrites branching out from the crystals pierce first the SEI and then the separator, forming a bridge to the cathode—and thus a short circuit.]

[HEADING OFF DENDRITES: You can avoid dendrites by shoring up the solid electrolyte interphase layer with a "super" layer. A simpler way, now under development, is to add chemicals to the electrolyte.]

Another tactic is to reformulate the electrolyte by including substances that reinforce the SEI layer. The challenge
is that such additives must easily dissolve, and most candidate materials don’t—a long-standing problem for battery
researchers. The additives that do dissolve quickly are consumed during cycling, and as a result the SEI layers fall apart
over the long term.
How, then, do you find the right additive? We got our key
idea following a roundabout path.
We'd been considering the problem of the
lithium-sulfur battery, the futuristic technology we mentioned
earlier. What makes this combination so attractive, in theory, is
its ability to store—in the same amount of mass—more than five
times the energy of today’s lithium-ion batteries. Such
a battery uses metallic lithium for the anode and sulfur for the cathode, and during the reactions that take
place while charging or discharging there are a number
of steps that create intermediate products at the sulfur
cathode. These products, known as polysulfide ions,
are highly soluble in the electrolyte, and that means
that when the battery is in operation they can travel
from the cathode, pass through the separator, and then
arrive at the anode. That’s not good: Only the lithium
ions ought to get that far. When these polysulfide ions
hit the anode, they react vigorously with the lithium,
accept electrons, and are reduced to a solid. Not only
does this process slowly deplete sulfur from the system, it also gradually forms a coating that can wreck
the lithium anode. This has been the main difficulty
dogging the development of lithium-sulfur batteries.
To avoid this parasitic reaction, researchers have
mainly tried to restrict the polysulfide from leaching
out of the sulfur cathode in the first place. In one of our brainstorming sessions, we began to think different, as Steve Jobs
might say: What if we could actually take advantage of this
reaction? By controlling how the polysulfide ions react with
lithium, perhaps we could not just form a strong and stable
SEI layer but actually nip dendrites in the bud!
Meanwhile, we discovered that lithium nitrate—a very
commonly used lithium salt—had long been considered as a
potential electrolyte additive because of its ability to restrict—
or passivate—the reactivity of the lithium metal. Perhaps by
adding both polysulfide and lithium nitrate to the electrolyte
we could create complementary actions: Polysulfide reacts
with lithium metal, while lithium nitrate can help to prevent
the lithium from reacting with polysulfide. By manipulating
these two competing reactions, we should be able to turn the
sulfur-lithium reaction from a flaw to a feature.
We added lithium polysulfide and lithium nitrate to the electrolyte in various concentrations. We studied the effect on the
process of lithium plating and stripping in a two-electrode test
cell that used lithium metal as one electrode and a stainless-steel
foil as the other.
We assembled coin cells, also called button cells, similar
to the ones that power small electronic devices, such as
watches, calculators, and hearing aids, and we applied a
constant current during charging, allowing the same current to flow during discharge. We deposited a bit of lithium
onto the stainless steel by charging the cell; then we stripped
it off in discharge, repeating the cycle many times. Finally,
we took the cell apart and examined the lithium deposit
under a scanning electron microscope.
What we saw was intriguing. Without electrolyte additives,
the plated lithium formed structures that were thin, sharp,
and fiberlike—dendritic, in a word. But when we added lithium
nitrate to the electrolyte, the deposited lithium was thicker,
less sharp, and shaped more or less like a noodle. The lithium
nitrate had moderated dendrite growth but not prevented it.
Next, we added both lithium polysulfide and lithium nitrate
to the electrolyte in various quantities. At just the right balance of additives, the synergistic effect we’d sought came
through: No harmful dendritic structures grew. Instead we got
flat, pancake-shaped lithium deposits. Even after hundreds
of charge-discharge cycles, the surface of the plated lithium
was still flat, without any dendritic structures.
Besides heading off dendrites, our two additives together
greatly enhanced the coulombic efficiency and the cycling
stability. The coulombic efficiency was better than 99 percent over more than 300
charge-discharge cycles. Charging caused
plating on only a tiny bit more lithium than
was stripped off during discharge. In contrast, with lithium nitrate alone, coulombic efficiency drops to less than 92 percent
after just 180 cycles, and with polysulfide
alone it’s only about 80 percent.
These two additives, working together, bring a huge improvement because of their effect on the SEI layer. To figure out the
exact mechanism of that effect, we used a technique called
X-ray photoelectron spectroscopy and also conventional
electron microscopy to deduce the structure and chemical
composition of the SEI layer. In cells using one or the other
additive, we found that the SEI layers were marred by lots
of cracks and pinholes. When both additives were present,
though, we got a flat, uniform SEI. And the chemical breakdown of the SEI layers confirmed that the two additives indeed
had competing effects.
When we added both lithium nitrate and polysulfide, the
lithium nitrate was the first to react with the lithium metal,
and it did passivate the metal’s surface, as expected, drastically
reducing the metal’s reaction with the polysulfide. The product
from the first reaction formed mainly in the upper layer of the
SEI, and it effectively suppressed the formation of dendrites.
This technique for preventing the growth of dendrites is
still in its early days. We have problems to solve before we
can think about commercialization. A particular difficulty is
finding the precise formulation of the electrolyte additives
for each of the several different kinds of lithium batteries.
But this new strategy holds out the promise not only of
creating a safer, higher-energy lithium-ion battery but also
of paving the way for next-generation battery chemistries.
With dendrites defeated, a lithium-metal design could store
far more energy than today’s batteries while lasting through
the many charging cycles that consumer products require. We
predict that in another 5 to 10 years, our technology will allow
the commercialization of safe, superhigh-capacity batteries for
phones, laptops, cars, and airplanes. That would make headlines for rechargeable batteries of a much more positive sort.
↗ POST YOUR COMMENTS online at https://spectrum.ieee.org/dendrites0918
[HIGH THREAD COUNT: Dendrites take on a threadlike form as they grow on the surface of the electrode.]
Red Light, Green Light—No Light
Tomorrow's communicative cars could take turns at intersections
By Ozan K. Tonguz
Life is short, and it seems shorter still when you're in a traffic jam. Or sitting at a red light when there's no cross traffic at all.

In Mexico City, São Paulo, Rome, Moscow, Beijing, Cairo, and Nairobi, the morning commute can, for many exurbanites, exceed 2 hours. Include the evening commute and it is not unusual to spend 3 or 4 hours on the road every day.

Now suppose we could develop a system that would reduce a two-way daily commute time by a third, say, from 3 to 2 hours a day. That's enough to save 22 hours a month, which over a 35-year career comes to more than 3 years.
Take heart, beleaguered commuters, because such a system
has already been designed, based on several emerging technologies. One of them is the wireless linking of vehicles. It’s
often called vehicle-to-vehicle (V2V) technology, although this
linking can also include road signals and other infrastructure.
Another emerging technology is that of the autonomous vehicle, which by its nature should minimize commuting time
(while making that time more productive into the bargain).
Then there’s the Internet of Things, which promises to connect not merely the world’s 7 billion people but also another
30 billion sensors and gadgets.
All of these technologies can be made to work together with
an algorithm my colleagues and I have developed at Carnegie
Mellon University, in Pittsburgh. The algorithm allows cars to
collaborate, using their onboard communications capabilities,
to keep traffic flowing smoothly and safely without the use of
any traffic lights whatsoever. We’ve spun the project out as a
company, called Virtual Traffic Lights (VTL), and we’ve tested
it extensively in simulations and, since May 2017, in a private
project on roads near the Carnegie Mellon campus. In July,
we demonstrated VTL technology in public for the first time,
in Saudi Arabia, before an audience of about 100 scientists,
government officials, and representatives of private companies.
The results of that trial confirmed what we had already
strongly suspected: It is time to ditch the traffic light. We have
nothing to lose except countless hours sitting in our cars while
going nowhere.
•••
The principle behind the traffic light has hardly
changed since the device was invented in 1912 and deployed
in Salt Lake City, and two years later, in Cleveland. It works
on a timer-based approach, which is why you sometimes find
VTL Algorithm: Letting Cars Control Their Own Traffic

Transceivers (using IEEE Standard 802.11p) send out a basic safety message every tenth of a second. The message tells recipients where the transmitting vehicle is by latitude, longitude, and heading. The Virtual Traffic Lights (VTL) algorithm takes that vehicle's data, adds it to data from nearby vehicles, and compares it with readouts from digital mapping services. That's all the algorithm needs to decide which vehicle gets to go through the intersection (green light) and which has to stop (red light).

Cars "elect" a leader, then follow its orders:

1. Each vehicle computes its own distance to the intersection, the distance of the vehicles approaching the intersection from other directions, and each vehicle's speed, acceleration, and trajectory. Together they elect one vehicle to serve as the leader for a certain amount of time.

2. The leader vehicle decides which direction has the right-of-way (the equivalent of a green light) and which direction has the red light.

3. The leader assigns the status of a red light to its own direction of movement, while giving the green light to all the cars in the perpendicular flow.

4. After the leader's time is up, a car in the perpendicular flow becomes the leader and does the same thing. In this fashion, leadership is handed over repeatedly.
yourself sitting behind a red light at an intersection when there
are no other cars in sight. The timing can be adjusted to match
traffic patterns at different points in the commuting cycle, but
that is about all the fine-tuning you can do, and it’s not much.
As a result, a lot of people waste a lot of time. Every day.
Instead, imagine a number of cars approaching an intersection
and communicating among themselves with V2V technology.
Together they vote, as it were, and then elect one vehicle to
serve as the leader for a certain period, during which it decides
which direction is to be yielded the right-of-way—the equivalent
of a green light—and which direction has the red light.
So who has the right-of-way? It’s very simple, and deferential.
The leader assigns the status of a red light to its own direction
of movement while giving the green light to all the cars in the
perpendicular flow. After, say, 30 seconds, another car—in the
perpendicular flow—becomes the leader and does the same
thing. Thus, leadership is handed over repeatedly, in a round-robin fashion, to fairly share the responsibility and burden—
because being the leader does involve sacrificing immediate
self-interest for the common good.
With this approach, there is no need at all for traffic lights.
The work of regulating traffic melts invisibly into the wireless
infrastructure. You would never find yourself sitting at a red
light when there was no cross traffic to contend with.
Our company’s VTL algorithm elects leaders by consulting
such parameters as the distance of the front vehicle in each
approach from the center of the intersection, the vehicles’ speed,
the number of vehicles in each approach, and so on. When all
else is equal, the algorithm elects the vehicle that’s farthest from
the intersection, so it will have ample time to decelerate. This
policy ensures that the vehicle that’s closest to the intersection
gets the right-of-way—that is, the virtual green light.
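The election rule just described can be written down compactly. The sketch below is my illustration of that behavior, not VTL's production code; the message fields, the flat-earth distance approximation, and the heading-based phase assignment are all assumptions based on the description above:

```python
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    lat: float           # reported in the 802.11p basic safety message
    lon: float
    heading_deg: float   # 0 = north, 90 = east

def distance_m(v: Vehicle, ix_lat: float, ix_lon: float) -> float:
    """Rough equirectangular distance from a vehicle to the intersection."""
    dlat = math.radians(ix_lat - v.lat)
    dlon = math.radians(ix_lon - v.lon) * math.cos(math.radians(v.lat))
    return 6_371_000 * math.hypot(dlat, dlon)

def elect_leader(vehicles, ix_lat, ix_lon):
    """All else equal, pick the vehicle farthest from the intersection,
    so the leader has ample room to decelerate."""
    return max(vehicles, key=lambda v: distance_m(v, ix_lat, ix_lon))

def assign_phases(leader: Vehicle):
    """The leader's own flow gets the virtual red light;
    the perpendicular flow gets the green."""
    ns = leader.heading_deg % 180 < 45 or leader.heading_deg % 180 >= 135
    red, green = ("north-south", "east-west") if ns else ("east-west", "north-south")
    return {"red": red, "green": green}

cars = [Vehicle("a", 40.719969, -73.844283, 0.0),
        Vehicle("b", 40.720282, -73.844712, 90.0)]
leader = elect_leader(cars, 40.72010, -73.84450)
print(leader.vid, assign_phases(leader))
```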
It’s important to note that VTL technology needs no camera,
radar, or lidar. It gets all the orientation it needs from a wireless
system called dedicated short-range communications. DSRC
refers to radio schemes, including dedicated bandwidth, that
were developed in the United States, Europe, and Japan between
1999 and 2008 to let nearby cars communicate wirelessly. DSRC
developers envisioned various uses, including electronic toll collection and cooperative adaptive cruise control—and also precisely
the function we are using it for, intersection collision avoidance.
Very few production cars are now equipped with DSRC transceivers (and it’s possible that emerging 5G wireless technology
will supersede DSRC). But such transceivers are readily available, and they provide all the functionality we need. These
transceivers, designed to make use of IEEE Standard 802.11p,
must each send out a basic safety message every tenth of a
second. The message tells recipients where the transmitting
vehicle is by latitude, longitude, and heading. Running on a
processor in a vehicle, our VTL algorithm takes the data from
that vehicle, throws in whatever it is receiving from neighboring vehicles, and overlays the result onto readouts from
such digital mapping services as Google Maps, Apple Maps,
or OpenStreetMap. In this way, each vehicle can compute
its own distance to the intersection as well as the distance
of the vehicles approaching the intersection from the other
directions. It can also compute each vehicle’s speed, acceleration, and trajectory. That’s all the algorithm needs to decide
who gets to go through the intersection (green light) and who
has to stop (red light). And once the decision has been made,
a head-up display in each car displays the light to the driver
from a normal viewing position. Of course,
the VTL algorithm solves only the problem
of managing traffic at intersections, stop
signs, and yield signs. It doesn’t drive the
car. But when functioning within its proper
domain, VTL can do everything at a much
lower cost than autonomous vehicle technology can. Self-driving cars require far more
computing capability just to make sense of
the individual data feeds coming from their
lidar, radar, cameras, and other sensors, and
more still to fuse those feeds into a single
view of the surroundings.
Think of our method as
the substitution of a rule
of thumb for true intelligence. The VTL algorithm
lets the cars control their
own traffic much as colonies of insects and schools
of fish do. A school of fish
shifts direction all at once,
without any master conductor directing the
members of the school; instead, each fish
takes its cue from the movement of its immediate neighbors.
This is an example of a completely distributed system behavior as opposed to a centralized network behavior. With it, fleets of
vehicles in a city can manage traffic flow by
themselves without a centralized control
mechanism and without any human intervention—no police,
no traffic lights, no stop signs, and no yield signs.
•••

We didn't invent the concept of
intelligent intersections, which dates back
decades. One early idea was to place a
magnetic coil under the asphalt surface
of a road to detect the approach of vehicles along a single route to an intersection and then adjust
the duration of the green and red phases accordingly. Similarly, cameras placed at intersections can be used to count the
vehicles in each approach and compute how best to time the
lights at an intersection. But both technologies are expensive
to install and maintain and therefore only a few intersections
have been fitted out with them.
We started by running our VTL algorithm on a virtual model
for two cities: Pittsburgh and Porto, Portugal. We took traffic data
from the U.S. Census Bureau and the corresponding Portuguese
agency, added map data from Google Maps, and fed it all into
SUMO, the Simulation of Urban Mobility, an open-source software package developed by the German Aerospace Center.
SUMO simulated the rush-hour commuting time under
two scenarios, one using the existing traffic lights, the other
using our VTL algorithm. It found that VTL
reduced the average commute to 21.3 minutes
from 35 minutes in Porto and to 18.3 minutes
from 30.7 minutes in Pittsburgh. Commute times for people traveling into the city from the suburbs and beyond were cut by a minimum of 30 percent and a maximum of 60 percent.
Importantly, the variance of the commute time—
a statistical measure of how much a quantity
diverges from the mean value—was also reduced.
Those time savings came primarily for two
reasons. First, VTL eliminated the time spent
waiting at a red light when
there were no cars crossing
at right angles. Second, VTL
introduced traffic control
to every intersection, not
just those that have active
signals. So it was not necessary for cars to stop at a
stop sign, for example, when no other
cars were around.
Our simulations showed other benefits—ones that are arguably more
important than saving time. The
number of accidents was reduced by
70 percent, and—no surprise—most of
the reduction was centered at the
intersections, stop signs, and other
interchanges. Also, by minimizing the
time spent dawdling at intersections
and accelerating and decelerating, VTL measurably reduces
the average car’s carbon footprint.
So, what would it take to get VTL technology out of the lab
and into the world? To begin with, we’d have to get DSRC into
production cars. In 2014, the U.S. National Highway Traffic
Safety Administration proposed the adoption of the technology, but the Trump administration hasn’t yet implemented
the regulation, and it’s not clear what the final decision will
be. So U.S. manufacturers may now be reluctant to install
DSRC transceivers, given that they’d add cost to a car and
they’d be useful only if other cars carry them, too—the familiar chicken-and-egg problem. And until enough cars begin
to carry the devices, the scale of manufacturing will remain
low and the unit cost high. In the United States, only General
Motors has begun to put DSRC radios into cars, all of them
high-end Cadillacs. However, in Europe and Japan the outlook is a lot more favorable. A number of European automakers have committed to putting the transceivers in their
cars, and earlier this year in Japan, where the government
strongly supports the technology, auto giant Toyota reiterated its commitment.
And even if DSRC fails entirely, our VTL algorithm can be implemented with other wireless technologies, such as 5G or Wi-Fi.
The concept of incomplete penetration of DSRC transceivers
brings up one of the biggest potential obstacles to adoption of
our VTL technology. Could it still work even if only a certain
percentage of vehicles is equipped with DSRC? The answer is
yes, provided that governments equip existing traffic signals
with DSRC technology.
Governments may well be willing to do that, if only because
they would rather not do away with hundreds of billions of
dollars’ worth of existing signal infrastructure. To address this
problem, we’ve fitted out our Virtual Traffic Lights technology
with a short-term solution: We can upgrade existing traffic lights
so that they can detect the presence of DSRC-equipped vehicles in each approach and decide the green-red phases accordingly. The beauty of this scheme is
that all vehicles could make use of the same roads and intersections, whether or not they are equipped with DSRC. This approach may not reduce commute time as much as the ideal VTL solution, but even so it is at least 23 percent better than the current traffic control systems, according to both our simulations and to field trials in Pittsburgh.

Yet another challenge is how to handle pedestrians and bicyclists. Even in a regime mandating DSRC transceivers for all cars and trucks, we couldn't reasonably expect cyclists to install the devices or pedestrians to carry
them. That might make it hard for those people to cross busy
intersections safely.
Our solution for the short term, while physical traffic signals
still coexist with the VTL system, is to provide pedestrians a
way to give themselves the right-of-way. Ever since January of
this year, our pilot program in Pittsburgh has provided a button to push that actuates a red light—real for the pedestrians,
and virtual for the cars—at all four approaches to the intersection. It has worked every time.
In the longer term, the bicyclist and pedestrian challenge
might be solved with Internet of Things technology. As the IoT
expands, the day will finally come when everyone carries a
DSRC-capable device at all times.
Meanwhile, under ideal conditions, with no physical signals at all, we have demonstrated that the vehicles voting
on how to assign right-of-way can allot a portion of the signaling cycle to pedestrians. During these interludes, a virtual red light shines in all the vehicles at all four approaches,
lasting long enough for any pedestrians there to cross safely.
This preliminary solution wouldn’t be optimal for traffic
flow, and so we are also working on a method using cheap
dashboard-mounted cameras to spot pedestrians and give
them the right-of-way.
•••
Ultimately, what makes virtual traffic signals
so promising is the advent of self-driving vehicles. As envisioned today, such vehicles would do everything human
drivers now do—stopping at traffic lights, yielding at yield
signs, and so forth. But why automate transportation halfway? It would be far better to make such vehicles fully autonomous, managing traffic without any conventional signs or
signals. The key in achieving this goal is V2V and vehicle-to-infrastructure communications.
This matters because today’s self-driving cars are often unable
to negotiate their way into and out of busy intersections. This
is one of the hardest technical problems, and it continues to
challenge even industry leader Waymo (a subsidiary of Google’s
parent company, Alphabet).
In our simulations and field trials, we have found that autonomous vehicles equipped with VTL can manage intersections
without traffic lights or signs. Not needing to identify such
objects greatly simplifies the computer-vision algorithms that
today’s experimental self-driving cars rely on as well as the
computational hardware that runs those algorithms. These
elements, together with the sensors (especially lidar), constitute the single costliest part of the package.
Because VTL has a largely modular software architecture,
it would be easy to integrate it into the rest of an autonomous
car’s software. Furthermore, VTL can solve most, if not all, of
the hard problems related to computer vision—say, when the
sun shines straight into a camera, or when rain, snow, sandstorms, or a curving road obscure the view. To be clear, VTL
is not really competing with the technology of self-driving
cars; it is complementing it. And that alone would help to
speed up the robocar rollout.
Well before then, we hope to have our system up and running
for human-driven cars. Just this past July we staged our first public demonstration, in Riyadh, Saudi Arabia, in heat topping 43 °C
(109 °F), with devices installed in the test vehicles. Representatives from government, academia, and corporations—including
Uber—boarded a Mercedes-Benz bus and drove through the
campus of the King Abdulaziz City for Science and Technology, crossing three intersections, two of which had no traffic lights. The bus, together with a GMC truck, Hyundai SUV,
and a Citroën car, engaged the intersections in every possible
way, and the VTL system worked every time. When one driver
deliberately disobeyed the virtual red light and attempted to
cross, our safety feature kicked in right away, setting off a flashing red light for all four approaches, heading off an accident.
I hope and believe that this was a turning point in transportation. Traffic lights have had their day. Indeed, they lasted
over a century. Now it's time to move on.
↗ POST YOUR COMMENTS at https://spectrum.ieee.org/v2v1018
AI in the ICU
In the intensive care unit, artificial intelligence can keep watch
By Behnood Gholami, Wassim M. Haddad & James M. Bailey

In a hospital's intensive care unit (ICU),
the sickest patients receive round-the-clock care as they lie in beds with their
bodies connected to a bevy of surrounding
machines. This advanced medical
equipment is designed to keep an ailing
person alive. Intravenous fluids drip into the
bloodstream, while mechanical ventilators
push air into the lungs. Sensors attached to
the body track heart rate, blood pressure,
and other vital signs, while bedside monitors
graph the data in undulating lines. When
the machines record measurements that are
outside of normal parameters, beeps and
alarms ring out to alert the medical staff
to potential problems.

While this scene
is laden with high tech, the technology
isn’t being used to best advantage. Each
machine is monitoring a discrete part of
the body, but the machines aren’t working
in concert. The rich streams of data aren’t
being captured or analyzed. And it’s
impossible for the ICU team—critical-care
physicians, nurses, respiratory therapists,
pharmacists, and other specialists—to keep
watch at every patient's bedside.

The ICU
of the future will make far better use of its
machines and the continuous streams of
data they generate. Monitors won’t work in isolation, but
instead will pool their information to present a comprehensive picture of the patient’s health to doctors. And that information will also flow to artificial intelligence (AI) systems,
which will autonomously adjust equipment settings to keep
the patient in optimal condition.
At our company, Autonomous Healthcare, based in Hoboken,
N.J., we’re designing and building some of the first AI systems
for the ICU. These technologies are intended to provide vigilant
and nuanced care, as if an expert were at the patient’s bedside
every second, carefully calibrating treatment. Such systems
could relieve the burden on the overtaxed staff in critical-care
units. What’s more, if the technology helps patients get out
of the ICU sooner, it could bring down the skyrocketing costs
of health care. We’re focusing initially on hospitals in the
United States, but our technology could be useful all around
the world as populations age and the prevalence of chronic
diseases grows.
The benefits could be huge. In the United States, ICUs are
among the most expensive components of the health care
system. About 55,000 patients are cared for in an ICU every
day, with the typical daily cost ranging from US $3,000 to
$10,000. The cumulative cost is more than $80 billion per year.
As baby boomers reach old age, ICUs are becoming increasingly important. Today, more than half of ICU patients in the
United States are over the age of 65—a demographic group
that’s expected to grow from 46 million in 2014 to 74 million by 2030. Similar trends in Europe and Asia make this a
worldwide problem. To meet the growing demand for acute
clinical care, ICUs will need to increase their capacity as well
as their capabilities. Training more critical-care specialists is
part of the solution—but so is automation. Far from replacing
humans, AI systems could become part of the medical team,
allowing doctors and nurses to deploy their skills when and
where they’re needed most.
IN ICUs TODAY, the data from the raft of bedside monitors
is usually lost as the monitor screens refresh every few seconds. While some advanced ICUs are now trying to archive
these measurements, they still struggle to mine the data for
clinical insights.
A human doctor typically has neither the time nor the
tools to make sense of the rapidly accumulating data. But
an AI system does. It could also take actions based on the
data, such as adjusting the machines involved in crucial ICU
tasks. At Autonomous Healthcare, we’re focusing first on AI
systems that could manage a patient’s ventilation and fluids. Mechanical ventilators come into play when a patient
is sedated or suffers lung failure, a common ICU condition.
And careful fluid management maintains the proper volume of blood flowing through a patient’s circulatory system, thereby ensuring that all the tissues and organs get
enough oxygen.
Our methodologies spring from an unlikely source: the aerospace industry. Two of us, Haddad and Gholami, are aerospace
control engineers. We met at Georgia Tech’s school of aerospace engineering, where Haddad is a professor of dynamical systems and control and Gholami formerly worked as a
doctoral researcher. Bailey joined the collaboration in the
early 2000s when he was an associate professor of anesthesiology at the Emory University School of Medicine. Haddad and Bailey first worked on control methods to automate anesthesia dosing and delivery in the operating room, which we
tested in clinical studies at Emory University Hospital, in
Atlanta, and Northeast Georgia Medical Center, in Gainesville, Ga. We then set our sights on more complex and broader
control problems for the ICU. In 2013, Haddad and Gholami
founded Autonomous Healthcare to commercialize our AI
systems. Gholami is the company’s CEO, Haddad is chief science advisor, and Bailey is chief medical officer.
How is aerospace similar to medicine? Both fields involve
vast amounts of data that must be processed quickly to make
decisions while lives hang in the balance, and both require
that many tasks be performed simultaneously to keep things
running smoothly. In particular, we see a role for feedback
control technology in critical-care medicine. This technology uses algorithms and feedback to modify the behavior of engineered systems through sensing, computation, and actuation. It has become ubiquitous in the safety-critical systems of flight control and air traffic control.
However, there’s a key difference between an airplane and
a hospital patient. An airplane’s design and control is based
on well-established theories of mechanics and aerodynamics, whereas the human body involves highly complex biological systems that function and interact in ways we don’t
yet entirely understand.
Consider the management of mechanical ventilation. ICU
patients may require this support because of direct trauma,
lung infection, heart failure, or an inflammatory syndrome
such as sepsis. The ventilator alternates between forcing air
into the lungs and allowing the lungs to passively deflate. The
device can be dialed up or down to do all of the work or to
assist the patient’s own efforts.
The interaction between human and machine is a subtle
thing to manage. The human body has its own automatic
mechanism to govern breathing, in which the nervous system
triggers the diaphragm muscle to contract and pull downward
on the lungs, thus initiating the intake of air. The ventilator
must work with this innate drive; it should be synchronized
with the patient’s natural transitions between inhaling and
exhaling, and it should match the natural air volume of the
patient’s breathing.
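To see what synchronization means in signal terms, consider the airflow waveform the ventilator already measures: inhalation shows up as positive flow, exhalation as negative flow, and the patient’s own phase transitions are the zero crossings. The sketch below is a minimal illustration of that idea, not any vendor’s algorithm; the sampling rate, noise threshold, and units are all assumptions.

import math

# Sketch: finding a patient's inhale/exhale transitions in a flow waveform.
# A synchronized ventilator must switch phases near these points.
# Threshold, units (L/s), and sampling rate (50 Hz) are assumed.

def breath_transitions(flow, threshold=0.05):
    """Return sample indices where the flow signal changes sign,
    i.e. where the patient switches between inhaling (flow > 0)
    and exhaling (flow < 0). `threshold` masks sensor noise near zero."""
    transitions = []
    prev_sign = 0
    for i, f in enumerate(flow):
        sign = 1 if f > threshold else (-1 if f < -threshold else 0)
        if sign and prev_sign and sign != prev_sign:
            transitions.append(i)
        if sign:
            prev_sign = sign
    return transitions

# Two synthetic 4-second breaths sampled at 50 Hz:
flow = [math.sin(2 * math.pi * 0.25 * (t / 50.0)) for t in range(400)]
print(breath_transitions(flow))  # indices near samples 100, 200, 300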
Unfortunately, mismatches between the patient’s demand and
the machine’s delivery are all too common, which can cause
a patient to “fight the ventilator.” For example, a patient may
naturally need more time to inhale, but the ventilator transitions to exhalation prematurely. This and other synchronization problems with mechanical ventilation are associated
with longer stints on the ventilator, longer stays in the ICU, and
increased risk of death. Experts don’t yet know why asynchrony
has these detrimental effects, but patients clearly experience
discomfort when trying to breathe out while the machine is
pushing air into their lungs, and their laboring muscles experience an additional workload. In ICUs in the United States, the share of patients on ventilators who experience severe asynchrony has been estimated to be between 12 and 43 percent.

BREATHING EASIER
CRITICALLY ILL PATIENTS who need help breathing are put on mechanical ventilators [1]. These machines push air into the lungs, but the rhythm can get out of sync with natural breathing patterns, causing patients to “fight the ventilator.” A smart control system could read airflow measurements [2] and identify different types of ventilator asynchrony [3] in real time via a machine-learning algorithm. In a fully autonomous system, an adaptive controller [4] would constantly adjust the ventilator’s airflow to keep it in sync with the patient. As a step toward the goal of full autonomy, a similar system could be used as a decision-support tool in the ICU, providing recommendations that respiratory therapists could use to make adjustments.
The first step in addressing this problem is to detect it. Experienced respiratory therapists can identify different types of
asynchrony if they continuously monitor the waveforms on
a ventilator’s display screen indicating the pressure and flow.
But in an ICU, one respiratory therapist typically oversees 10 or
more patients and can’t possibly monitor all of them constantly.
At our company, we’ve designed a machine-learning framework that replicates that human expertise in detecting different types of asynchrony. To train our system, we used a
data set of waveforms from patients on ventilators, in which
each waveform had been evaluated by a panel of clinical
experts. Our algorithm learned the signatures of different
asynchrony types—such as a particular dip in the flow signal
at a specific point in time. In our first assessments of the algorithm’s performance, we focused on what’s called cycling
asynchrony, which is the most challenging type to detect.
Here the ventilator’s initiation of the exhale doesn’t match the patient’s own exhalation. The accuracy of our algorithm in detecting cycling asynchrony in a new data set matched that of human experts.

FLUID MOVEMENTS
MOST ICU PATIENTS require infusion pumps and IVs [1] to drip fluid into their veins. Getting the fluid volume right is crucial: If levels are either too low or too high in the circulatory system, serious complications can arise. A smart control system could track real-time measurements from a hemodynamic monitor [2], such as arterial blood pressure and the amount of blood pumped by the heart; the system could then feed the data into a physiological model [3] that represents how fluids move through the body’s blood vessels and tissues. In a fully autonomous system, an adaptive controller [4] could continuously adjust fluid inputs to keep the patient stable. Initially, ICU physicians could use the technology as a decision-support system that provides recommendations.
We’re now testing the algorithm at Northeast Georgia Medical Center’s ICU to detect respiratory asynchrony in real
patients and in real time. The technology has been incorporated into a clinical-decision support system, which is designed
to help respiratory therapists assess a patient’s needs. This
framework can also provide researchers with a tool to better understand the underlying causes of asynchrony and its
impact on patients. Our long-term goal is to design mechanical ventilators that can automatically adjust their own settings in response to each patient’s needs.
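The company hasn’t published its detection model, so the following is only a generic sketch of the approach described above: compute simple shape features from each breath’s flow waveform, then train an off-the-shelf classifier on expert-labeled examples. The features, the synthetic “waveforms,” and the labels are all invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def breath_features(flow):
    """Crude shape features for one breath: peak flow, minimum flow,
    when the minimum occurs, and the late-breath mean (a dip there is
    one signature of cycling asynchrony)."""
    flow = np.asarray(flow)
    n = len(flow)
    return [flow.max(), flow.min(), flow.argmin() / n,
            flow[int(0.7 * n):].mean()]

# Stand-ins for expert-labeled waveforms (label 1 = cycling asynchrony):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)
normal = [np.sin(np.pi * t) + 0.05 * rng.standard_normal(100)
          for _ in range(200)]
asynch = [np.sin(np.pi * t) - 0.5 * np.exp(-((t - 0.8) / 0.05) ** 2)
          + 0.05 * rng.standard_normal(100)
          for _ in range(200)]

X = [breath_features(w) for w in normal + asynch]
y = [0] * 200 + [1] * 200
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([breath_features(asynch[0])]))  # -> [1]

In practice the hard part is not the classifier but the labeled data and the long tail of patient variability, which is why validating against panels of clinical experts, as the authors did, matters more than the choice of model.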
WHEN YOU PICTURE AN ICU, your mental image probably
includes patients with plastic bags hanging from stands by
their bedsides, fluids continually dripping into their veins
through IVs. About 75 percent of patients require such fluid
management at some point during their stay in the ICU.
However, calibrating the correct amount of fluid is far from
an exact science. Just tracking a patient’s fluid levels is a hard
task: No existing medical sensors can directly monitor fluid
volume, so doctors rely on indirect indicators like blood pressure and urine volume. The amount of fluids that patients need
depends on their illness and medications, among other things.
Getting the fluids right is particularly important for patients
with sepsis, a life-threatening syndrome characterized by
inflammation throughout the body. In these patients, the
blood vessels dilate, thus reducing blood pressure, and fluid
leaks from the tiniest vessels, the capillaries. As a result, less
oxygen-carrying blood reaches the organs, which can cause
organs to fail and patients to die. Doctors combat sepsis by
dispensing drugs to boost blood pressure and pumping extra
fluids into patients’ circulatory systems.
It’s important to add enough fluid, but not too much—
an excess can cause complications such as pulmonary edema,
a buildup of fluid in the lungs that can interfere with breathing. Studies have shown that fluid overload is associated with
more days on mechanical ventilators, longer stays in the hospital, and higher rates of mortality.
Doctors therefore aim to maintain their patients’ fluids
at certain levels, which are based on models for an average
patient. When the doctors come through the ICU on their
rounds, they try to determine whether the patient is holding
steady at the goal level by checking the mix of gases in the
blood and monitoring blood pressure and urine output. Deciding when to add fluids and how much to add is highly subjective, and there’s considerable debate about the best practices.
An AI system could do better. Rather than basing its decisions on goals established for an average patient, it could
analyze a wide variety of physiological indicators for an individual patient in real time, and continuously dispense fluids
according to that patient’s specific needs.
At Autonomous Healthcare, we’ve developed a fully automated system that looks at indirect measurements of a
patient’s fluid levels (such as blood pressure and variation
in the volume of blood pumped out by each heartbeat) and
then feeds the data into a sophisticated physiological model.
Our system uses these measurements to assess how fluids
are moving between the body’s blood vessels and tissues,
constantly adjusting the parameters as new measurements
come in. Our proprietary adaptive controller then adjusts
the fluid-infusion settings accordingly.
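The physiological model and adaptive controller are proprietary, but the general pattern this paragraph describes can be shown with a toy version: a two-compartment model (blood and tissue), one unknown parameter (the capillary leak rate) re-estimated as each measurement arrives, and a simple proportional rule for the infusion rate. Every number below is invented, and the “measurement” is read directly from the simulation rather than inferred from blood pressure as a real system would have to do.

# Sketch: toy model-based fluid management. Two compartments (blood v,
# tissue q), an unknown capillary-leak rate estimated online, and a
# proportional controller choosing the infusion rate. All values invented.
dt = 1.0 / 60.0       # one-minute steps, in hours
k_leak_true = 0.8     # true leak rate (1/h), unknown to the controller
k_return = 0.3        # tissue-to-blood return rate (1/h), assumed known
target_v = 5.0        # desired intravascular volume (liters)

v, q = 4.0, 2.0       # true volumes; v starts low, as in sepsis
k_hat = 0.4           # controller's initial estimate of the leak rate
gain, lr = 2.0, 50.0  # control gain, estimator learning rate

for step in range(240):                           # four simulated hours
    infusion = max(0.0, gain * (target_v - v))    # L/h, proportional control
    v_prev, q_prev = v, q
    leak = k_leak_true * v                        # true (unknown) dynamics
    v += dt * (infusion - leak + k_return * q)
    q += dt * (leak - k_return * q)
    # Model prediction with the current estimate, then a gradient step
    # on the squared prediction error (the "adaptive" part):
    v_pred = v_prev + dt * (infusion - k_hat * v_prev + k_return * q_prev)
    error = v - v_pred
    k_hat -= lr * error * dt * v_prev

print(f"leak-rate estimate {k_hat:.2f} (true {k_leak_true}); volume {v:.2f} L")

A real system must also infer the volume itself from indirect signals such as blood pressure, which is where the physiological model earns its keep.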
One advantage of our technology is its attention to what
control engineers call closed-loop system stability, which
means that any perturbations to a normal state lead to only
small and fleeting variations. Many engineering applications use control systems that guarantee closed-loop stability—when an airplane runs into powerful turbulence, for
instance, the autopilot system compensates to keep the shaking to a minimum. However, most control systems for medical devices have no such guarantee. If doctors judge that a
sepsis patient’s fluid levels have dramatically dropped, they
might push a large volume of fluid into the bloodstream, perhaps overcompensating.
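For readers who want the textbook version of that guarantee (a standard statement, not the company’s specific analysis): a closed-loop system \(\dot{x} = f(x)\) with equilibrium \(x_e\), where \(f(x_e) = 0\), is asymptotically stable if there is a Lyapunov function \(V\) such that

\[
V(x_e) = 0, \qquad V(x) > 0 \;\; \text{for } x \neq x_e, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) < 0 \;\; \text{for } x \neq x_e .
\]

Then every trajectory that starts near \(x_e\) stays near it and returns to it: exactly the “small and fleeting variations” described above.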
We’ve already tested our automated fluid-management system in collaboration with William Muir, a veterinary anesthesiologist and cardiovascular physiologist. Working with
dogs that were experiencing hemorrhages, we used our system to regulate their fluid infusions. Our system successfully
kept the dogs in stable condition as measured by the volume
of blood pumped with every heartbeat.
We need to do more testing in order to win regulatory
approval for a fully automated fluid-management system for
humans. As with our work on ventilator management, we
can start by building a decision support system for the ICU.
This “human in the loop” system will present information
and recommendations to the clinician, who will then adjust
the settings of the infusion pump accordingly.
LOOKING BEYOND VENTILATION and fluid management, other
key aspects of patient care that could be automated include
pain management and sedation. In the ICU of the future, we
envision many such clinical operations being monitored,
coordinated, and controlled by AI systems that assess each
patient’s physiological state and adjust equipment settings
in real time.
To make this vision a reality, though, it won’t be enough for
engineers to produce reliable technology. We must also find
our way through many regulatory barriers and institutional
requirements at hospitals.
Clearly, regulators need to scrutinize any new autonomous medical system. We suggest that regulatory agencies
make use of two testing frameworks commonly used in the
automotive and aerospace industries. The first is in silico trials, which test an algorithm through computer simulations.
These tests are useful only if the simulations are based on
high-fidelity physiological models, but in certain applications
this is already possible. For example, the U.S. Food and Drug
Administration recently approved the use of in silico testing
as a replacement for animal testing in efforts to develop an
artificial pancreas for diabetics.
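In code terms, an in silico trial is a batch of simulations: run the controller on many virtual patients with randomized physiology and check safety criteria on every run. Here is a schematic sketch that reuses the toy fluid model from earlier; the parameter ranges and pass/fail thresholds are invented for illustration, not regulatory criteria.

import random

def simulate_patient(k_leak, k_return, v0, gain=2.0, target_v=5.0,
                     dt=1.0 / 60.0, steps=240):
    """Run the toy proportional fluid controller on one virtual patient
    and return the trajectory of intravascular volume (liters)."""
    v, q = v0, 2.0
    history = []
    for _ in range(steps):
        infusion = max(0.0, gain * (target_v - v))
        leak = k_leak * v
        v += dt * (infusion - leak + k_return * q)
        q += dt * (leak - k_return * q)
        history.append(v)
    return history

random.seed(1)
failures = 0
for trial in range(1000):   # one randomized virtual patient per trial
    traj = simulate_patient(k_leak=random.uniform(0.2, 1.5),
                            k_return=random.uniform(0.1, 0.6),
                            v0=random.uniform(3.0, 4.5))
    if min(traj) < 2.5 or max(traj) > 6.5:   # under- or over-filled
        failures += 1
print(f"{failures} unsafe runs out of 1000")

The value of the exercise depends entirely on how faithful the virtual patients are, which is why the article stresses high-fidelity physiological models.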
The second useful framework is hardware-in-the-loop testing,
where hardware stands in for the object of interest, whether
it’s a jet engine or the human circulatory system. You can
then test a device—an autonomous fluid pump, say—on the
hardware platform, which will generate the same type of
data you’d see on an actual patient’s bedside monitor. These
hardware-in-the-loop trials can show that the device performs
well in real time and in the real world. Once these technologies have been proven with stand-ins for critically ill humans,
testing can begin with real patients.
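Hardware-in-the-loop testing has the same skeleton, except the virtual patient is a physical rig and the loop runs on a real-time clock: read sensors, compute, actuate, wait for the next tick. The rig interface below is a stub invented so the loop structure is runnable; a real test bench would expose its own drivers.

import time

class RigStub:
    """Stands in for a physical circulatory-system simulator."""
    def __init__(self):
        self.volume = 4.0                         # liters
    def read_blood_pressure(self):
        return 60.0 + 12.0 * self.volume          # invented mapping, mmHg
    def apply_infusion(self, rate, dt_h):
        self.volume += (rate - 0.8 * self.volume) * dt_h

rig = RigStub()
dt_s, dt_h = 1.0, 1.0 / 3600.0                    # 1-second control ticks
for tick in range(10):
    start = time.monotonic()
    bp = rig.read_blood_pressure()                # sense
    rate = max(0.0, 2.0 * (120.0 - bp))           # compute (proportional)
    rig.apply_infusion(rate, dt_h)                # actuate
    print(f"t={tick}s  BP={bp:5.1f} mmHg  pump={rate:5.1f} L/h")
    time.sleep(max(0.0, dt_s - (time.monotonic() - start)))  # hold the tick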
To bring these technologies into hospitals, the final step is
to win the trust of the medical community. Medicine is a generally conservative environment—and for good reason. No
one wants to make changes that might threaten the health of
patients. Our approach is to prove our technologies in stages:
We’ll first commercialize decision-support systems to demonstrate their efficacy and benefits, and then move to truly
autonomous systems. With the addition of AI, we believe ICUs
can be smarter, safer, and healthier places.
↗ POST YOUR COMMENTS at https://spectrum.ieee.org/smarticu1018