Con Articles: Facial Recognition Technology is Dangerous
Article 1
Facial Recognition Technology
Represents a Threat to Privacy
By: Daniel J. Melinger (2006)
Daniel J. Melinger is the cofounder of the Kamida software company.
Within the next decade, governments, employers, and marketers will probably be able to
identify people using facial recognition technology or other forms of biometrics. The U.S.
government is already testing facial recognition technology for many uses, including
protection against terrorists and criminals. The facial characteristics of most people could
be stored in huge government databases used to track and locate suspects. Biometrics
will not only be used to search for criminals but will also be employed by businesses to
monitor workers and even the buying habits and emotions of shoppers in a mall. Although
biometrics can serve good purposes, it also poses a threat to privacy because those using
the technology will be able to get information about people's locations and habits. Some
organizations, such as the American Civil Liberties Union, therefore oppose the use of
facial recognition technology.
[The following fictional scenario could be reality within the next
decade:] Jennifer had visited a friend who lives in Medford,
Massachusetts, a small suburb of Boston, for the weekend. But it was
Sunday morning and time for her to drive back home to Philadelphia.
After driving a few blocks from her friend's house, Jennifer pulls onto the
main road in the suburb, a state-managed route replete with strip malls
and chain restaurants. At the traffic light, before Jennifer turns onto the
Massachusetts state route, a camera operated by a computer server in
the state office of Homeland Security gets a glimpse of her through her
[windshield]. The camera sends a video stream to the server, which
digitally reorients Jennifer's face, calculates her faceprint, and then
begins to search for a match in the databases to which it has access.
Within a small fraction of a second, the server's logic realizes that the
person in the car doesn't match any of the entries for the 55,000
residents of the municipality or the 1.5 million residents of Middlesex
County. Then it checks her against the remainder of the Massachusetts
records—still no match. It must access the federal database, a copy of
which is stored at the state office. A few seconds after catching her on
camera, it finds that Jennifer is a resident of Philadelphia.
If needed, the server could correlate this information with a
multitude of personal data, including her travel history, federal tax
information, criminal record, and public library records. However,
Jennifer's movements and habits haven't raised any major flags before.
The server's fuzzy logic assumes she's probably just traveling out of
town on business or pleasure. Her national account's warning level will
be raised a notch to indicate that she should be more closely watched
for the next fifteen hours, but this isn't enough to warrant intervention.
On the other hand, if she took State Route 3 south along the coast and
got out of her car near Pilgrim nuclear power plant in Plymouth, her
warning level would get a little hotter.
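The escalating search in this scenario, from municipal to county to state to federal records, amounts to a tiered database lookup. A minimal sketch, with entirely hypothetical tier names and data:

```python
# Hypothetical sketch of the tiered faceprint lookup in the scenario:
# search the smallest database first, escalating only on a miss.
def tiered_lookup(faceprint, tiers):
    """tiers: list of (name, database-dict) pairs, smallest first."""
    for name, db in tiers:
        if faceprint in db:
            return name, db[faceprint]
    return None, None

# Illustrative data only.
municipal = {"fp-001": "local resident"}
county = {"fp-002": "county resident"}
federal = {"fp-777": "Jennifer, Philadelphia PA"}

tier_list = [("municipal", municipal), ("county", county), ("federal", federal)]
print(tiered_lookup("fp-777", tier_list))  # found only at the federal tier
```

The design reflects the scenario's logic: most drivers match a local record quickly, so the expensive federal query runs only after every smaller tier has missed.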
A Way to Identify People
Facial recognition technology is hardware and software that
allows computers to identify people by their faces. It is a subset of
biometrics, other examples of which include fingerprint scans, retinal
scans, and voice identification. While humans have an innate ability to
easily recognize the faces of other humans, even within large crowds
and when factors such as emotion, facial hair, and age change the
faces' appearance, it has proven difficult to teach computers to perform
this task. However, over the past decades, much effort has been put
toward making computers better at facial recognition. Computers have
been trained to identify people based on their faceprint, a combination
of fourteen to twenty-two facial attributes, such as the distance between
the eyes, the width of the nose, the depth of the eye sockets, and
properties of the cheekbones, jaw line, and chin. The technology is
beginning to become a viable way to identify people in various
scenarios such as law enforcement, security surveillance, eliminating
voter fraud, bank transaction identity verification, and computer security.
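The faceprint matching described above can be pictured as comparing short vectors of facial measurements. The attributes, values, and threshold below are illustrative assumptions, not any vendor's actual algorithm:

```python
import math

# The article describes a faceprint as fourteen to twenty-two measured
# attributes; this sketch uses just three, with made-up normalized values.
def faceprint_distance(a, b):
    """Euclidean distance between two attribute vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(a, b, threshold=0.1):
    # The threshold is an arbitrary illustrative tolerance.
    return faceprint_distance(a, b) <= threshold

# (eye distance, nose width, eye-socket depth)
stored = (0.42, 0.31, 0.27)
scanned = (0.43, 0.30, 0.27)
print(is_match(stored, scanned))  # small measurement noise still matches
```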
While some activist organizations assert that facial recognition
technology is inaccurate and ineffective and will remain so for the
foreseeable future, most biometrics experts agree that the technology
already works to a large degree and will quickly become more effective.
As privacy advocacy organizations, civil liberty organizations, and some
private individuals lobby to remove facial recognition technology from
mainstream use, government institutions, private companies, and
private security firms continue to push for its use. Recently, attitudes in
the US have caused a push for greater security at the expense of
privacy. These attitudes have been influenced greatly by the events of
September 11, 2001, and the increased fears of terrorism that followed.
Government and Private Industry
In the US, various government groups are testing facial
recognition technology, pushing for the furthering of its development,
and putting it to use. In the US government, the Biometrics
Management Office of the Department of Defense, with its Biometrics
Fusion Center, is overseeing a number of initiatives with the purpose of
testing and researching facial recognition technology.... The Office was
established in 2000 to be a proponent for biometrics in the DoD
[Department of Defense]. Though not stated in its official mission, the
Office is also a key facilitator of the biometrics industry, bringing industry,
academic, and government stakeholders together. The Department of
Defense Counterdrug Technology Development Program Office also
sponsored the Face Recognition Technology (FERET) program at the
National Institute of Standards and Technology. This program was
established to sponsor research, to create a database of facial images,
and to evaluate the accuracy of technologies from various vendors.
Facial recognition technology is being developed by hundreds of
different companies. Identix uses technology developed by Visionics, a
company that Identix merged with in 2002. Identix bills their facial
recognition technology, dubbed FaceIt ARGUS, as revolutionizing
closed-circuit television security systems and as "plug-and-play"
[meaning users will be able to operate the system quickly and easily].
Their product literature emphasizes ease of installation, effectiveness,
and compatibility with existing security systems. Another company,
Viisage, markets technology originally developed at the Massachusetts
Institute of Technology. Viisage breaks their products down into different
sectors, ranging from identity authorization in the use of personal
computers to the identification of "potential threats to public safety." After
September 11, 2001, when equities markets experienced a broad slump,
the stock prices of both Visionics and Viisage rose, showing investors'
confidence in facial recognition technology as a way to combat terrorism.
In addition to everyday use, facial recognition technology has
been utilized in several well-publicized events. In 2001, all people
entering a Tampa, Florida, stadium to attend the Super Bowl had their
faces scanned in a test of Viisage's product. This resulted in a flurry of
backlash against facial recognition technology as an invasion of privacy.
In a trial seen as more positive and successful, Visionics' facial
recognition technology was used in the Mexican presidential elections of
2000 to help minimize voter fraud through multiple registrations. Both
Viisage's and Visionics' products are currently being tested at Boston's
Logan Airport to allow frequent fliers to Britain to enjoy a smoother
security check process before flying.
Along with the many deployments and tests of facial recognition
technology, some organizations are fighting its use. The American Civil
Liberties Union [ACLU] opposes the use of facial recognition technology.
The ACLU claims that the technology is both an invasion of privacy
and ineffective, and that it should not be used for these reasons. The
Electronic Privacy Information Center seems to be carefully watching
the technology and its impacts on privacy.
How the Technology Will Be Used
To assist in evaluating how the technology will be used in 2010, I
have identified four major categories of uses: (1) government
surveillance, (2) private security surveillance, (3) marketing support,
and (4) identity verification.
Government surveillance: Over the coming years, the US
government and governments around the world will employ more facial
recognition technology to watch their citizens and to look for outsiders.
This will make these countries safer places to live but will also hinder
personal liberties and privacy. Citizens, on the whole, will reluctantly
accept the need for these technologies as the new reality.
Private security surveillance: Private companies will employ the
technology to watch their workers and to protect their assets and real
estate. This will help companies to stay competitive, but will affect the
privacy of individuals watched. It will also cause worker backlash in
some instances as workers show that they are not always comfortable
working in watched environments.
Marketing support: As the technologies get better, consumer-directed companies will use facial recognition technology in retail
settings and for marketing in general. Computers will begin to track
buyers' habits and emotions. Armed with this information, marketers will
be able to tailor offerings to consumers and to track consumer behavior.
This will increase the competitiveness of the companies that
utilize the technology and leave consumers feeling a loss of privacy.
Identity verification: Facial recognition technology will be used
more and more to verify the identity of people. Whether it is to gain
access to an office building, to log into a computer, or to register to vote,
facial recognition technology will allow those using the technology to be
more assured of whom they are working with. While the direct effects of
these uses are no doubt positive, the secondary effect is that those
using the technology will have increased information about where
people are and about their habits.
It seems inevitable that facial recognition technology will be
developed to the point where it is significantly successful. It is
questionable as to whether the technology will be advanced enough to
identify individuals in large moving crowds by 2010, but nevertheless, it
will probably be a viable technology for many uses by this date. It is also
quite likely that facial recognition will become the biometrics technology
of choice as software and hardware get better. Currently, other
processes, such as retinal scans, are commonly used, but the
advantages of facial recognition, such as the ability to scan from a
distance, will cause more people to adopt it in the future.
Melinger, D. J. (2006). Facial recognition technology represents a threat to privacy. In S. A. Kallen (Ed.), At Issue: Are Privacy Rights Being Violated? San Diego: Greenhaven Press. (Reprinted from Facial Recognition Technology, Future of the Infrastructure, http://fargo.itp.tsoa.nyu.edu, 2002). Retrieved from http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsDetailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010365210&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507&jsid=ffb28ad45d73fcb0f2b031a001f91435
Article 2
Using Face Recognition
Technology Will Not Make
Americans Safer
By: Philip E. Agre (2005)
Philip E. Agre maintains in the following viewpoint that automatic face recognition
technology, a form of biometrics that compares an individual's digitized facial image to a
computerized database, should be banned. The potential for abuse by government and
business is immense, he argues, while the benefit to the fight against terrorism is minimal.
Further, Agre insists that automatic face recognition is almost useless in identifying
terrorists in a crowd because it produces so many false positive matches. Philip E. Agre is
a professor of information studies at the University of California, Los Angeles.
As you read, consider the following questions:
When does the author consider the use of automatic face recognition
technology acceptable?
What does Agre argue will happen in countries where civil liberties hardly
exist if the use of face recognition technology becomes commonplace?
According to the author, what measures might have prevented September
11, 2001, terrorists from boarding a plane at the Boston airport?
Given a digital image of a person's face, face recognition software
matches it against a database of other images. If any of the stored
images matches closely enough, the system reports the sighting to its
owner. Research on automatic face recognition has been around for
decades, but accelerated in the 1990s. Now it is becoming practical,
and face recognition systems are being deployed on a large scale.
Some applications of automatic face recognition systems are
relatively unobjectionable. Many facilities have good reasons to
authenticate everyone who walks in the door, for example to regulate
access to weapons, money, criminal evidence, nuclear materials, or
biohazards. When a citizen has been arrested for probable cause, it is
reasonable for the police to use automatic face recognition to match a
mug shot of the individual against a database of mug shots of people
who have been arrested previously. These uses of the technology
should be publicly justified, and audits should ensure that the
technology is being used only for proper purposes.
Face recognition systems in public places, however, are a matter
for serious concern. The issue recently came to broad public attention
when it emerged that fans attending the Super Bowl [in 2001] had
unknowingly been matched against a database of alleged criminals, and
when the city of Tampa [Florida] deployed a face-recognition system in
the nightlife district of Ybor City. But current and proposed uses of face
recognition are much more widespread.... The time to consider the
acceptability of face recognition in public places is now, before the
practice becomes entrenched and people start getting hurt.
Legal Constraints Are Minimal
Nor is the problem limited to the scattered cases that have been
reported thus far. As the underlying information and communication
technologies (digital cameras, image databases, processing power, and
data communications) become radically cheaper over the next two
decades, face recognition will become dramatically cheaper as well,
even without assuming major advances in technologies such as image
processing that are specific to recognizing faces. Legal constraints on
the practice in the United States are minimal. (In Europe the data
protection laws will apply, providing at least some basic rights of notice
and correction.) Databases of identified facial images already exist in
large numbers (driver's license and employee ID records, for example),
and new facial-image databases will not be hard to construct, with or
without the knowledge or consent of the people whose faces are
captured. (The images need to be captured under controlled conditions,
but most citizens enter controlled, video-monitored spaces such as
shops and offices on a regular basis.) It is nearly certain, therefore, that
automatic face recognition will grow explosively and become pervasive
unless action is taken now.
I believe that automatic face recognition in public places, including
commercial spaces such as shopping malls that are open to the public,
should be outlawed. The dangers outweigh the benefits. The necessary
laws will not be passed, however, without overwhelming pressure of
public opinion and organizing. To that end, this article presents the
arguments against automatic face recognition in public places, followed
by responses to the most common arguments in favor.
Arguments Against Automatic Face
Recognition in Public Places
The potential for abuse is astronomical. Pervasive automatic face
recognition could be used to track individuals wherever they go.
Systems operated by different organizations could easily be networked
to cooperate in tracking an individual from place to place, whether they
know the person's identity or not, and they can share whatever identities
they do know. This tracking information could be used for many
purposes. At one end of the spectrum, the information could be leaked
to criminals who want to understand a prospective victim's travel
patterns. Information routinely leaks from databases of all sorts, and
there is no reason to believe that tracking databases will be any
different. But even more insidiously, tracking information can be used to
exert social control. Individuals will be less likely to contemplate public
activities that offend powerful interests if they know that their identity will
be captured and relayed to anyone that wants to know.
The information from face recognition systems is easily combined
with information from other technologies. Among the many "biometric"
identification technologies, face recognition requires the least
cooperation from the individual. Automatic fingerprint reading, by
contrast, requires an individual to press a finger against a machine. (It
will eventually be possible to identify people by the DNA-bearing cells
that they leave behind, but that technology is a long way from becoming
ubiquitous.) Organizations that have good reasons to identify individuals
should employ whatever technology has the least inherent potential for
abuse, yet very few identification technologies have more potential for
abuse than face recognition. Information from face recognition systems
is also easily combined with so-called location technologies such as E-911 location tracking in cell phones, thus further adding to the danger of
abuse.
The technology is hardly foolproof. Among the potential
downsides are false positives; for example that so-and-so was "seen"
on a street frequented by drug dealers. Such a report will create "facts"
that the individual must explain away. Yet the conditions for image
capture and recognition in most public places are far from ideal.
Shadows, occlusions, reflections, and multiple uncontrolled light
sources all increase the risk of false positives. As the database of facial
images grows bigger, the chances of a false match to one of those
images grows proportionally larger.
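The claim that a growing database raises the chance of a false match can be made concrete. If each stored image independently has a small probability p of falsely matching a scanned face, then the chance of at least one false match across n images is 1 - (1 - p)^n. The independence assumption and the rate below are illustrative only:

```python
def p_any_false_match(p, n):
    """Probability of at least one false match against n stored images,
    assuming each comparison independently gives a false match with
    probability p (a simplifying illustrative model)."""
    return 1 - (1 - p) ** n

p = 1e-6  # illustrative per-comparison false-match rate
for n in (1_000, 100_000, 10_000_000):
    print(n, p_any_false_match(p, n))
```

Even a one-in-a-million per-comparison error rate becomes a near-certain false match once the database holds millions of faces.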
Face recognition is nearly useless for the application that has
been most widely discussed since the September 11th [2001] attacks
on New York and Washington: identifying terrorists in a crowd. As Bruce
Schneier [cryptographer and biometrics expert] points out, the reasons
why are statistical. Let us assume, with extreme generosity, that a face
recognition system is 99.99 percent accurate. In other words, if a high-quality photograph of your face is not in the "terrorist watch list"
database, then it is 99.99 percent likely that the software will not
produce a match when it scans your face in real life. Then let us say
that one airline passenger in ten million has their face in the database.
Now, 99.99 percent probably sounds good. It means one failure in
10,000. In scanning ten million passengers, however, one failure in
10,000 means 1000 failures—and only one correct match of a real
terrorist. In other words, 999 matches out of 1000 will be false, and each
of those false matches will cost time and effort that could have been
spent protecting security in other ways. Perhaps one would argue that
1000 false alarms are worth the benefits of one hijacking prevented.
Once the initial shock of the recent attacks wears off, however, the
enormous percentage of false matches will condition security workers to
assume that all positive matches are mistaken. The great cost of
implementing and maintaining the face recognition systems will have
gone to waste. The fact is, spotting terrorists in a crowd is a needle-in-a-haystack problem, and automatic face recognition is not a needle-in-a-haystack-quality technology. Hijackings can be prevented in many ways,
and resources should be invested in the measures that are likely to
work.
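Schneier's arithmetic above can be reproduced directly; the 99.99 percent accuracy and one-passenger-in-ten-million figures are the article's own assumptions:

```python
passengers = 10_000_000
false_positive_rate = 1 - 0.9999  # 99.99% accurate => 0.01% false alarms
true_matches = 1                  # at best, one real terrorist in the crowd

# Expected number of innocent passengers falsely flagged.
false_alarms = passengers * false_positive_rate

print(round(false_alarms))  # 1000 false alarms
print(true_matches / (true_matches + false_alarms))  # ~0.1% of matches are real
```

The base-rate problem is the point: almost every alarm the system raises is false, which is what conditions security workers to ignore the alarms entirely.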
The Public Is Poorly Informed
Many social institutions depend on the difficulty of putting names
to faces without human intervention. If people could be identified just
from looking in a shop window or eating in a restaurant, it would be a
tremendous change in our society's conception of the human person.
People would find strangers addressing them by name. Prospective
customers walking into a shop could find that their credit reports and
other relevant information had already been pulled up and displayed for
the sales staff before they even inquire about the goods. Even aside
from the privacy invasion that this represents, premature disclosure of
this sort of information could affect the customer's bargaining position.
The public is poorly informed about the capabilities of the
cameras that are already ubiquitous in many countries. They usually do
not realize, for example, what can be done with the infrared component
of the captured images. Even the phrase "face recognition" does not
convey how easily the system can extract facial expressions. It is not
just "identity" that can be captured, then, but data that reaches into the
person's psyche. Even if the public is adequately informed about the
capabilities of this year's cameras, software and data sharing can be
improved almost invisibly next year.
It is very hard to provide effective notice of the presence and
capabilities of cameras in most public places, much less obtain
meaningful consent. Travel through many public places, for example
government offices and centralized transportation facilities, is hardly a
matter of choice for any individual wishing to live in the modern world.
Even in the private sector, many retail industries (groceries, for
example) are highly concentrated, so that consumers have little choice
but to submit to the dominant company's surveillance practices.
If face recognition technologies are pioneered in countries where
civil liberties are relatively strong, it becomes more likely that they will
also be deployed in countries where civil liberties hardly exist. In twenty
years, at current rates of progress, it will be feasible for the Chinese
government to use face recognition to track the public movements of
everyone in the country.
Responses to Arguments in Favor of
Automatic Face Recognition in Public Places
The civilized world has been attacked by terrorists. We have to
defend ourselves. It's wartime, and we have to give up some civil
liberties in order to secure ourselves against the danger.
We must certainly improve our security in many areas. I have said
that myself for years. The fallacy here is in the automatic association
between security and restrictions on civil liberties. Security can be
improved in many ways that have no effect on civil liberties, for example
by rationalizing identification systems for airport employees or training
flight attendants in martial arts. Security can be improved in other ways
that greatly improve privacy, for example by preventing identity theft or
replacing Microsoft products with well-engineered software. And many
proposals for improved security have a minimal effect on privacy relative
to existing practices, for example searching passengers' luggage
properly. The "trade-off" between security and civil liberties, therefore, is
over-rated, and I am surprised by the speed with which many defenders
of freedom have given up any effort to defend the core value of our
society as a result of the terrorist attack.
Once we transcend automatic associations, we can think clearly
about the choices that face us. We should redesign our security
arrangements to protect both security and civil liberties. Among the
many security measures we might choose, it seems doubtful that we
would choose the ones that, like automatic face recognition in public
places, carry astronomical dangers for privacy. At least any argument
for such technologies requires a high standard of proof.
But the case for face recognition is straightforward. They were
looking for two of the terrorists and had photographs of them. Face
recognition systems in airports would have caught them.
I'm not sure we really know that the authorities had photographs
that were good enough for face recognition, even for the small
number of suspects that they claim to have placed on a terrorist watch
list. But even if we grant the premise, not much follows from it. First, the
fact that the authorities suspected only two of the nineteen hijackers
reminds us that automatic face recognition cannot recognize a face until
it is in the database. Most hijackers are not on lists of suspected
terrorists, and even if those particular hijackers had been prevented
from boarding their planes, seventeen others would have boarded.
Shoddy Security Procedures
More importantly, security procedures at the Boston airport and
elsewhere were so shoddy, on so many fronts, that a wide variety of
improvements would have prevented the hijackings. If you read the
white paper about the hijackings from the leading face-recognition
company, Visionics, it becomes clear that face recognition is really
being suggested to plug holes in identification systems. Terrorist watch
lists include the terrorists' names, and so automatic face recognition is
only necessary in those cases where the government possesses high-quality facial photographs of terrorists but does not know their names
(not very common) or where the terrorists carry falsified identification
cards in names that the government does not know. In fact, some of the
terrorists in the recent attacks appear to have stolen identities from
innocent people. The best solution to this problem is to repair the
immensely destructive weaknesses in identification procedures, for
example at state DMV's, that have been widely publicized for at least
fifteen years. If these recent attacks do not motivate us to fix our identity
systems, then we are truly lost. But if we do fix them, then the role that
automatic face recognition actually plays in the context of other security
measures becomes quite marginal.
That said, from a civil liberties perspective we ought to distinguish
among different applications of face recognition. Those applications can
be arranged along a spectrum. At one end of the spectrum are
applications in public places, for example scanning crowds in shops or
on city streets. Those are the applications that I propose banning. At the
other end of the spectrum are applications that are strongly bounded by
legal due process, for example matching a mug shot of an arrested
person to a database of mug shots of people who have been arrested in
the past. When we consider any applications of automatic face
recognition, we ought to weigh the dangers to civil liberties against the
benefits. In the case of airport security, the proposed applications fall at
various points along the spectrum. Applications that scan crowds in an
airport terminal lie toward the "public" end of the spectrum; applications
that check the validity of a boarding passenger's photo-ID card by
comparing it with the photo that is associated with that card in a
database lie toward the "due process" end of the spectrum. The
dangers of face scanning in public places (e.g., the tracking of
potentially unbounded categories of individuals) may not apply to
applications at the "due process" end of the scale. It is important,
therefore, to evaluate proposed systems in their specifics, and not in
terms of abstract slogans about the need for security....
A Civil Liberties Threat
Your arguments are scare tactics. Rather than trying to scare people
with scenarios about slippery slopes, why don't you join in the
constructive work of figuring out how the systems can be used
responsibly?
The arguments in favor of automatic face recognition in public
places are "scare tactics" too, in that they appeal to our fear of terrorism.
But some fears are justified, and it is reasonable to talk about them.
Terrorism is a justifiable fear, and so is repression by a government that
is given too much power. History is replete with examples of both.
Plenty of precedents exist to suppose that automatic face recognition,
once implemented and institutionalized, will be applied to ever-broader
purposes. The concern about slippery slopes is not mere speculation,
but is based on the very real politics of all of the many issues to which
automatic face recognition could be applied. My argument here is
intended to contribute to the constructive work of deciding how
automatic face recognition can be responsibly used. It can be
responsibly used in contexts where the individuals involved have been
provided with due process protections, and it cannot be responsibly
used in public places. I fully recognize that literally banning automatic
face recognition in public places is a major step. The reason to ban it,
though, is simple: the civil liberties dangers associated with automatic
face recognition are virtually in a class by themselves.
Liberty is not absolute. It is reasonable for the government to curtail
liberty to a reasonable degree for the sake of the collective good.
Certainly so. The question is which curtailments of liberty provide
benefits that are worth the danger. The argument here is simply that
automatic face recognition in public places does not meet that test....
What do you have to hide?
This line is used against nearly every attempt to protect personal
privacy, and the response in each case is the same. People have lots of
valid reasons, personal safety for example, to prevent particular others
from knowing particular information about them. Democracy only works
if groups can organize and develop their political strategies in seclusion
from the government and from any established interests they might be
opposing. This includes, for example, the identities of people who might
travel through public places to gather for a private political meeting. In
its normal use, the question "What do you have to hide?" stigmatizes all
personal autonomy as anti-social. As such it is an authoritarian demand,
and has no place in a free society.
Agre, P. E. (2005). Using face recognition technology will not make Americans safer. In K. F. Balkin (Ed.), Opposing Viewpoints: The War on Terrorism. San Diego: Greenhaven Press. (Reprinted from Your Face Is Not a Bar Code: Arguments Against Automatic Face Recognition, http://polaris.gseis.ucla.edu/pagre, 2003). Retrieved from http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsDetailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010345227&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507&jsid=06125e218567f85da8df0c4f42145168
Article 3
Face-Recognition Technology
Threatens Individual Privacy
By: Jay Stanley and Barry Steinhardt (2003)
Jay Stanley is the privacy public education coordinator at the American Civil Liberties Union and
a former analyst at Forrester Research. Barry Steinhardt is associate director of the ACLU, chair
of the ACLU Cyberliberties Task Force, and cofounder of the Global Internet Liberty Campaign.
Since September 11, facial recognition systems—computer
programs that analyze images of human faces gathered by video
surveillance cameras—are being increasingly discussed and
occasionally deployed, largely as a means for combating terrorism.
They are being set up in several airports around the United States,
including Logan Airport in Boston, T.F. Green Airport in Providence,
Rhode Island, San Francisco International Airport, Fresno Airport in
California and Palm Beach International Airport in Florida. The
technology was also used at the 2001 Super Bowl, and plans are
underway to use it at the NFL championship again in 2002.
The technology is not just being used in places where terrorists
are likely to strike, however: in Tampa, Florida, it is also being aimed at
citizens on public streets. In the summer of 2001, the Tampa Police
Department installed several dozen cameras, assigned staff to monitor
them, and installed a face recognition application called FaceIt®
manufactured by the Visionics Corporation of New Jersey. On June 29,
2001, the department began scanning the faces of citizens as they
walked down Seventh Avenue in the Ybor City neighborhood.
Acting under a Florida open-records law, the ACLU was able to
obtain all existing police logs filled out by the operators of the city's face
recognition system in July and August, 2001. Those documents and
logs reveal several important things about the technology in one of its
first real-world trials:
- The system has never correctly identified a single face in
its database of suspects, let alone resulted in any arrests.
- The system was suspended on August 11, 2001, and has
not been in operation since.
- In the brief period before the department discontinued the
keeping of a log, the system generated many false positives, including
errors as obvious as confusing male and female subjects that any
human observer could easily tell apart.
- The photographic database contains a broader selection
of the population than just criminals wanted by the police,
including such people as those who might have "valuable
intelligence" for the police or who have criminal records.
The Problems with Facial Recognition Technology
Facial recognition systems are built on computer programs that
analyze images of human faces for the purpose of identifying them. The
programs take a facial image, measure characteristics such as the
distance between the eyes, the length of the nose, and the angle of the
jaw, and create a unique file called a "template." Using templates, the
software then compares that image with another image—such as a
photograph in a database of criminals—and produces a score that
measures how similar the images are to each other. The software
operator sets a threshold score above which the system sets off an
alarm for a possible match.
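The template-and-threshold matching just described can be sketched in a few lines. This is an illustrative sketch only: the feature measurements, the cosine-similarity scorer, and the 0.9 threshold are all assumptions for the example, not details of any commercial system.

```python
import math

def make_template(measurements):
    # Hypothetical template: a normalized vector of facial measurements
    # (e.g., eye distance, nose length, jaw angle).
    norm = math.sqrt(sum(v * v for v in measurements))
    return [v / norm for v in measurements]

def similarity(t1, t2):
    # Cosine similarity between two templates; 1.0 means identical.
    return sum(a * b for a, b in zip(t1, t2))

def check_against_database(probe, database, threshold=0.9):
    # Alarm on every database entry whose score clears the threshold.
    return [name for name, tmpl in database.items()
            if similarity(probe, tmpl) >= threshold]
```

Note the role of the threshold: lowering it produces more alarms (and more false positives), while raising it suppresses alarms at the cost of missing genuine matches.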
One potential problem with such a powerful surveillance system is
that experience tells us it will inevitably be abused. Video camera
systems are operated by humans, who after all bring to the job all their
existing prejudices and biases. In Great Britain, which has experimented
with the widespread installation of closed circuit video cameras in public
places, camera operators have been found to focus disproportionately
on people of color; and the mostly male (and probably bored) operators
frequently focus voyeuristically on women.
An investigation by the Detroit Free Press also shows the kind of
abuses that can take place when police are given unregulated access to
powerful surveillance tools. Examining how a database available to
Michigan law enforcement was used, the newspaper found that officers
had used it to help their friends or themselves stalk women, threaten
motorists, track estranged spouses—even to intimidate political
opponents. The unavoidable truth is that surveillance tools will inevitably
be abused.
Facial recognition is particularly subject to abuse because it can
be used in a passive way that doesn't require the knowledge, consent,
or participation of subjects. It is possible to put a camera up anywhere
and train it on people; modern cameras can easily view faces from over
100 yards away. People act differently when they are being watched,
and have the right to know if their movements and identities are being
captured. "I've seen it all," Tampa police camera operator Raymond C.
Green told the St. Petersburg Times. "Some things are really funny, like
the way people dance when they think no one's looking. Others, you
wouldn't want to watch."
This technology has the potential to become an extremely
intrusive, privacy-invasive part of American life. History shows that once
installed, this kind of a surveillance system rarely remains confined to its
original purpose. Already, in the case of face recognition, it has spread
from purportedly looking for terrorists at the high-profile Super Bowl to
searching for petty criminals and runaways on the public streets of
Tampa.
Given the problematic social consequences of going down the
path of widespread deployment of facial-recognition-enabled video
surveillance systems, proponents of the technology must at least
demonstrate that it will be highly effective in achieving the goal for which
it is being justified: combating terrorism and other crimes. However, all
prior indications have been that the technology is not effective and does
not work very well. Two separate government studies have found that it
performed poorly even under ideal conditions where subjects are staring
directly into the camera under bright lights. Several government
agencies have abandoned facial recognition systems after finding they
did not work as advertised, including the Immigration and Naturalization
Service, which experimented with using the technology to identify
people in cars at the Mexico-US border. And the well-known security
consultant Richard Smith, experimenting with FaceIT®—the same
package used by the Tampa police—found that it was easily tripped up
by changes in lighting, in the quality of the camera used, in the angle
from which a face was photographed, in facial expression, in the
composition of the background of a photograph, and by the donning of
sunglasses or even regular glasses.
The ACLU Investigation of Tampa's Facial Recognition System
Because facial recognition is such a potentially powerful and
invasive surveillance tool and because the Tampa police department's
deployment represents one of the first real-world tests of facial
recognition technology, the ACLU was eager for details on the system
and how it was being used. On August 2, 2001, the ACLU of Florida
filed a request under Florida's open-records law (the state equivalent of
the federal Freedom of Information Act) for all documents pertaining to:
- the decision-making process by which Tampa elected to
deploy the system
- camera locations
- the technical capabilities of the system being used
- the procedures, instructions and training provided to
system operators
- the contents of the image databases
- written procedures for how the identification process is
handled
- future plans for these systems
A second request was submitted on October 19, 2001. A third
letter was sent to the department on November 27.
On December 4, the ACLU was furnished with copies of police
logs filled in by system operators between July 12 and August 11, 2001;
a Memorandum of Understanding between the software manufacturer,
Visionics, and the police department; the department's Standard
Operating Procedure governing the use of the system; and a training
video.
The Results: No Hits, No Arrests, Many False Positives
The Tampa Police Department FaceIt® operator logs obtained
by the ACLU show that the system not only has not produced a single
arrest, but it also has not resulted in the correct identification of a single
person from the department's photo database on the sidewalks of
Tampa. Tampa police Detective Bill Todd not only confirmed these
results to the ACLU in phone conversations on December 17-18, but he
also acknowledged that the system has been out of operation since the
last log sheet was filled in on August 11.
The earliest logs provided by the department show activity for July
12, 13, 14, and 20, 2001. On those dates, the system operators logged
14 instances in which the system indicated a possible match. Of the 14
matches on those four days, all were false alarms. Two of the "matches"
were of the opposite sex from the person in the database, and three
others were ruled out by the monitoring officers due to differences in
age, weight or other characteristics that made the mismatch obvious to
a human observer. The rest of the false positives were simply notated
with a terse "not subject." These results are consistent with an
anecdotal report in the July 19, 2001 St. Petersburg Times that "the
alarm sounds an average of five times each night."
After July 20, 2001, the remaining logs provided by the
department are blank, and no logs were provided for dates later than
August 11. Based on conversations with representatives of the Tampa
Police, it appears that the blank logs are attributable to one of two
possibilities, or a combination of the two:
- Because of the high number of false positives, the
department changed the software's "threshold" setting that
determines how firm a match is required before an alarm is
sounded. That change resulted in far fewer false positives (but
would have also further reduced the chances that anyone in the
database who wandered in front of the department's cameras
would actually be identified by the software).
- Acting either on their own or at the direction of an internal
policy decision, the officers operating the system decided to
record only genuine matches, and not false positives. The log
sheets are blank because there were no genuine matches.
Because the system does not automatically scan the faces of
people on the sidewalks—operators must manually zoom in on a
citizen's face before it registers in the software—it would not be
surprising that system operators faced with an endless string of
negative results would spend less and less time and energy searching
out and capturing facial images, as opposed to simply watching the
video images for signs of trouble.
A reporter for the St. Petersburg Times reported on July 19 that
on the night he visited—at the apparent peak of the system's
operation—the operator captured 457 faces out of the estimated
125,000 people who visit Ybor City on a typical Friday. If that proportion
were to decline further, the already tiny chances for obtaining a genuine
match with a photo in the database would shrink even more.
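The arithmetic behind that "tiny chance" is worth making explicit, using the figures reported above:

```python
# Figures reported by the St. Petersburg Times for a typical Friday night.
faces_captured = 457
visitors = 125_000

# Chance that any given visitor's face is even captured for matching.
capture_rate = faces_captured / visitors
print(f"{capture_rate:.2%}")  # about 0.37%
```

Even before any recognition error is considered, fewer than four visitors in a thousand were scanned at all.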
Detective Todd explained the lack of any log sheets after August
11 by confirming that the FaceIt® system was taken out of service. (A
notation of "N/A" on the August 11 log sheet may have indicated that
the system was used only for a test or demonstration that day, he said.)
Todd explained the decision as a result of a police redistricting, which
necessitated training new officers in the system's operation. He said that
the department planned to resume use of the system at some point in
the future. However, it is reasonable to assume that the professionals in
the Tampa PD would not have let the system sit unused for so long
because of a mere redistricting process had they previously found facial
recognition to be a valuable tool in the effort to combat crime.
One Nation, Under Suspicion?
The department's written guidelines for "Utilization of Face-IT®
Software" reveal several other interesting points about Tampa's use of
face recognition. First, the guidelines state that photographs are entered
into the database if the subjects are wanted by the police; if "it is
determined that valuable intelligence can be gathered from contact" with
a person; or "based upon an individual's prior criminal activity and
record."
"Twenty percent of the criminals commit 80 percent of the crimes,"
the guidelines state. "It is the intention of the Tampa Police Department
to identify those subjects through the use of this software. Through this
proactive approach, the Tampa Police Department can deter criminal
activity prior to a criminal offense being committed."
Far from protecting citizens against the next terrorist strike or
other violent crimes, the department's guidelines thus make clear that
the system was used in an attempt to assist the full range of cases in
which local police are involved. Not just terrorists and violent criminals,
but anyone who might have "valuable intelligence" for the cop on the
beat, according to these guidelines, will have his or her photograph
entered into a police database so that they may set off an alarm
whenever they visit a public place that is within the lens of a department
camera.
The move to permanently brand some people as "under
suspicion" and monitor them as they move about in public places has
deep and significant social ramifications. If we are to take that path—a
step that the ACLU opposes—we should do so as a nation, consciously,
after full debate and discussion, and not simply slide down that road
through the incremental actions of local police departments.
Vast Potential for Abuse
The documentary record obtained by the ACLU of the Tampa
Police Department's experience with facial recognition technology adds
an important new piece of evidence that the technology does not deliver
security benefits sufficient to justify the Orwellian dangers it
presents. What the logs show—and fail to show—tells us that face
recognition software performs at least as badly in real-world conditions
as it has in the more controlled experiments that have been carried out.
The only possible justification for deploying such an ineffective
technology would be that it somehow deters crime because citizens
believe that it works. There are several problems with that argument.
First, it is premised on a Wizard of Oz-style strategy of hiding the truth
about facial recognition technology from the public—a stance that is not
compatible with the vital importance of public scrutiny of the tools,
technologies and techniques that police departments deploy.
Second, even if face recognition cameras did deter wanted criminals
from frequenting the areas under surveillance, all that would happen is
that the criminals would move to other locations. Indeed, sociological
studies of closed circuit television monitoring of public places in
Britain—where residents are widely aware of the cameras—have shown
that it has not succeeded in reducing crime.
Given the system's poor performance in Tampa—which the police
department there has implicitly recognized in their decision to stop
actively using it—the ACLU hopes that police departments around the
nation will step back, objectively examine the costs and benefits of the
system, and reject them as ineffective. Other cities have voted to deploy
these systems, including Virginia Beach, Palm Springs and Boulder City,
Nevada. We ask those cities to consider the documentary evidence
from Tampa and not waste precious resources on this illusory path
toward public safety.
The worst-case scenario would be if police continue to utilize
facial recognition systems despite their ineffectiveness because they
become invested in them, attached to government or industry grants
that support them, or begin to discover additional, even more frightening
uses for the technology. The continued use of facial recognition
technology under these circumstances would divert limited law
enforcement resources from more productive pursuits, create a
dangerous false sense of security, and ultimately threaten the privacy,
freedom and safety of everyone in America.
Stanley, J., & Steinhardt, B. (2003). Face-Recognition Technology Threatens Individual Privacy. In J. D. Torr (Ed.), Current Controversies: Civil Liberties. San Diego: Greenhaven Press. (Reprinted from Drawing a Blank: The Failure of Facial Recognition Technology in Tampa, Florida, www.aclu.org, 2002). Retrieved from
http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsDetailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010295224&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507&jsid=8edeeed89f15cf44bee30b533d118768
Pro Articles: Facial Recognition Technology is Beneficial
Article 1
Facial Recognition Technology Can
Enhance Homeland Security
By: (Atick, 2004)
In October 2001, when Joseph Atick issued the following remarks, he was chief
executive officer of the Visionics Corporation, a leader in facial recognition
technology. In 2002 Visionics merged with Identix, an industry leader in fingerprint
identification technologies, and Atick now serves as Identix's president and CEO.
Facial recognition technology combines video cameras with special software in
order to automatically scan crowds for wanted individuals. The video camera captures
images of individuals' faces, which are automatically compared against a database of
suspected terrorists. Relevant authorities are alerted if a match is found. Such a system
does not permanently record the faces of those it scans, so the technology cannot be used
to track the movement of ordinary citizens. Used at airports and other areas where
security is a major concern, facial recognition technology has the potential to enhance
security without resorting to inconvenient and invasive methods such as ID or fingerprint
checks.
I'd like to start by reiterating what I see as the cornerstone of our defense of the
civilized world against crime and terrorism in this new era. I believe it's going to be our
ability, in the context of a free society, to identify those who pose a threat to public safety
and to prevent their actions.
Essential to the success of this defense strategy are intelligence data and
identification technology, such as facial biometrics. The fact is terrorism and terrorists do
not emerge overnight. They require indoctrination. They require constant reinforcement
over an extended period of time. This affords intelligence agencies opportunities to
establish their identities and to build watch lists. Ultimately terror is not faceless....
[Terrorists] are at large, committing a crime, whether it's murder or [some]
terrorist activity, and there's today no technology, apart from what I'm talking about in
terms of biometrics, that can stop these individuals from entering airports and facilities
and conducting their activities.
According to published news reports, two of the terrorists of the September 11
[2001, terrorist attacks on America] were already on a watch list, sought by the FBI ... but
we had no facial recognition mechanism to stop them from entering the airport. A third
was already known to the French authorities. I suspect when the dust settles, we'll find
out that several others were already known to the German, Belgian, French, British, and
Israeli intelligence organizations that have been collecting data about terror.
The technological challenge
While there is no guarantee that all terrorists will be known in advance, at the
very least we have the responsibility to try to prevent the actions of the thousands already
known, just like these. Given a watch list, ... the question becomes: does the technology
exist that can spot the individuals as they enter a country or attempt to board a plane?
The demands on such a technology are very high. It has to be able to do three
things. One is scale, in that it should work across many security checkpoints at hundreds
of airports and borders and not at just one location. It has to work as part of a network of
cameras—it's not enough to just plug a hole in one door and leave every other door open.
You have to be able to scale the application.
Second, to give you a sense of the scale of the technological challenge involved: you
have to be able to sift through about six hundred million faces in the United States alone
as they board planes, as they enter into security checkpoints each year, and spot the
terrorists and criminals among them without interfering with the passenger flow. We do
not want to create Draconian methods and barricades. The public will not accept that, nor
will the airlines, nor will the airport authorities. We have to maintain throughput.
Third, we have to function without infringing on the rights or inconveniencing the
honest majority. We have to deliver a solution to a problem but without giving up
something we have cherished so much, which is our privacy.
I believe there is good news here, which is that there is a technology. It is
computerized facial recognition and facial scanning, such as the FaceIt® face recognition
technology, which I can speak about because I'm not only the CEO of Visionics, the
company that has commercially developed the technology, but I'm one of its main
inventors. I've spent the last 14 years working on facial recognition and identification
technologies, starting with my days in academia. I used to be the head of two laboratories
where the human brain was studied to try to explore how we solve this problem.
The technology works as follows (it's very simple): you have a standard camera—
it could be any video camera. It connects to what's called a FaceIt appliance, a small box
where facial recognition runs. This technology captures each of the faces it sees in front
of it, locates them in a crowd. It analyzes the face and creates a mathematical code, a
digital code called a faceprint, which is essentially a description of the relationships
between the landmarks of your face. It's some analytical measurement of the skull as it
protrudes through the skin of your face. So it's some number, some mathematical
relationship that's called a faceprint. It's only about 84 bytes of data, less than two
sentences in an e-mail you send to a friend—that's what captures your identity.
Now this faceprint is encoded, encrypted, and can be sent by a network
connection to a database where a watch list exists, a most-wanted database, for matching.
The faceprint is a code that only a computer can interpret. It's encrypted; it cannot be
used to reconstitute the image of a face. Given the faceprint, you cannot see what the face
looks like. It's unique to a given face, and it does not change with age, lighting, or
viewing conditions. It ignores facial hair and other superficial changes of the face. In a
sense, it's a fingerprint in your face.
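To make the "84 bytes" concrete: 84 bytes is exactly 21 single-precision floats. The sketch below is purely illustrative; the actual FaceIt encoding is proprietary, and the 21-float layout is an assumption chosen only to match the stated size.

```python
import struct

def encode_faceprint(features):
    # Hypothetical faceprint: 21 single-precision floats (21 * 4 = 84 bytes)
    # describing relationships between facial landmarks.
    assert len(features) == 21
    return struct.pack("21f", *features)

def decode_faceprint(blob):
    # A matcher can unpack the numbers, but 84 bytes of ratios is far too
    # little information to rebuild a face image from.
    return list(struct.unpack("21f", blob))
```

Eighty-four bytes really is "less than two sentences in an e-mail," which is why shipping only the faceprint over the network is cheap.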
Let's look at it at a system level. These cameras can exist at the security
checkpoints as people are walking through them. This is a controlled environment, so you
can control the lighting as people walk through it. The camera automatically captures the
face, and through the appliance, encodes it into a faceprint, and through the network
sends it to a matcher that compares the faceprints against the watch list of the most
wanted. It could be in Washington; it could be in the airport.
An alarm system, not a video recording system
If a match is successful and beyond a certain level of confidence, then it sends a
message to an alarm system. The system is similar to burglar or fire alarms. They are
monitored by a central agency, which says, "there's an alarm that happened, let me
check." Video will not be shipped to that location. [Instead,] at the point when the alarm
happens, an image of the person going through the security checkpoint and an image
from the database appear on the screen in front of the person in the control room monitoring the
alarm. If the person believes that that is a true match, then they can signal back via a
wireless connection to the airport or back via whatever mechanism is appropriate to the
security guard at the gate and ask them to intercept and interview that passenger.
I want to emphasize if there is no match, then there is no memory—the image is
dropped. This is not a recording system. It does not record any video, nor will you see
any video from the other side. All that is shipped by the network is the 84 bytes of data.
The system does not record, store, or alter the watch-list database in any way. The
watch-list database cannot be hacked into because it only accepts faceprint queries; it
does not accept any delete, add, or change operations.
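The alarm flow Atick describes (match the faceprint against the watch list, alert a human monitor on a confident match, otherwise discard) can be sketched as follows. The byte-overlap scorer and the 0.95 threshold are stand-ins for the proprietary matcher, and the class layout is an assumption for the example.

```python
from dataclasses import dataclass

@dataclass
class WatchListEntry:
    name: str
    faceprint: bytes  # precomputed 84-byte code

def score(fp_a, fp_b):
    # Toy similarity: fraction of matching bytes (stand-in for the real matcher).
    return sum(a == b for a, b in zip(fp_a, fp_b)) / len(fp_a)

def process_faceprint(faceprint, watch_list, threshold=0.95):
    # Return the watch-list entry to alarm on, or None.
    # On None, the caller discards the faceprint: no match, no memory.
    best = max(watch_list, key=lambda e: score(faceprint, e.faceprint),
               default=None)
    if best is not None and score(faceprint, best.faceprint) >= threshold:
        return best
    return None
```

The design point the sketch captures is that matching is a pure query: nothing is written back to the watch list, and an unmatched faceprint simply goes out of scope.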
Over the years, we have seen successive technologies adopted to enhance
security. Today at security checkpoints like these, X-ray luggage scanners, metal
detectors, chemical trace detectors are deployed to check for concealed weapons and
explosives on our body or in our carry-on luggage. I see facial scanning and matching
against the watch list as an integral component in tomorrow's airport security systems. I
believe it's time to ensure that airports are no longer safe havens for criminals and
terrorists. The American public agrees. In a recent Harris Poll conducted after September
11, 86 percent of the public endorsed the use of facial recognition to spot terrorists.
Still, there have been some criticisms of this technology. I would like to quickly
address those. On the issue of privacy, it's important to emphasize that the FaceIt surveillance
system is not a national ID. It does not identify you or me. It is simply an alarm system
that alerts when a terrorist on a watch list passes through a metal detector at the airport. If
there is no match, I repeat, there is no memory.
Furthermore, such a system delivers security in a nondiscriminatory fashion. This
is very important. FaceIt technology is based on analytical measurements that are
independent of race, ethnic origin, or religion. It is free of human prejudices or
profiling. It does not care where you come from and what your skin color looks like. We
have gone further, actually, and have called for congressional oversight and federal
legislation to ensure the watch lists contain only individuals who threaten public safety
and to penalize for misuse of such technology down the line. I believe Congress will take
action in due time, but at the moment its priorities are the real and present danger of
terrorism, not the theoretical potential of misuse down the line.
The technology will work
Another objection concerns the effectiveness of the technology. Often the same
people who raise the privacy objection also argue that the technology is
ineffective. Some have used old data, for
example, going back to a 1996 INS [Immigration and Naturalization Service] study. I'll
give you the facts about that study. INS began in 1996 to try a mechanism to allow
people to expedite their passage through the border. That was a very ambitious program
early on in 1996. However, through reorganization of INS, the control of the border for
vehicles was assigned to the Department of Transportation. As the Department of
Transportation had no experts in biometrics, they suspended the project without any data
being collected, without any results being analyzed. That was 1996; the world was far
different then than it is today.
They also used data out of context, such as a Defense Department study, which, in
fact, was a comparative analysis to check which algorithms are worthy of adoption for
the embassy security project. In fact, DARPA [Defense Advanced Research Projects
Agency] ended up recommending to Congress a $50 million four-year project called
Human ID at a Distance, to adopt facial recognition for needs in embassies outside the
United States.
So a lot of talk about using this type of data has been out of context and old and
has said, without explaining it, that the technology does not work. I have two responses to
that. First, technology is constantly evolving and advancing. Anybody who is in the
science and technology business knows that the state of the art today is a quantum leap from
where it was even a year ago, let alone five years ago. And of course, with the accelerated
R&D initiatives under way around the world with university people as well as industrial
people working together, the technology will rapidly become even more reliable and
robust. It's a matter of time, whether it's this year, next year, the year after, it will be
there.
FaceIt has already been used in many real-world environments and has produced
significant results: the Mexican election system, police mug shot systems in many
places around the world, criminal alarm systems in London, Birmingham, Iceland
International Airport, Tampa, and so on.
But this is not my main point on this issue. My main point is this: facial scanning
at airports is a tool, just like metal detectors and luggage scanners. They enhance security
without having to be technologically perfect. A facial scanning system at the security
checkpoint will alert the security guard to investigate, just like they do today when the
metal detector beeps. Such a system will deter terrorists from boarding planes, just like
metal detectors deter them from taking weapons on board, even though we all know
metal detectors or luggage scanners are nowhere near a hundred percent accuracy. So if
you say that facial recognition is not a hundred percent, well then, let's go ahead and take
out all metal detectors and all luggage scanners, and let's see what happens to airport
security.
We owe it to the traveling public to do everything in our capacity to ensure their
safety. We have the technology today, as a nation, to peacefully and responsibly make a
difference in the war against terror and to restore the public's trust in the travel process,
without a cost, in my opinion, to the privacy of the honest majority. I see no legitimate
reason why we should not do it.
Atick, J. (2004). Facial Recognition Technology Can Enhance Homeland Security. In J. D. Torr (Ed.), At Issue: Homeland Security. San Diego: Greenhaven Press. (Reprinted from Surveillance Technology: Tracking Terrorists and Protecting Public Places, Spectrum Online, 2001). Retrieved from
http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsDetailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010265210&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507&jsid=b0f40786ed8f2d8b50cd6a3e438251b5
Article 2
Using Face Recognition Technology
Will Make Americans Safer
By: (Woodward, 2005)
Face recognition technology—a form of biometrics—compares the digital image of an individual's
face to a computerized database and provides instantaneous identification. In the following
viewpoint, John D. Woodward Jr. argues that using face recognition to control access to sensitive
facilities at airports, prevent identity theft and fraudulent use of travel documents, and identify
known or suspected terrorists can help safeguard America from terrorist attacks. He maintains
further that as face recognition technology improves, it will provide even greater protection. John D.
Woodward Jr., a former CIA operations officer, is a senior policy analyst at RAND, a nonprofit
policy analysis and research institution.
As the nation recovers from the [terrorist] attacks of September 11, 2001, we must
rededicate our efforts to prevent any such terrorist acts in the future. Although terrorism
can never be completely eliminated, we, as a nation, can take additional steps to counter
it. We must explore many options in this endeavor. Among them, we should examine the
use of emerging biometric technologies that can help improve public safety. While there
is no easy, fool-proof technical fix to counter terrorism, the use of biometric technologies
might help make America a safer place.
"Biometrics" refers to the use of a person's physical characteristics or personal
traits to identify, or verify the claimed identity of, that individual. Fingerprints, faces,
voices, and handwritten signatures are all examples of characteristics that have been used
to identify us in this way. Biometric-based systems provide automatic, nearly
instantaneous identification of a person by converting the biometric—a fingerprint, for
example—into digital form and then comparing it against a computerized database. In
this way, fingerprints, faces, voices, iris and retinal images of the eye, hand geometry,
and signature dynamics can now be used to identify us, or to authenticate our claimed
identity, quickly and accurately. These biometric technologies may seem exotic, but their
use is becoming increasingly common. In January 2000, MIT Technology Review named
biometrics as one of the "top ten emerging technologies that will change the world." And
after September 11th, biometric technologies may prove to be one of the emerging
technologies that will help safeguard the nation....
Controlling Access
Access control to sensitive facilities can be improved by using biometric-based
identifiers. In other words, instead of identifying an individual based on something he has
(a badge), or something he knows (a password or a PIN), that person will be identified
based on something he is. For example, instead of flashing a badge, an airline worker
with a need to access sensitive areas of airports could be required to present a biometric,
say his iris, to a sensor. From a foot away and in a matter of seconds, this device captures
the person's iris image, converts it to a template, or computer-readable representation, and
searches a database containing the templates of authorized personnel for a match. A
match confirms that the person seeking access to a particular area is in fact authorized to
do so. This scenario is not science fiction. Such a system has been used at Charlotte-Douglas International Airport in North Carolina.
While not foolproof, such a biometric system is much harder to compromise than
systems using a badge or badge plus PIN. As such, a biometric system to authenticate the
identity of individuals seeking access to sensitive areas within airports or similar facilities
represents a significant increase in security. And to the extent that terrorist acts can be
thwarted by the ability to keep unauthorized individuals out of these sensitive areas, this
improvement in physical security could contribute directly to a decrease in the terrorist
threat.
Preventing Immigration Fraud/Identity Theft
In addition to failures to authenticate the identity of airport employees, failures to
accurately identify individuals as they cross through our borders can also contribute to a
terrorist attack. It is important to ensure that necessary travel documents are used only by
the person to whom they were issued. Like badges and tokens, passports, visas, and
boarding passes can be forged, misplaced, or stolen. While anti-fraud measures are built
into the issuance of such documents, there is room for improvement. A biometric
template of, for example, one's fingerprint (or other biometric) could be attached to the
document on a bar code, chip, or magnetic strip, making it more difficult for someone to
adopt a false identity or forge a travel document. To ensure security, the biometric should
be encrypted and inserted into the document by a digital signature process using a trusted
agent, such as a U.S. embassy's visa section.
In addition to helping prevent fraud or identity theft, we can use biometrics to
make it easier for certain qualified travelers to identify themselves. For example, the
Immigration and Naturalization Service (INS) currently uses biometrics in the
Immigration and Naturalization Service Passenger Accelerated Service System
(INSPASS). Under INSPASS, over 45,000 international travelers, whose identities and
travel papers have been vetted, have voluntarily enrolled in a system that verifies their
identity at ports of entry using the biometric of hand geometry. By allowing these
frequent travelers to pass through immigration quickly, INSPASS enables INS officers to
devote more time and attention to problem cases.
Identifying Known or Suspected Terrorists
As the criminal investigation of the September 11th attacks appears to
demonstrate, some of the terrorists were able to enter the United States using valid travel
documents under their true identities, passing with little difficulty through immigration
procedures at U.S. ports of entry. Once in the country, they patiently continued their
planning, preparation, training, and related operational work for months and in some
cases years until that fateful day. Once inside the United States, the terrorists cleverly
took advantage of American freedoms to help carry out their attacks.
According to media reports, however, at least three of the suicide attackers were
known to U.S. authorities as suspected terrorists. In late August 2001, the Central
Intelligence Agency (CIA) passed information to the INS to be on the lookout for two
men suspected of involvement in terrorist activities. The CIA apparently obtained
videotape showing the men, Khalid Almihdhar and Nawaf Alhazmi, talking to people
implicated in the U.S.S. Cole bombing. The videotape was taken in Kuala Lumpur,
Malaysia, in January 2000. It is not clear when the CIA received it.
When the INS checked its database, it found that Almihdhar and Alhazmi had
successfully passed through INS procedures and had already entered the United States.
The CIA asked the Federal Bureau of Investigation (FBI) to find them. But with both
men already in the United States, the FBI was looking for two needles in a haystack. The
FBI was still seeking the two when the hijackers struck. Khalid Almihdhar and Nawaf
Alhazmi are believed to have been hijackers on American Airlines flight 77, which
crashed into the Pentagon.
As the above details illustrate, we need a better way to identify individuals whom
we know or suspect to be terrorists when they attempt to enter the United States. The use
of biometric facial recognition is one way to make such identifications, particularly when
U.S. authorities already have a photograph of the suspected terrorist whom they seek.
FaceCheck
Biometric facial recognition systems could be immediately deployed to help
thwart future terrorist acts. Such a "FaceCheck" system, the term I use for the specific
counterterrorism application discussed in this paper, can be done in a way that uses
public safety resources effectively and efficiently and minimizes inconvenience and
intrusiveness for the average traveler.
In general, facial recognition systems use a camera to capture an image of a
person's face as a digital photograph. In the most common form of facial recognition, this
image is manipulated and reduced to a series of numbers that represent the image in
relation to the "average" face. These numbers are often referred to as a template, which is
then instantly searched against a "watchlist," or computerized database of suspected
terrorists' templates. This search seeks to answer the question, "Is this person in the
watchlist database?" A computer-generated match or "hit" alerts the authorities to the
presence of a potential threat. The value of such a system in helping to prevent
individuals such as Khalid Almihdhar and Nawaf Alhazmi from entering the country is
clear. Indeed, according to the Washington Post, a government committee appointed by
Secretary of Transportation Norman Y. Mineta to review airport security measures will
recommend that facial recognition systems be deployed in specified airports to improve
security.
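The template matching that Woodward describes can be sketched in a few lines. The following is a toy illustration only, not any deployed system: a "face" is reduced to its deviations from an average face, and the resulting template is compared against a watchlist by distance, with a hit reported only when a stored template falls within a match threshold. The function names, the flat pixel lists, and the threshold value are all assumptions made for the sketch.

```python
import math

def to_template(face_pixels, average_face):
    # Reduce an image to a template: each value is the pixel's
    # deviation from the "average" face, as the article describes.
    return [p - a for p, a in zip(face_pixels, average_face)]

def search_watchlist(template, watchlist, threshold):
    # Return the best-matching watchlist entry, or None if no
    # stored template is within the match threshold.
    best_name, best_dist = None, threshold
    for name, stored in watchlist.items():
        dist = math.dist(template, stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

average = [0.5, 0.5, 0.5, 0.5]
watchlist = {"suspect": to_template([0.9, 0.1, 0.8, 0.2], average)}

# A slightly different photo of the same face still matches...
probe = to_template([0.88, 0.12, 0.79, 0.21], average)
print(search_watchlist(probe, watchlist, threshold=0.5))

# ...while an unrelated face does not.
stranger = to_template([0.5, 0.5, 0.5, 0.5], average)
print(search_watchlist(stranger, watchlist, threshold=0.5))
```

Real systems use far richer representations (e.g., projections onto learned basis faces), but the "convert to numbers, then search a database" shape is the same. A computer-generated "hit" is only an alert; as the article stresses, a human confirms it.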
Operational Framework
Controlling access to sensitive facilities, as well as preventing immigration fraud
and identity theft, can be accomplished with a variety of biometric systems. Such systems
can accommodate users and are relatively easy to incorporate into current security
systems (i.e., adding a digitally signed, encrypted biometric bar code to existing travel
documents or badges). Moreover, the technology is readily available.
Identifying known or suspected terrorists presents a greater challenge. While
fingerprint and other biometric systems could be used to identify these individuals,
government authorities might find it difficult to collect the fingerprints or iris scans of
suspected terrorists in order to build the database against which to compare an unknown
individual. Facial recognition biometric systems, however, offer a way around this
problem. Specifically, facial recognition systems will allow the identification of a
suspected or known terrorist even if the only identifying information we have is a
photograph....
How FaceCheck Works
Although facial recognition is not a perfect technology, we should not let the
perfect become the enemy of the good. The overall challenge is to make it better.
Fortunately, gifted scientists and engineers are working on this challenge, and in light of
the September 11th attacks, the government is likely to make additional resources
available to encourage research, development, testing, and evaluation. In the meantime,
we can use facial recognition operationally in a way that minimizes its weaknesses. The
system works best when environmental factors such as camera angle, lighting, and facial
expression are controlled to the maximum extent possible. We must apply this lesson to
our operational framework.
If a person (including a terrorist) is coming to the United States from overseas, he
must pass through an immigration checkpoint at the port of entry. At this checkpoint, the
INS official scrutinizes the person, asks questions, and inspects the person's travel
documents. The official then makes a decision as to whether the person gets into the box,
i.e., enters the United States. This immigration checkpoint is one of the nation's vital first
lines of defense against terrorist entry. From the perspective of counterterrorism, this
checkpoint is a chokepoint where the would-be terrorist is at his most vulnerable. This is
the first and probably only place in the United States where he will be closely scrutinized
by trained federal officials. Here is how FaceCheck can make the checkpoint a more
formidable bastion.
An individual processing through an immigration checkpoint at a port of entry
should be subject to a FaceCheck whereby he would be required as part of immigration
processing to pose for a photograph under completely controlled conditions. This way we
minimize facial recognition's technological imperfections, which derive in large measure
from attempting to use the system to find a face in a crowd. The photograph would then
be processed by the facial recognition system and run against a watchlist database of
suspect terrorists. If the system indicates a match, this result would be confirmed by
visual inspection by the authorities, and the person could be taken to a secondary
interview for heightened scrutiny.
The Strategic Use of FaceCheck
Facial recognition systems do not necessarily have to be implemented to process
every individual seeking to enter the United States. Rather, the authorities should use
FaceCheck in a more strategic way. This would include using it randomly; in targeted
ways; and in conjunction with other information. For example, FaceCheck could be run
on every so many people from a given flight. It could be used at different ports of entry at
different times and for different flights. Similarly, FaceCheck teams could deploy to
specific ports of entry at specific times to target a specific flight in light of threat
information. Testers—human guinea pigs whose images have been entered into the
watchlist database—should be included in the immigration processing to rigorously
evaluate the system: How well did FaceCheck do in identifying suspects?
Moreover, while we do not have to use the system on all passengers entering the
United States, we should consider setting up FaceCheck stations at ports of entry and
have passengers pose for photographs as though the system were in continuous use. In
this way, we keep terrorists guessing as to where the systems are actually deployed or in
use. We should also experiment with FaceCheck systems using closed-circuit
surveillance cameras to capture images clandestinely at certain ports of entry. In this way,
we can learn how well such systems work in realistic operational environments and gain
information to improve their technical capabilities. Again, we do not need to inform
passengers as to where such systems are actually deployed.
We also need to consider using FaceCheck for visa processing at our embassies
and consulates overseas. We could easily require a visa applicant to submit to a
photograph taken under controlled conditions. We could then run a search against the
watchlist database. Similarly, we do not need to inform visa applicants overseas whether
we are actually running FaceCheck.
Dedicated, highly trained terrorists may be able to defeat facial recognition
systems. One technique may be for a terrorist to undergo cosmetic surgery to alter his
facial features. As a result, he will not match his database photograph. Similarly,
terrorists may try to enter the United States illegally by crossing the relatively porous
borders with Canada and Mexico. But although facial recognition systems might be
defeated by a surgeon's skill or an illegal border crossing, at least we force terrorists to
take additional steps that drain their resources and keep them on the defensive.
Security vs. Civil Liberties
Though these facial recognition systems are not technically perfected, they are
improving. There is little reason to doubt that as the technology improves, it will
eventually be able to identify faces in a crowd as effectively as it currently identifies a
face scanned under controlled circumstances. And while civil libertarians might decry the
use of this technology as an invasion of privacy, the key lies in balancing the need for
security with the need to protect civil liberties. In this regard, three brief points need to be
made.
First, we do not have a constitutional right to privacy in the face we show in
public. The United States Supreme Court has determined that government action
constitutes a "search" when it invades a person's reasonable expectation of privacy. But
the Court has found that a person does not have a reasonable expectation of privacy in
those physical characteristics that are constantly exposed to the public, such as one's
facial features, voice, and handwriting. Therefore, although the Fourth Amendment
requires that a search conducted by government actors be "reasonable," which generally
means that the individual is under suspicion, the use of facial recognition does not
constitute a search. As a result, the government is not constrained, on Fourth Amendment
grounds, from employing facial recognition systems in public spaces. Although the use of
facial recognition may generate discussion of the desirability of enacting new regulations
for the use of the technology, such use is allowed under our current legal framework.
Secondly, current legal standards recognize that we are all subject to heightened
scrutiny at our borders and ports of entry. The "border exception" to the Fourth
Amendment recognizes "the longstanding right of the sovereign to protect itself by
stopping and examining persons and property crossing into this country." Accordingly,
such searches are reasonable and do not require a warrant, probable cause, or even
reasonable suspicion. When we transit our borders, therefore, the authorities can closely
scrutinize our person and property in ways that they could not do in another setting. Even
within our own borders, the law requires airport facilities to conduct security screening of
passengers' persons and personal effects, and it is unlawful even to make jokes about
threats on airport property.
Finally, it is worth noting that facial recognition systems are not relied upon to
make final determinations of a person's identity. Rather, the system alerts the authorities
so that additional screening and investigation can take place. And though the system will
make false matches that will subject innocent passengers to additional questioning and
scrutiny, the current system routinely does the same....
Biometrics Can Make America Safer
There is no high-tech silver bullet to solve the problem of terrorism. And it is
doubtful that facial recognition or other biometric technologies could have prevented the
terrorist attacks on September 11th. But to the extent we can improve access control at
sensitive facilities such as airports, reduce identity theft and immigration fraud, and
identify known or suspected terrorists, then we make terrorism more difficult in the
future. Biometrics is one technology that can help us achieve the goal of a safer America.
Woodward, J. D. (2005). Using Face Recognition Technology Will Make Americans Safer. In K. F. Balkin
(Ed.), Opposing Viewpoints. The War on Terrorism. San Diego: Greenhaven Press. (Reprinted from
Biometrics: Facing Up to Terrorism, www.rand.org, 2001) Retrieved from
http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsD
etailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=
&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010345226
&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507
&jsid=9b895f655aae8630aa5535d5baf865c5
Article 3
Physical Characteristic Recognition
Technology Can Be Used to Preserve
Privacy Rights
By: (Singleton, 2006)
Solveig Singleton is a lawyer and senior analyst with the Competitive Enterprise Institute's Project
on Technology and Innovation. She specializes in the analysis of privacy, electronic commerce, and
telecommunications.
Americans are accustomed to presenting driver's licenses or other forms of identification when
conducting business transactions or entering secured areas such as airports. Most forms of ID,
however, are easily counterfeited, and terrorists and criminals have easy access to these fraudulent
documents. This problem is eliminated by biometric technology that creates digital pictures of facial
features, fingerprints, and other unique physical characteristics. While civil libertarians fear a world
where every person's digital identifiers are in the hands of authorities, biometrics can play an
important part in the fight against crime and terrorism. Biometrics can also help protect privacy.
For example, access to personal computers or financial records could be restricted only to those who
have the proper digital identifiers. Like any other system of identification, biometrics could be used
to violate privacy rights. It is not the technology, however, that needs oversight, but the government
departments that use it.
The two dark-skinned young men, unshaven and heavily muscled, looked
ominously foreign. No doubt more than one airline passenger breathed deeper in relief
when security guards at the Roanoke, Va., airport pulled the men out of line to search
their luggage and pat them down—once in the ticket line, again at the security gate and a
third time before they boarded the plane. Three "random" searches to take a 20-minute
flight.
Facial-recognition technology tied to a database of suspected terrorists, though,
would have left the young men alone. My black-haired fiancé and his brother are no
threat. Their frightening musculature is cheerfully employed shifting furniture for their
mom; their closest approach to battle is the world of online computer games. Yet the
human element in our security forces instinctively will bristle at their approach until the
United States is attacked by blond, blue-eyed Nordic terrorists, activists for reindeer
rights or some myth of Aryan superiority.
Biometrics are getting a bad rap. Fingerprinting bears the stigma of its association
with police procedure. DNA databases bring to mind horrific theories of genetic or racial
purity. Facial-recognition cameras call up images of George Orwell's 1984 and
omnipresent video surveillance. But biometrics, like any technology, is morally neutral.
Any abuses will stem from the human element in our government. And biometric systems
could help to control, counter and check those error-prone human elements.
Strictly speaking, what is a biometric system? A biometric system uses personal
traits or physical characteristics to recognize an individual. The signature on the back of
our credit cards is a very primitive biometric; so is any photo ID or mug shot. The human
optic nerve is hooked to our brain's biological facial-recognition database. Bloodhounds
track trails of unique individual scents.
Examples of more-advanced biometric systems in use include a facial-recognition
system used by the West Virginia Department of Motor Vehicles to scan applicants for
duplicate or fraudulent driver's licenses. The state of Georgia now includes a digital
thumbprint on its licenses. Typing and mouse-use patterns also can be used to identify
individuals, existing technology likely to be deployed online. Predictive Networks, a
Cambridge, Mass., company, has developed software to do just that. High-tech spy
thrillers on television and in the movies have acquainted us with the retinal scans, voice
prints and hand-geometry scanners just beginning to be deployed. The gambling industry
is considering the use of voice-recognition technology to control access to telephone
gambling networks, for example. Less-familiar biometric systems include earlobe
analysis and body-odor sniffers. But widespread deployment of biometric systems still is
part of a sci-fi future.
In that future, the trends suggest that biometrics will be a boon to privacy and
security in the private sector. In the works is voice-print technology that will recognize
only authorized users of long-distance telephone services or brokerage accounts, keeping
out snoops. Handprints and iris scans can make it harder for hackers to fool computer
networks, expanding the realm of possibility for authorized computer users safely to
access sensitive medical records or other data remotely. Thieves of portable items such as
cell phones, laptops, cars and credit cards will find their booty useless without the rightful
owner's fingerprints to activate them. Most people have trouble remembering the long
combinations of random letters and numbers needed for a really secure password. The
digital record of one's fingerprints, though, can be scrambled into a unique personal-identification number to foil identity thieves. As the cost of this technology comes down
and its accuracy is improved, widespread deployment in the private sector is almost a
given wherever current identification systems lag behind security needs.
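Singleton's "scrambled into a unique personal-identification number" idea amounts to a one-way transformation: derive a short code from the fingerprint data so the raw print never has to be stored. A minimal sketch of that idea, assuming a hash-based scheme with a per-account salt (the function name and the 8-digit length are illustrative choices, not any vendor's actual method):

```python
import hashlib

def template_to_pin(template_bytes, salt, digits=8):
    # One-way scramble: hash the fingerprint template together with a
    # per-account salt, then keep a fixed number of decimal digits.
    # The PIN can be checked without ever storing the raw print.
    digest = hashlib.sha256(salt + template_bytes).hexdigest()
    return str(int(digest, 16))[-digits:]

pin = template_to_pin(b"ridge-minutiae-data", b"account-42-salt")
print(pin)  # a stable 8-digit code for this print and account
```

The salt matters: the same finger enrolled at two services yields two unrelated PINs, so a thief who compromises one database learns nothing useful about the other.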
Safeguards for Citizens
What of the use of biometrics in and by government? Some civil libertarians fear
a controlled government database chock-full of biometric data and a nationwide system
of scanners and controls from which there is no escape. Religious, political or racial
minorities could be hunted down. Rogue police could harass innocents that unwittingly
have offended them.
Will biometrics facilitate human-rights violations on a trivial or massive scale?
The short answer is it could do either, but the risk is no greater than for any other modern
identification technology. And it can be controlled. The choice we have is not between
zero-risk and risky identification systems. It is a choice between the current systems,
which do not prevent government abuses and yet are fraught with security holes and other
problems, or more effective modern systems no more liable to abuse than any other.
The present reality is that the current system of identification, based on the Social
Security number, signature and driver's license, has failed. In a world of open public
records and long-distance financial transactions over electronic networks, the Social
Security number cannot continue to function as a password. The driver's license cannot
be displayed as proof of age or identity over a network. Most importantly for the
evolution of systems of identification, the current system has failed to provide the degree
of protection against fraud that consumers would like to have. It is proving inadequate for
legitimate law-enforcement purposes as well, especially as criminals have increased
mobility across jurisdictions. These legitimate purposes of law enforcement include
everyday protection against ordinary criminals as well as rarer terrorist events.
One way or another, current methods of authentication must be replaced or
augmented—perhaps with digital signatures, perhaps with better biometrics (the photo ID
and signature already are biometrics of a weak, error-prone sort) or perhaps with some
combination.
Any system of information collection is subject to abuse. Data collected by the
national census can be abused, and was when data was used during World War II to
relocate Japanese-Americans. Wiretapping has been abused. Even the technology built
into cell phones to help authorities pinpoint the locations of 911 callers could be used for
nefarious purposes by an evil regime to track innocent people.
The dangers and history of government abuses are real. But at the same time they
are highly speculative. Given the reality of abuses and their relative rarity in the modern
U.S. context, where do we draw the line? The risk that imperfect systems of identification
will provide opportunities for fraud, terrorism and other crimes also is real. And these
acts, too, violate our rights to security of life and limb as well as property rights. Do we
know that the benefits of "leaky" systems in allowing dissidents additional leeway along
with criminals will outweigh the costs? The answer probably is different at different
times and places throughout history. We only can make a best guess.
Do we say, as our rule of thumb, that the government may not collect or use
biometric data? That some technologies simply will be off-limits to law enforcement?
This would be both unrealistic and ineffective.
Some danger of abuse, however remote, extends to any technology wielded by
government. Adolf Hitler and Josef Stalin managed to create a nightmare world without
any electronic biometrics at all. Human beings (neighborhood informants) also can serve
as surveillance for low-tech totalitarian police, as in Communist China. Declaring certain
technologies off-limits would not resolve the danger of abuse and would prevent
government from effectively carrying out legitimate functions.
Checks and Balances
If the answer to preserving freedom is not in declaring certain technology off-limits, where does the answer lie? The battle to preserve civil liberties and rights is more
about institutions and legal rights and powers than about this or that technology. The
Fourth Amendment does not say that the government may not collect, keep or store
information. It says the police must show probable cause and obtain a warrant from a
judge to conduct a search. The police are made accountable to the judiciary. This is an
institutional solution, an accountability solution, going back to the old idea of checks and
balances. Other constitutional principles—the freedom of speech, protection against the
confiscation of private property, the right to a jury trial and constitutional protections
against torture and cruel and unusual punishments—work together to hold back the
human tendencies of those who govern to take more power than we willingly would give.
Indeed, biometrics promise to make government more accountable and less likely
to misuse private information. Suppose biometrics technology were used to restrict the
access of government employees to citizens' tax records, criminal records and other files.
Logs show which government employees access the files and when. Victims of rogue
employees in government offices would stand a better chance of finding who had
accessed their records and holding the rogues accountable. Illicit access by hackers
coming from outside the system also would be reduced.
Because biometrics can help reduce the incidence of fraud and help police track
perpetrators of violence in the most high-risk zones, such as airports and nuclear
facilities, it may help preserve an open society in other areas. People terrified that
criminals lurk among them undetected are not people who will embrace freedom. So long
as our law-enforcement networks do not meaningfully help police target and quickly
identify wrongdoers, we all will have to endure more random searches, generalized
surveillance and heavy regulation.
The key to preserving our liberties does not lie with declaring biometrics off-limits for governments or anyone else. It lies in the realm of ideas and beliefs, powers and
rights. Authoritarianism is not a gadget, it is a state of mind.
Singleton, S. (2006). Physical Characteristic Recognition Technology Can Be Used to Preserve Privacy
Rights. In S. A. Kallen (Ed.), At Issue. Are Privacy Rights Being Violated? San Diego: Greenhaven Press.
(Reprinted from Insight on the News, 2002, February 25) Retrieved from
http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/ViewpointsDetailsPage/ViewpointsD
etailsWindow?disableHighlighting=false&displayGroupName=Viewpoints&currPage=&scanId=&query=
&prodId=OVIC&search_within_results=&p=OVIC&mode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%7CEJ3010365211
&windowstate=normal&activityType=&failOverType=&commentary=&source=Bookmark&u=new90507
&jsid=2511332e939b9cf07e9aa172640adc2c
Article 4:
Facelock Could Replace
Alphanumeric Website Passwords;
Facial recognition technology could
put an end to pesky passwords
By: (Wolford, 2014)
Passwords are the worst. Your employer, your bank and your smartphone all
require one. And, for your own security, they want it to be unique and difficult, like, say,
X32$q#fs@Gg92. Of course, nobody does that, opting instead for something easier to
remember--and easier to hack. (Case in point: "123456" is the most common password on
the Internet; the runner-up is "password.")
As Web security suffers from the limits of human memory, some psychologists
and developers are experimenting with new kinds of passwords that play to your brain's
knack for remembering faces. Science shows that random letters and numbers are tricky
to remember. But faces? Those are easy.
A new set of experiments from researchers at the University of York, England,
explore memory's powerful connection with images of people. They discovered that to a
stranger, different pictures of the same person often appear to be of different people. But
when you look at people you know--from any angle, wearing sunglasses, whatever--you
recognize them.
The scientists call their system Facelock. It's not on the market yet, but here's how
it might work: A website asks you to choose a few people you recognize--the more
disparate in your life, the better. A good mix, for example, might be your best friend from
junior high, your cousin in Arizona, your hairdresser and your favorite folk cellist (it can't
be anyone too recognizable, or else it's a security risk). You'd be asked to select a few
different images of each of these people. Then, the next time you log in, you see a series
of 3-by-3 face grids--each one displaying eight random faces and one that you've chosen.
Find a familiar face four times in a row, and you're in.
For you, choosing the person you know is simple. For an intruder, it's a guessing
game.
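The arithmetic behind "it's a guessing game" is easy to make concrete. Each 3-by-3 grid gives an intruder a 1-in-9 chance, and four consecutive grids must all be cleared, so random guessing succeeds only about once in 6,561 attempts. A small sketch (the function and its defaults are ours, inferred from the grid scheme the article describes):

```python
from fractions import Fraction

def intruder_odds(grid_faces=9, rounds=4):
    # Probability that an intruder who cannot recognize any of the
    # chosen faces clears every grid by picking at random.
    return Fraction(1, grid_faces) ** rounds

odds = intruder_odds()
print(f"Random guessing succeeds 1 time in {odds.denominator}")  # 1 in 6561
```

For the legitimate user the task is trivial, since recognizing a familiar face survives changes of angle, lighting, and even sunglasses; the asymmetry between knowing and guessing is the whole security argument.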
"If you are familiar with that face, you see through all those superficial changes in
that image without even noticing," psychologist Rob Jenkins, the lead author of the study,
tells Newsweek.
There's apparently only one similar face-based password system currently on the
market. The technology, created by Reston, Virginia-based Passfaces, assigns faces to
you (as opposed to you picking them). "So far, adoption has been pretty slow," CEO Jon
Shaw tells Newsweek. "Lots of companies look at passwords as free, even though they
can have a high cost in a number of ways."
Wolford, B. (2014, July 4). Facelock Could Replace Alphanumeric Website Passwords;
Facial recognition technology could put an end to pesky passwords. Newsweek, 163(1).
Retrieved from
http://mylibrary.wilmu.edu/login?url=http://ic.galegroup.com/ic/ovic/MagazinesDetailsP
age/MagazinesDetailsWindow?disableHighlighting=false&displayGroupName=Magazin
es&currPage=&scanId=&query=&prodId=OVIC&search_within_results=&p=OVIC&m
ode=view&catId=&limiter=&displayquery=&displayGroups=&contentModules=&action=e&sortBy=&documentId=GALE%
7CA372949467&windowstate=normal&activityType=&failOverType=&commentary=&
source=Bookmark&u=new90507&jsid=b37598bb0c0e4c40eb2d69374aabad83
ESL 203 - Student Argumentative/Persuasive Writing Evaluation Form
Assignment
Student Name
Date
Course/Instructor
Criteria
1. The introduction captures the reader's attention and ends in a thesis statement that expresses an issue and position, contains persuasive logic, and previews a framework for the essay or paper.
Unsatisfactory (1): The introduction does not capture the reader's attention, and/or the thesis statement does not state the position, preview the paper, or contain persuasive logic.