Renaissance anatomical illustrations often followed artistic conventions (situating the skeleton in
a lifelike pose in a landscape) and played wittily on the tensions between life and death. The
contemplation of the skull prefigures Hamlet’s later meditation. Line drawing, Valverde de
Hamusco, Historia de la composicion del cuerpo humano (Rome: A. Salamanca & A. Lafreri,
1556).
TO
Mikuláš Teich,
true friend and scholar
Sick – Sick – Sick. . . O Sick – Sick – Spew
DAVID GARRICK,
in a letter
I’m sick of gruel, and the dietetics,
I’m sick of pills, and sicker of emetics,
I’m sick of pulses, tardiness or quickness,
I’m sick of blood, its thinness or its thickness, –
In short, within a word, I’m sick of sickness!
THOMAS HOOD,
‘Fragment’, c. 1844
They are shallow animals, having always employed their minds about
Body and Gut, they imagine that in the whole system of things there is
nothing but Gut and Body.
SAMUEL TAYLOR COLERIDGE, on doctors (1796)
CONTENTS
LIST OF FIGURES
LIST OF ILLUSTRATIONS
ACKNOWLEDGEMENTS
I Introduction
II The Roots of Medicine
III Antiquity
IV Medicine and Faith
V The Medieval West
VI Indian Medicine
VII Chinese Medicine
VIII Renaissance
IX The New Science
X Enlightenment
XI Scientific Medicine in the Nineteenth Century
XII Nineteenth-Century Medical Care
XIII Public Medicine
XIV From Pasteur to Penicillin
XV Tropical Medicine, World Diseases
XVI Psychiatry
XVII Medical Research
XVIII Clinical Science
XIX Surgery
XX Medicine, State and Society
XXI Medicine and the People
XXII The Past, the Present and the Future
FURTHER READING
INDEX
More praise for: The Greatest Benefit to Mankind
FIGURES
The main organs of the body
The four humours and the four elements
The heart and circulation, as understood by Harvey
Neurones and synapses, as understood by neurologists c. 1900
ILLUSTRATIONS
Imhotep.
Portrait of Hippocrates.
Portrait of Galen by Georg Paul Busch.
Portrait of Hildegard of Bingen by W. Marshall.
Portrait of Moses Maimonides by M. Gur-Aryeh.
The Wound Man, from Feldtbuch der Wundartzney by H. von Gersdorf.
The common willow, from The Herball, or General Historie of Plantes
by J. Gerard.
St Cosmas and St Damian performing the miracle of the black leg by
Alonso de Sedano.
A medieval Persian anatomical drawing.
A medieval European anatomy, from Margarita Philosophica by
Gregorius Reisch.
A Chinese acupuncture chart.
‘Two Surgeons Amputating the Leg and Arm of the Same Patient’ by ZS.
The frontispiece to Vesalius’s De humani corporis fabrica.
A medicine man or shaman.
An Indian doctor taking the pulse of a patient.
Portrait of Vesalius.
Portrait of William Harvey by J. Hall.
Portrait of Louise Bourgeois.
Portrait of William Hunter by J. Thomas.
Portrait of Benjamin Rush by R. W. Dodson.
An early seventeenth-century dissection.
Scenes from the plague in Rome of 1656.
A mother and baby, from Anatomia uteri humani gravidi by William
Hunter. Three stages of dissection.
Ophthalmology instruments, eye growths, a cataract operation and other
eye defects by R. Parr.
The preserved skull of a woman who had been suffering from syphilis.
Punch Cures the Gout, the Colic, and the Tisick by James Gillray.
Breathing a vein by J. Sneyd.
An Apothecary with a Pestle and Mortar to Make up a Prescription by A.
Park.
The interior of a pharmaceutical laboratory with people at work.
Philadelphia College of Pharmacy and Science.
Portrait of René Théophile Hyacinthe Laennec.
Portrait of Louis Pasteur by E. Pirou.
Portrait of William Gorgas.
Portrait of Joseph Lister.
Christiaan Barnard, photographed by B. Govender.
Mentally ill patients in the garden of an asylum by K. H. Merz.
Sigmund Freud, Carl Gustav Jung, Ernest Jones, Sandor Ferenczi,
A. A. Brill and G. Stanley Hall.
A male smallpox patient in sickness and in health.
A Fijian man with elephantiasis of the left leg and scrotum.
An Allegory of Malaria by Maurice Dudevant.
A white doctor vaccinating African girls all wearing European clothes at
a mission station by Meisenbach.
Portrait of Florence Nightingale.
A Nurse Checking on a Playful Child by J. E. Sutcliffe.
‘A district health centre where crowds of local children are being
vaccinated’ by E. Buckman.
Franklin D. Roosevelt.
The Hôtel Dieu.
Lister and his assistants in the Victoria Ward.
A British hospital ward in the 1990s photographed by Emma Taylor.
The bones of a hand, with a ring on one finger, viewed through X-ray.
Tomographic scan of a brain in a skull.
ACKNOWLEDGEMENTS
THE USUAL SUSPECTS will be heartily tired of hearing their praises sung yet
again. As always, Frieda Houser has been a marvellous secretary, keeping
everything on the road while I was deep in this book; Caroline Overy an
infallible research assistant; Sheila Lawler and Jan Pinkerton
indefatigable on the word-processor, and Andy Foley a wiz on the xerox
machine. I have been so lucky having their help and friendship for so
long. Thanks!
New to me have been the help and friendship I have received from
Fontana Press. The series of which this book forms a part was first
planned ten years ago, and since then Stuart Proffitt, Philip Gwyn Jones
and Toby Mundy have been ever supportive, skilled equally in the use of
sticks and carrots. Biddy Martin’s copy editing uncovered ghastly errors
and eliminated stylistic horrors, and Drusella Calvert compiled a truly
thorough index.
Friends old and new have read this book at various stages and shared
their thoughts, knowledge and criticisms with me. My thanks to Michael
Neve, who always reads my manuscripts, and to Bill Bynum and Tilli
Tansey for being patient with one who lacks a sound medico-scientific
education; and to Hannah Augstein, Cristina Alvarez, Natsu Hattori, Paul
Lerner, Eileen Magnello, Diana Manuel, Chandak Sengoopta, Sonu
Shamdasani and Cassie Watson, all of whom have read the text, saved me
from constellations of errors, shared insights and information, levelled
cogent criticisms and helped to keep me going at the moments when all
seemed sisyphean. Catherine Draycott and William Schupbach have been
immensely helpful with the illustrations. My aim first and foremost is to
tell a story that is clear, interesting and informative to students and
general readers alike. My thanks to all who have helped the book in that
direction.
I also wish to thank all the medical historians and other scholars whose
papers I have heard, whose books I have read, and whose company I have
shared over the last twenty years. I have the deepest admiration for the
expertise and the historical vision of scholars in this field. Panning from
Stone Age to
New Age, from Galen to Gallo, I cannot pretend personal knowledge
on more than a few frames of the times and topics covered. As will be
plain to see, I am everywhere profoundly dependent on the work of
others. It would simply be distracting in a work like this to acknowledge
all such debts one after another in thickets of footnotes. The Further
Reading must serve not just by way of recommendation for what to read
next but as a collective thank-you to all upon whose work I have freely
and gratefully drawn.
I have written this book because when my students and people at large
have asked me to recommend an up-to-date and readable single-volume
history of medicine, I have felt at a loss to know what to suggest. Rather
than bemoaning this fact, I thought I should have a shot at filling the gap.
Writing it has made it clear why so few have attempted this foolhardy
task.
The author is grateful to the following for permission to reproduce
extracts: from The Illustrated History of Surgery by Knut Haeger,
courtesy of Harold Starke Publishers; from A History of Medicine by Jean
Starobinski, courtesy of Prentice Hall; from Hippocrates I-IV and The
Complete Letters of Sigmund Freud to Wilhelm Fliess, 1887–1904, edited
by Jeffrey Masson, courtesy of Harvard University Press; from The Odes
of Pindar, edited and translated by Richmond Lattimore, courtesy of
Chicago University Press; from Medicine Out of Control: The Anatomy of
Malignant Technology by Richard Taylor, courtesy of Sun Books; from A
History of Syphilis by Claude Quétel, courtesy of Blackwell Publishers;
from Doctor Dock: Teaching and Learning Medicine at the Turn of the
Century by Horace W. Davenport, courtesy of Rutgers University Press;
from Stephen Sondheim’s West Side Story, copyright 1956, 1959 by the
Estate of Leonard Bernstein Music Publishing Company UC, Publisher;
Boosey & Hawkes Inc., Sole Agent. International Copyright secured. All
rights reserved; from The Horse and Buggy Doctor by A. E. Hertzler,
courtesy of the Hertzler Research Foundation; from Inequalities in
Health: The Black Report, crown copyright, reproduced with permission
of the Controller of Her Majesty’s Stationery Office; from the British
Medical Journal (1876), courtesy of the BMJ Publishing Group; from
The Doctor’s Job by Carl Binger © 1945 by W. W. Norton & Co. Inc.,
renewed © 1972 by Carl Binger. Reprinted by permission of W. W.
Norton & Co. Inc.; from Women’s Secrets: A Translation of Pseudo-Albertus Magnus’s ‘De secretis mulierum’ by Helen Rodnite Lemay,
courtesy of the State University of New York Press © 1992; from Diary
of a Medical Nobody by Kenneth Lane, courtesy of Peters, Fraser and
Dunlop; from Sketch for a Historical Picture of the Progress of the
Human Mind translated by June Barraclough, courtesy of Weidenfeld &
Nicolson. All reasonable efforts have been made by the author and the
publisher to trace the copyright holders of the quotations contained in this
publication. In the event that any of the untraceable copyright holders
comes forward after the publication of this edition, the author and the
publishers will endeavour to rectify the situation accordingly.
The main organs of the body
CHAPTER I
INTRODUCTION
THESE ARE STRANGE TIMES,
when we are healthier than ever but more
anxious about our health. According to all the standard benchmarks,
we’ve never had it so healthy. Longevity in the West continues to rise – a
typical British woman can now expect to live to seventy-nine, eight years
more than just half a century ago, and over double the life expectation
when Queen Victoria came to the throne in 1837. Break the figures down
a bit and you find other encouraging signs even in the recent past; in
1950, the UK experienced 26,000 infant deaths; within half a century that
had fallen by 80 per cent. Deaths in the UK from infectious diseases
nearly halved between 1970 and 1992; between 1971 and 1991 stroke
deaths dropped by 40 per cent and coronary heart disease fatalities by 19
per cent – and those are diseases widely perceived to be worsening.
The heartening list goes on and on (15,000 hip replacements in 1978,
over double that number in 1993). In myriad ways, medicine continues to
advance, new treatments appear, surgery works marvels, and (partly as a
result) people live longer. Yet few people today feel confident, either
about their personal health or about doctors, healthcare delivery and the
medical profession in general. The media bombard us with medical news
– breakthroughs in biotechnology and reproductive technology for
instance. But the effect is to raise alarm more than our spirits.
The media specialize in scare-mongering but they also capture a public
mood. There is a pervasive sense that our well-being is imperilled by
‘threats’ all around, from the air we breathe to the food in the shops. Why
should we now be more agitated about pollution in our lungs than during
the awful urban smogs of the 1950s, when tens of thousands died of
winter bronchitis? Have we become health freaks or hypochondriacs
luxuriating in health anxieties precisely because we are so healthy and
long-lived that we have the leisure to enjoy the luxury of worrying?
These may be questions for a psychologist but, as this book aims to
demonstrate, they are also matters of historical inquiry, examining the
dialectics of medicine and mentalities. And to understand the dilemmas
of our times, such facts and fears need to be put into the context of time and
place. We are today in the grip of opposing pressures. For one thing, there
is the ‘rising-expectations trap’: we have convinced ourselves that we can
and should be fitter, more youthful, sexier. In the long run, these are
impossibly frustrating goals, because in the long run we’re all dead
(though of course some even have expectations of cheating death).
Likewise, we are healthier than ever before, yet more distrustful of
doctors and the powers of what may broadly be called the ‘medical
system’. Such scepticism follows from the fact that medical science
seems to be fulfilling the wildest dreams of science fiction: the first
cloning of a sheep was recently announced and it will apparently be
feasible to clone a human being within a couple of years. In the same
week, an English widow was given permission to try to become pregnant
with her dead husband’s sperm (but only so long as she did it in
Belgium). These are amazing developments. We turn doctors into heroes,
yet feel equivocal about them.
Such ambiguities are not new. When in 1858 a statue was erected in the
recently built Trafalgar Square to Edward Jenner, the pioneer of smallpox
vaccination, protests followed and it was rapidly removed: a country
doctor amidst the generals and admirals was thought unseemly (apparently
those responsible for causing deaths, rather than saving lives, were the ones
worthy of public honour). Even in Greek times opinions about medicine
were mixed; the word pharmakon meant both remedy and poison – ‘kill’
and ‘cure’ were apparently indistinguishable. And as Jonathan Swift
wryly reflected early in the eighteenth century, ‘Apollo was held the god
of physic and sender of diseases. Both were originally the same trade, and
still continue.’ That double idea – death and the doctors riding together –
has loomed large in history. It is one of the threads we will follow in
trying to assess the impact of medicine and responses to it – in trying to
assess Samuel Johnson’s accolade to the medical profession: ‘the greatest
benefit to mankind.’
‘The art has three factors, the disease, the patient, the physician,’ wrote
Hippocrates, the legendary Greek physician who has often been called the
father of medicine; and he thus suggested an agenda for history. This
book will explore diseases, patients and physicians, and their
interrelations, concentrating on some more than others. It is, as its subtitle suggests, a medical history.
My focus could have been on disease and its bearing on human history.
We have all been reminded of the devastating effects of pestilence by the
AIDS epidemic. In terms of death toll, cultural shock and socio-economic
destruction, the full impact of AIDS cannot yet be judged. Other ‘hot
viruses’ may be coming into the arena of history which may prove even
more calamitous. Historians at large, who until recently tended to
chronicle world history in blithe ignorance of or indifference to disease,
now recognize the difference made by plague, cholera and other
pandemics. Over the last generation, distinguished practitioners have
pioneered the study of ‘plagues and peoples’, and I have tried to give due
consideration to these epidemiological and demographic matters in the
following chapters. But they are not my protagonists; rather, they are the backdrop.
Equally this book might have focused upon everyday health, common
health beliefs and routine health care in society at large. The social
history of medicine now embraces ‘people’s history’, and one of its most
exciting developments has been the attention given to beliefs about the
body, its status and stigmas, its race, class and gender representations.
The production and reproduction, creation and recreation of images of
Self and Other have formed the subject matter of distinguished books.
Such historical sociologies or cultural anthropologies – regarding the
body as a book to be decoded – reinforce our awareness of the
importance, past and present, of familiar beliefs about health and its
hazards, about taboo and transgression. When a body becomes a clue to
meaning, popular ideas of health and sickness, life and death, must be of
central historical importance. I have written, on my own and with others,
numerous books exploring lay health cultures in the past, from a ‘bottom-up’, patients’ point of view, and hope soon to publish a further work on
the historical significance of the body.
This history, however, is different. It sets the history of medical
thinking and medical practice at stage centre. It concentrates on medical
ideas about disease, medical teachings about healthy and unhealthy
bodies, and medical models of life and death. Seeking to avoid
anachronism and judgmentalism, I devote prime attention to those people
and professional groups who have been responsible for such beliefs and
practices – that is healers understood in a broad sense. This book is
principally about what those healers have done, individually and
collectively, and the impact of their ideas and actions. While placing
developments in a wider context, it surveys medical theory and practices.
This approach may sound old-fashioned, a resurrection of the
Whiggish ‘great docs’ history which celebrated the triumphal progress of
medicine from ignorance through error to science. But I come not to
praise medicine – nor indeed to blame it. I do believe that medicine has
played a major and growing role in human societies and for that reason
its history needs to be explored so that its place and powers can be
understood. I say here, and I will say many times again, that the
prominence of medicine has lain only in small measure in its ability to
make the sick well. This always was true, and remains so today.
I discuss disease from a global viewpoint; no other perspective makes
sense. I also examine medicine the world over. Chapter 2 surveys the
emergence of health practices and medical beliefs in some early
societies; Chapter 3 discusses the rise of formal, written medicine in the
Middle East and Egypt, and in Greece and Rome; Chapter 4 explores
Islam; separate chapters discuss Indian and Chinese medicine; Chapter 8
takes in the Americas; Chapter 15 surveys medicine in more recent
colonial contexts, and other chapters have discussions of disorders in the
Third World, for instance deficiency diseases. The book is thus not
narrowly or blindly ethnocentric.
Nevertheless, I devote most attention to what is called ‘western’
medicine, because western medicine has developed in ways which have
made it uniquely powerful and led it to become uniquely global. Its
ceaseless spread throughout the world owes much, doubtless, to western
political and economic domination. But its dominance has increased
because it is perceived, by societies and the sick, to ‘work’ uniquely well,
at least for many major classes of disorders. (Parenthetically, it can be
argued that western political and economic domination owes something
to the path-breaking powers of quinine, antibiotics and the like.) To the
world historian, western medicine is special. It is conceivable that in a
hundred years’ time traditional Chinese medicine, shamanistic medicine
or Ayurvedic medicine will have swept the globe; if that happens, my
analysis will look peculiarly dated and daft. But there is no real
indication of that happening, while there is every reason to expect the
medicine of the future to be an outgrowth of present western medicine –
or at least a reaction against it. What began as the medicine of Europe is
becoming the medicine of humanity. For that reason its history deserves
particular attention.
Western medicine, I argue, has developed radically distinctive
approaches to exploring the workings of the human body in sickness and
in health. These have changed the ways our culture conceives of the body
and of human life. To reduce complex matters to crass terms, most
peoples and cultures the world over, throughout history, have construed
life (birth and death, sickness and health) primarily in the context of an
understanding of the relations of human beings to the wider cosmos:
planets, stars, mountains, rivers, spirits and ancestors, gods and demons,
the heavens and the underworld, and so forth. Some traditions, notably
those reflected in Chinese and Indian learned medicine, while being
concerned with the architecture of the cosmos, do not pay great attention
to the supernatural. Modern western thinking, however, has become
indifferent to all such elements. The West has evolved a culture
preoccupied with the self, with the individual and his or her identity, and
this quest has come to be equated with (or reduced to) the individual body
and the embodied personality, expressed through body language. Hamlet
wanted this too too solid flesh to melt away. That – except in the context of
slimming obsessions – is the last thing modern westerners want to happen
to their flesh; they want it to last as long as possible.
Explanations of why and how these modern, secular western attitudes
have come about need to take many elements into account. Their roots
may be found in the philosophical and religious traditions they have
grown out of. They have been stimulated by economic materialism, the
preoccupation with worldly goods generated by the devouring, reckless
energies of capitalism. But they are also intimately connected with the
development of medicine – its promise, project and products.
Whereas most traditional healing systems have sought to understand
the relations of the sick person to the wider cosmos and to make
readjustments between individual and world, or society and world, the
western medical tradition explains sickness principally in terms of the
body itself – its own cosmos. Greek medicine dismissed supernatural
powers, though not macrocosmic, environmental influences; and from the
Renaissance the flourishing anatomical and physiological programmes
created a new confidence among investigators that everything that needed
to be known could essentially be discovered by probing more deeply and
ever more minutely into the flesh, its systems, tissues, cells, its DNA.
This has proved an infinitely productive inquiry, generating first
knowledge and then power, including on some occasions the power to
conquer disease. The idea of probing into bodies, living and dead (and
especially human bodies) with a view to improving medicine is more or
less distinctive to the European medical tradition. For reasons technical,
cultural, religious and personal, it was not done in China or India,
Mesopotamia or pharaonic Egypt. Dissection and dissection-related
experimentation were performed only on animals in classical Greece, and
rarely. A medicine that seriously and systematically investigated the stuff
of bodies came into being thereafter – in Alexandria, then in the work of
Galen, then in late medieval Italy. The centrality of anatomy to
medicine’s project was proclaimed in the Renaissance and became the
foundation stone for the later edifice of scientific medicine: physiological
experimentation, pathology, microscopy, biochemistry and all the other
later specialisms, to say nothing of invasive surgery.
This was not the only course that medicine could have taken; as is
noted below, it was not the course other great world medical systems
took, cultivating their own distinct clinical skills, diagnostic arts and
therapeutic interventions. Nor did it enjoy universal approval: protests in
Britain around 1800 about body-snatching and later antivivisectionist
lobbies show how sceptical public opinion remained about the activities
of anatomists and physicians, and suspicion has continued to run high.
However, that was the direction western medicine followed, and,
bolstered by science at large, it generated a powerful medicine, largely
independent of its efficacy as a rational social approach to good health.
The emergence of this high-tech scientific medicine may be a prime
example of what William Blake denounced as ‘single vision’, the kind of
myopia which (literally and metaphorically) comes from looking
doggedly down a microscope. Single vision has its limitations in
explaining the human condition; this is why Coleridge called doctors
‘shallow animals’, who ‘imagine that in the whole system of things there
is nothing but Gut and Body’. Hence the ability of medicine to understand
and counter pathology has always engendered paradox. Medicine has
offered the promise of ‘the greatest benefit to mankind’, but not always
on terms palatable to and compatible with cherished ideals. Nor has it
always delivered the goods. The particular powers of medicine, and the
paradoxes its rationales generate, are what this book is about.
***
It may be useful to offer a brief résumé of the main themes of the book,
by way of a sketch map for a long journey.
All societies possess medical beliefs: ideas of life and death, disease
and cure, and systems of healing. Schematically speaking, the medical
history of humanity may be seen as a series of stages. Belief systems the
world over have attributed sickness to ill-will, to malevolent spirits,
sorcery, witchcraft and diabolical or divine intervention. Such ways of
thinking still pervade the tribal communities of Africa, the Amazon basin
and the Pacific; they were influential in Christian Europe till the ‘age of
reason’, and retain a residual shadow presence. Christian Scientists and
some other Christian sects continue to view sickness and recovery in
providential and supernatural terms; healing shrines like Lourdes remain
popular within the Roman Catholic church, and faith-healing retains a
mass following among devotees of television evangelists in the United
States.
In Europe from Graeco-Roman antiquity onwards, and also among the
great Asian civilizations, the medical profession systematically replaced
transcendental explanations by positing a natural basis for disease and
healing. Among educated lay people and physicians alike, the body
became viewed as integral to law-governed cosmic elements and regular
processes. Greek medicine emphasized the microcosm/macrocosm
relationship, the correlations between the healthy human body and the
harmonies of nature. From Hippocrates in the fifth century BC through to
Galen in the second century AD, ‘humoral medicine’ stressed the
analogies between the four elements of external nature (fire, water, air
and earth) and the four humours or bodily fluids (blood, phlegm, choler
or yellow bile and black bile), whose balance determined health. The
humours found expression in the temperaments and complexions that
marked an individual. The task of hygiene was to maintain a balanced
constitution, and the role of medicine was to restore the balance when
disturbed. Parallels to these views appear in the classical Chinese and
Indian medical traditions.
The medicine of antiquity, transmitted to Islam and then back to the
medieval West and remaining powerful throughout the Renaissance, paid
great attention to general health maintenance through regulation of diet,
exercise, hygiene and lifestyle. In the absence of decisive anatomical and
physiological expertise, and without a powerful arsenal of cures and
surgical skills, the ability to diagnose and make prognoses was highly
valued, and an intimate physician-patient relationship was fostered. The
teachings of antiquity, which remained authoritative until the eighteenth
century and still supply subterranean reservoirs of medical folklore, were
more successful in assisting people to cope with chronic conditions and
soothing lesser ailments than in conquering life-threatening infections
which became endemic and epidemic in the civilized world: leprosy,
plague, smallpox, measles, and, later, the ‘filth diseases’ (like typhus)
associated with urban squalor.
This personal tradition of bedside medicine long remained popular in
the West, as did its equivalents in Chinese and Ayurvedic medicine. But
in Europe it was supplemented and challenged by the creation of a more
‘scientific’ medicine, grounded, for the first time, upon experimental
anatomical and physiological investigation, epitomized from the fifteenth
century by the dissection techniques which became central to medical
education. Landmarks in this programme include the publication of De
humani corporis fabrica (1543) by the Paduan professor, Andreas
Vesalius, a momentous anatomical atlas and a work which challenged
truths received since Galen; and William Harvey’s De motu cordis (1628)
which put physiological inquiry on the map by experiments
demonstrating the circulation of the blood and the heart’s role as a pump.
Post-Vesalian investigations dramatically advanced knowledge of the
structures and functions of the living organism. Further inquiries brought
the unravelling of the lymphatic system and the lacteals, and the
eighteenth and nineteenth centuries yielded a finer grasp of the nervous
system and the operations of the brain. With the aid of microscopes and
the laboratory, nineteenth-century investigators explored the nature of
body tissue and pioneered cell biology; pathological anatomy came of
age. Parallel developments in organic chemistry led to an understanding
of respiration, nutrition, the digestive system and deficiency diseases, and
founded such specialities as endocrinology. The twentieth century
became the age of genetics and molecular biology.
Nineteenth-century medical science made spectacular leaps forward in
the understanding of infectious diseases. For many centuries, rival
epidemiological theories had attributed fevers to miasmas (poisons in the
air, exuded from rotting animal and vegetable material, the soil, and
standing water) or to contagion (person-to-person contact). From the
1860s, the rise of bacteriology, associated especially with Louis Pasteur
in France and Robert Koch in Germany, established the role of micro-organic
pathogens. Almost for the first time in medicine, bacteriology led
directly to dramatic new cures.
In the short run, the anatomically based scientific medicine which
emerged from Renaissance universities and the Scientific Revolution
contributed more to knowledge than to health. Drugs from both the Old
and New Worlds, notably opium and Peruvian bark (quinine), became
more widely available, and mineral and metal-based pharmaceutical
preparations enjoyed a great if dubious vogue (e.g., mercury for syphilis).
But the true pharmacological revolution began with the introduction of
sulfa drugs and antibiotics in the twentieth century, and surgical success
was limited before the introduction of anaesthetics and antiseptic
operating-room conditions in the mid nineteenth century. Biomedical
understanding long outstripped breakthroughs in curative medicine, and
the retreat of the great lethal diseases (diphtheria, typhoid, tuberculosis
and so forth) was due, in the first instance, more to urban improvements,
superior nutrition and public health than to curative medicine. The one
early striking instance of the conquest of disease – the introduction first
of smallpox inoculation and then of vaccination – came not through
‘science’ but through embracing popular medical folklore.
From the Middle Ages, medical practitioners organized themselves
professionally in a pyramid with physicians at the top and surgeons and
apothecaries nearer the base, and with other healers marginalized or
vilified as quacks. Practitioners’ guilds, corporations and colleges
received royal approval, and medicine was gradually incorporated into
the public domain, particularly in German-speaking Europe where the
notion of ‘medical police’ (health regulation and preventive public
health) gained official backing in the eighteenth century. The state
inevitably played the leading role in the growth of military and naval
medicine, and later in tropical medicine. The hospital sphere, however,
long remained largely the Church’s responsibility, especially in Roman
Catholic parts of Europe. Gradually the state took responsibility for the
health of emergent industrial society, through public health regulation
and custody of the insane in the nineteenth century, and later through
national insurance and national health schemes. These latter
developments met fierce opposition from a medical profession seeking to
preserve its autonomy against encroaching state bureaucracies.
The latter half of the twentieth century has witnessed the continued
phenomenal progress of capital-intensive and specialized scientific
medicine: transplant surgery and biotechnology have captured the public
imagination. Alongside, major chronic and psychosomatic disorders
persist and worsen – jocularly expressed as the ‘doing better but feeling
worse’ syndrome – and the basic health of the developing world is
deteriorating. This situation exemplifies and perpetuates a key facet and
paradox of the history of medicine: the unresolved disequilibrium
between, on the one hand, the remarkable capacities of an increasingly
powerful science-based biomedical tradition and, on the other, the wider
and unfulfilled health requirements of economically impoverished,
colonially vanquished and politically mismanaged societies. Medicine is
an enormous achievement, but what it will achieve practically for
humanity, and what those who hold the power will allow it to do, remain
open questions.
The late E. P. Thompson (1924–1993) warned historians against what he
called ‘the enormous condescension of posterity’. I have tried to
understand the medical systems I discuss rather than passing judgment on
them; I have tried to spell them out in as much detail as space has
permitted, because engagement with detail is essential if the cognitive
power of medicine is to be appreciated.
Eschewing anachronism, judgmentalism and history by hindsight does
not mean denying that there are ways in which medical knowledge has
progressed. Harvey’s account of the cardiovascular system was more
correct than Galen’s; the emergence of endocrinology allowed the
development in the 1920s of insulin treatments which saved the lives of
diabetics. But one must not assume that diabetes then went away: no cure
has been found for that still poorly understood disease, and it continues to
spread as a consequence of western lifestyles. Indeed one could argue that
the problem is now worse than when insulin treatment was discovered.
Avoiding condescension equally does not mean one must avoid
‘winners’ history. This book unashamedly gives more space to the Greeks
than the Goths, more attention to Hippocrates than to Greek root-gatherers, and stresses strands of development leading from Greek
medicine to the biomedicine now in the saddle. I do not think that
‘winners’ should automatically be privileged by historians (I have myself
written and advocated writing medical history from the patients’ view),
but there is a good reason for bringing the winners to the foreground – not
because they are ‘best’ or ‘right’ but because they are powerful. One can
study winners without siding with them.
Writing this book has not only made me more aware than usual of my
own ignorance; it has brought home the collective and largely
irremediable ignorance of historians about the medical history of
mankind. Perhaps the most celebrated physician ever is Hippocrates, yet
we know literally nothing about him. Neither do we know anything
concrete about most of the medical encounters there have ever been. The
historical record is like the night sky: we see a few stars and group them
into mythic constellations. But what is chiefly visible is the darkness.
CHAPTER II
THE ROOTS OF MEDICINE
PEOPLES AND PLAGUES
IN THE BEGINNING WAS THE GOLDEN AGE. The climate was clement, nature
freely bestowed her bounty upon mankind, no lethal predators lurked, the
lion lay down with the lamb and peace reigned. In that blissful long-lost
Arcadia, according to the Greek poet Hesiod writing around 700 BC, life
was ‘without evils, hard toil, and grievous disease’. All changed.
Thereafter, wrote the poet, ‘thousands of miseries roam among men, the
land is full of evils and full is the sea. Of themselves, diseases come upon
men, some by day and some by night, and they bring evils to the mortals.’
The Greeks explained the coming of pestilences and other troubles by
the fable of Pandora’s box. Something similar is offered by Judaeo-Christianity. Disguised in serpent’s clothing, the Devil seduces Eve into
tempting Adam to taste the forbidden fruit. By way of punishment for
that primal disobedience, the pair are banished from Eden; Adam’s sons
are condemned to labour by the sweat of their brow, while the daughters
of Eve must bring forth in pain; and disease and death, unknown in the
paradise garden, become the iron law of the post-lapsarian world,
thenceforth a vale of tears. As in the Pandora fable and scores of parallel
legends the world over, the Fall as revealed in Genesis explains how
suffering, disease and death become the human condition, as a
consequence of original sin. The Bible closes with foreboding: ‘And I
looked, and behold a pale horse’ prophesied the Book of Revelation: ‘and
his name that sat on him was Death, and Hell followed with him. And
power was given unto them over the fourth part of the earth, to kill with
sword, and with hunger, and with death, and with the beasts of the earth.’
Much later, the eighteenth-century physician George Cheyne drew
attention to a further irony in the history of health. Medicine owed its
foundation as a science to Hippocrates and his successors, and such
founding fathers were surely to be praised. Yet why had medicine
originated among the Greeks? It was because, the witty Scotsman
explained, being the first civilized, intellectual people, with leisure to
cultivate the life of the mind, they had frittered away the rude vitality of
their warrior ancestors – the heroes of the Iliad – and so had been the first
to need medical ministrations. This ‘diseases of civilization’ paradox had
a fine future ahead of it, resonating throughout Nietzsche and Freud’s
Civilization and its Discontents (1930). Thus to many, from classical
poets up to the prophets of modernity, disease has seemed the dark side
of development, its Jekyll-and-Hyde double: progress brings pestilences,
society sickness.
Stories such as these reveal the enigmatic play of peoples, plagues and
physicians which is the thread of this book, scotching any innocent notion
that the story of health and medicine is a pageant of progress.
Pandora’s box and similar just-so stories tell a further tale, moreover:
that plagues and pestilences are not acts of God or natural hazards; they
are of mankind’s own making. Disease is a social development no less
than the medicine that combats it.
In the beginning . . . Anthropologists now maintain that some five million
years ago in Africa there occurred the branching of the primate line
which led to the first ape men, the low-browed, big-jawed hominid
Australopithecines. Within a mere three million years Homo erectus had
emerged, our first entirely upright, large-brained ancestor, who learned
how to make fire, use stone tools, and eventually developed speech.
Almost certainly a carnivorous hunter, this palaeolithic pioneer fanned
out a million years or so ago from Africa into Asia and Europe.
Thereafter a direct line leads to Homo sapiens who emerged around
150,000 BC.
The life of early mankind was not exactly arcadian. Archaeology and
paleopathology give us glimpses of forebears who were often malformed,
racked with arthritis and lamed by injuries – limbs broken in accidents
and mending awry. Living in a dangerous, often harsh and always
unpredictable environment, their lifespan was short. Nevertheless,
prehistoric people escaped many of the miseries popularly associated
with the ‘fall’; it was later developments which exposed their
descendants to the pathogens that brought infectious disease and have
since done so much to shape human history.
The more humans swarmed over the globe, the more they were
themselves colonized by creatures capable of doing harm, including
parasites and pathogens. There have been parasitic helminths (worms),
fleas, ticks and a host of arthropods, which are the bearers of ‘arbo’
(arthropod-borne) infections. There have also been the micro-organisms
like bacteria, viruses and protozoans. Their very rapid reproduction rates
within a host provoke severe illness but, as if by compensation, produce
in survivors immunity against reinfection. All such disease threats have
been and remain locked with humans in evolutionary struggles for the
survival of the fittest, which have no master plot and grant mankind no
privileges.
Despite carbon-14 and other sophisticated techniques used by
palaeopathologists, we lack any semblance of a day-to-day health chart
for early Homo sapiens. Theories and guesswork can be supported by
reference to so-called ‘primitive’ peoples in the modern world, for
instance Australian aborigines, the Hadza of Tanzania, or the !Kung San
bush people of the Kalahari. Our early progenitors were hunters and
gatherers. Pooling tools and food, they lived as nomadic opportunistic
omnivores in scattered familial groups of perhaps thirty or forty.
Infections like smallpox, measles and flu must have been virtually
unknown, since the micro-organisms that cause contagious diseases
require high population densities to provide reservoirs of susceptibles.
And because of the need to search for food, these small bands did not stay
put long enough to pollute water sources or accumulate the filth that
attracts disease-spreading insects. Above all, isolated hunter-foragers did
not tend cattle and the other tamed animals which have played such an
ambiguous role in human history. While meat and milk, hides and horns
made civilization possible, domesticated animals proved perennial and
often catastrophic sources of illness, for infectious disease riddled beasts
long before spreading to humans.
Our ‘primitive’ ancestors were thus practically free of the pestilences
that ambushed their ‘civilized’ successors and have plagued us ever
since. Yet they did not exactly enjoy a golden age, for, together with
dangers, injuries and hardships, there were ailments to which they were
susceptible. Soil-borne anaerobic bacteria penetrated through skin
wounds to produce gangrene and botulism; anthrax and rabies were
picked up from animal predators like wolves; infections were acquired
through eating raw animal flesh, while game would have transmitted the
microbes of relapsing fever (like typhus, a louse-borne disease),
brucellosis and haemorrhagic fevers. Other threats came from organisms
co-evolving with humans, including tapeworms and such bacteria as
Treponema, the agent of syphilis, and the similar skin infection, yaws.
Hunter-gatherers being omnivores, they were probably not
malnourished, at least not until rising populations had hunted to
extinction most of the big game roaming the savannahs and prairies.
Resources and population were broadly in balance. Relative freedom
from disease encouraged numbers to rise, but all were prey to climate,
especially during the Ice Age which set in from around 50,000 BC.
Famine took its toll; lives would have been lost in hunting and
skirmishing; childbirth was hazardous, fertility probably low, and
infanticide may have been practised. All such factors kept numbers in
check.
For tens of thousands of years there was ample territory for dispersal,
as pressure on resources drove migration ‘out of Africa’ into all corners
of the Old World, initially to the warm regions of Asia and southern
Europe, but then farther north into less hospitable climes. These nomadic
ways continued until the end of the last Ice Age (the Pleistocene) around
12,000–10,000 years ago brought the invention of agriculture.
Contrary to the Victorian assumption that farming arose out of
mankind’s inherent progressiveness, it is now believed that tilling the soil
began because population pressure and the depletion of game supplies
left no alternative: it was produce more or perish. By around 50,000 BC,
mankind had spilled over from the Old World to New Guinea and
Australasia, and by 10,000 BC (perhaps much earlier) to the Americas as
well (during the last Ice Age the lowering of the oceans made it possible
to cross by land bridge from Siberia to Alaska). But when the ice caps
melted around ten thousand years ago and the seas rose once more, there
were no longer huge tracts of land filled with game but empty of humans
and so ripe for colonization. Mankind faced its first ecological crisis – its
first survival test.
Necessity proved the mother of invention, and Stone Age stalkers,
faced with famine – elk and gazelle had thinned out, leaving hogs, rabbits
and rodents – were forced to grow their own food and settle in one place.
Agriculture enhanced mankind’s capacity to harness natural resources,
selectively breeding wild grasses into domesticated varieties of grains,
and bringing dogs, cattle, sheep, goats, pigs, horses and poultry under
control. This change had the rapidity of a revolution: until around 10,000
years ago, almost all human groups were hunter-gatherers, but within a
few thousand years cultivators and pastoralists predominated. The
‘neolithic revolution’ was truly epochal.
In the fertile crescent of the Middle East, wheat, barley, peas and
lentils were cultivated, and sheep, pigs and goats herded; the neolithic
peoples of south-east Asia exploited rice, sweet potatoes, ducks and
chickens; in Mesoamerica, it was maize, beans, cassava, potatoes and
guinea pigs. The land which a nomadic band would have stripped like
locusts before moving on was transformed by new management
techniques into a resource reservoir capable of supporting thousands, year
in, year out. And once agriculture took root, with its systematic planting
of grains and lentils and animal husbandry, numbers went on spiralling,
since more could be fed. The labour-intensiveness of clearing woodland
and scrub, weeding fields, harvesting crops and preparing food
encouraged population growth and the formation of social hierarchies,
towns, courts and kingdoms. But while agriculture rescued people from
starvation, it unleashed a fresh danger: disease.
The agricultural revolution ensured human domination of planet earth:
the wilderness was made fertile, the forests became fields, wild beasts
were tamed or kept at bay; but pressure on resources presaged the
disequilibrium between production and reproduction that provoked later
Malthusian crises, as well as leading to ecological deterioration. As
hunters and gatherers became shepherds and farmers, the seeds of disease
were sown. Prolific pathogens once exclusive to animals were transferred
to swineherds and goatherds, ploughmen and horsemen, initiating the
ceaseless evolutionary adaptations which have led to a current situation
in which humans share no fewer than sixty-five micro-organic diseases
with dogs (supposedly man’s best friend), and only slightly fewer with
cattle, sheep, goats, pigs, horses and poultry.
Many of the worst human diseases were created by proximity to animals.
Cattle provided the pathogen pool with tuberculosis and viral poxes like
smallpox. Pigs and ducks gave humans their influenzas, while horses
brought rhinoviruses and hence the common cold. Measles, which still
kills a million children a year, is the result of rinderpest (canine
distemper) jumping between dogs or cattle and humans. Moreover, cats,
dogs, ducks, hens, mice, rats and reptiles carry bacteria like Salmonella,
leading to often fatal human infections; water polluted with animal faeces
also spreads polio, cholera, typhoid, viral hepatitis, whooping cough and
diphtheria.
Settlement helped disease to settle in, attracting disease-spreading
insects, while worms took up residence within the human body.
Parasitologists and palaeopathologists have shown how the parasitic
roundworm Ascaris, a nematode growing to over a foot long, evolved in
humans, probably from pig ascarids, producing diarrhoea and
malnutrition. Other helminths or wormlike fellow-travellers became
common in the human gut, including the Enterobius (pinworm or
threadworm), the yards-long hookworm, and the filarial worms which
cause elephantiasis and African river blindness. Diseases also established
themselves where agriculture depended upon irrigation – in
Mesopotamia, Egypt, India and around the Yellow (Huang) River in
China. Paddyfields harbour parasites able to penetrate the skin and enter
the bloodstream of barefoot workers, including the forked-tailed blood
fluke Schistosoma which utilizes aquatic snails as a host and causes
bilharzia or schistosomiasis (graphically known as ‘big belly’),
provoking mental and physical deterioration through the chronic
irritation caused by the worm. Investigation of Egyptian mummies has
revealed calcified eggs in liver and kidney tissues, proving the presence
of schistosomiasis in ancient Egypt. (Mummies tell us much more about
the diseases from which Egyptians suffered; these included gallstones,
bladder and kidney stones, mastoiditis and numerous eye diseases, and
many skeletons show evidence of rheumatoid arthritis.) In short,
permanent settlement afforded golden opportunities for insects, vermin
and parasites, while food stored in granaries became infested with
insects, bacteria, fungoid toxins and rodent excrement. The scales of
health tipped unfavourably, with infections worsening and human vitality
declining.*
Moreover, though agriculture enabled more mouths to be fed, it meant
undue reliance on starchy cereal monocultures like maize, high in
calories but low in proteins, vitamins and minerals; reduced nutritional
levels allowed deficiency diseases like pellagra, marasmus, kwashiorkor
and scurvy to make their entry onto the human stage. Stunted people are
more vulnerable to infections, and it is a striking comment on ‘progress’
that neolithic skeletons are typically some inches shorter than their
palaeolithic precursors.
MALARIA
Settlement also brought malaria. ‘There is no doubt’, judged the
distinguished Australian immunologist, Macfarlane Burnet (1899–1985),
‘that malaria has caused the greatest harm to the greatest number’ – not
through cataclysms, as with bubonic plague, but through its continual
winnowing effect. First in sub-Saharan Africa and elsewhere since,
conversion of forests into farmland has created environments tailor-made
for mosquitoes: warm waterholes, furrows and puddles ideal for rapid
breeding. Malaria is worth pausing over, since it has coexisted with
humans for thousands of years and remains out of control across much of
the globe.
The symptoms of malarial fevers were familiar to the Greeks, but were
not explained until the advent of tropical medicine around 1900. They are
produced by the microscopic protozoan parasite Plasmodium, which lives
within the body of an Anopheles mosquito, and is transmitted to humans
through mosquito bites. The parasites move through the bloodstream to
the liver, where they breed during an incubation stage of a couple of
weeks. Returning to the blood, they attack red blood cells, which break
down, leading to waves of violent chills and high fever.
Malarial parasites have distinct periodicities. Plasmodium vivax, the
organism causing benign tertian malaria, once present in the English
fenlands, has an incubation period of ten to seventeen days. The fever
lasts from two to six hours, returning every third day (hence ‘tertian’);
marked by vomiting and diarrhoea, such attacks may recur for two
months or longer. In time, as Greek doctors observed, the spleen enlarges,
and the patient becomes anaemic and sometimes jaundiced. Quartan
malaria, caused by Plasmodium malariae, is another mild variety.
Malignant tertian malaria, caused by Plasmodium falciparum, is the
most lethal, producing at least 95 per cent of all malarial deaths. The
incubation period is shorter but the fever more prolonged; it may be
continuous, remittent or intermittent. Plasmodium falciparum proliferates
fast, producing massive destruction of red blood cells and hence
dangerous anaemia; the liver and spleen also become enlarged.
Malaria may sometimes appear as quotidian fever, with attacks lasting
six to twelve hours – the result of multiple infection. Patients may also
develop malarial cachexia, with yellowing of the skin and severe spleen
and liver enlargement; autopsy shows both organs darkened with a black
pigment derived from the haemoglobin of the destroyed red blood cells.
What the ancients called melancholy may have been a malarial condition.
Malaria shadowed agricultural settlements. From Africa, it became
established in the Near and Middle East and the Mediterranean littoral.
The huge attention Graeco-Roman medicine paid to ‘remittent fevers’
shows how seriously the region was affected, and some historians
maintain the disease played its part in the decline and fall of the Roman
empire. Within living memory, malaria remained serious in the Roman
Campagna and the Pontine marshes along Italy’s west coast.
Coastal Africa was and remains heavily malarial, as are the Congo, the
Niger and hundreds of other river basins. Indigenous West African
populations developed a genetically controlled characteristic, the ‘sickle-cell’, which conferred immunity against virulent Plasmodium falciparum.
But, though protective, this starves its bearers, who are prone to debility
and premature death: typical of such evolutionary trade-offs, gains and
losses are finely balanced.
India was also ripe for malarial infection. Ayurvedic medical texts (see
Chapter Six) confirm the antiquity of the disease in the subcontinent.
China, too, became heavily infected, especially the coastal strip from
Shanghai to Macao. And from the sixteenth century Europeans shipped it
to Mesoamerica: vivax malaria went to the New World in the blood of the
Spanish conquistadores, while falciparum malaria arrived with the
African slaves whom the Europeans imported to replace the natives they
and their pestilences had wiped out.
Malaria was just one health threat among many which set in with
civilization as vermin learned to cohabit with humans, insects spread
gastroenteric disorders, and contact with rodents led to human rickettsial
(lice-, mite- and tick-borne) arbo diseases like typhus. Despite such
infections encouraged by dense settlement and its waste and dirt, man’s
restless inventive energies ensured that communities, no matter how
unhealthy, bred rising populations; and more humans spawned more
diseases in upward spirals, temporarily and locally checked but never
terminated. Around 10,000 BC, before agriculture, the globe’s human
population may have been around 5 million; by 500 BC it had probably
leapt to 100 million; by the second century AD that may have doubled; the
1990 figure was some 5,292 million, with projections suggesting 12
billion by 2100.
Growing numbers led to meagre diets, the weak and poor inevitably
bearing the brunt. But though humans were often malnourished, parasite-riddled and pestilence-smitten, they were not totally defenceless.
Survivors of epidemics acquired some protection, and the mechanisms of
evolution meant that these acquired sophisticated immune systems
enabling them to coexist in a ceaseless war with their micro-organic
assailants. Immunities passed from mothers across the placenta or
through breast-feeding gave infants some defence against germ invasion.
Tolerance was likewise developed towards parasitic worms, and certain
groups developed genetic shields, as with the sickle-cell trait. Biological
adaptation might thus take the edge off lethal afflictions.
THE ERA OF EPIDEMICS
Some diseases, however, were not so readily coped with: those caused by
the zoonoses (animal diseases transmissible to man) which menaced once
civilization developed. By 3000 BC cities like Babylon, with populations
of scores of thousands, were rising in Mesopotamia and Egypt, in the
Indus Valley and on the Yellow River, and later in Mesoamerica. In the
Old World, such settlements often maintained huge cattle herds, from
which lethal pathogens, including smallpox, spread to humans, while
originally zoonotic conditions – diphtheria, influenza, chicken-pox,
mumps – and other illnesses also had a devastating impact. Unlike
malaria, these needed no carriers; being directly contagious, they spread
readily and rapidly.
The era of epidemics began. And though some immunity would
develop amongst the afflicted populations, the incessant outreach of
civilization meant that merchants, mariners and marauders would
inevitably bridge pathogen pools, spilling diseases onto virgin
susceptibles. One nation’s familiar ‘tamed’ disease would be another’s
plague, as trade, travel and war detonated pathological explosions.
The immediate consequence of the invasion of a town by smallpox or
another infection was a fulminating epidemic and subsequent decimation.
Population recovery would then get under way, only for survivors’ heirs
to be blitzed by the same or a different pestilence, and yet another, in tide
upon tide. Settlements big enough to host such contagions might shrink
until they became too small to sustain them. With almost everybody slain
or immune, the pestilences
would withdraw, victims of their own success, moving on to storm other
virgin populations, like raiders seeking fresh spoils. New diseases thus
operated as brutal Malthusian checks, sometimes shaping the destinies of
nations.
Cities assumed a decisive epidemiological role, being magnets for
pathogens no less than people. Until the nineteenth century, towns were
so insanitary that their populations never replaced themselves by
reproduction, multiplying only thanks to the influx of rural surpluses who
were tragically infection-prone. In this challenge and response process,
sturdy urban survivors turned into an immunological elite – a virulently
infectious swarm perilous to less seasoned incomers, confirming the
notoriety of towns as death-traps.
The Old Testament records the epidemics the Lord hurled upon the
Egypt of the pharaohs, and from Greek times historians noted their
melancholy toll. The Peloponnesian War of 431 to 404 BC, the ‘world
war’ between Athens and Sparta, spotlights the traffic in pestilence that
came with civilization. Before that war the Greeks had suffered from
malaria and probably tuberculosis, diphtheria and influenza, but they had
been spared truly calamitous plagues. Reputedly beginning in Africa and
spreading to Persia, an unknown epidemic hit Greece in 430 BC, and its
impact on Athens was portrayed by Thucydides (460 – after 404 BC).
Victims were poleaxed by headaches, coughing, vomiting, chest pains
and convulsions. Their bodies became reddish or livid, with blisters and
ulcers; the malady often descended into the bowels before death spared
sufferers further misery. The Greek historian thought it killed a quarter of
the Athenian troops, persisting on the mainland for a further four years
and annihilating a similar proportion of the population.
What was it? Smallpox, plague, measles, typhus, ergotism and even
syphilis have been proposed in a parlour game played by epidemiologists.
Whatever it was, by killing or immunizing them, it destroyed the Greeks’
ability to host it and, proving too virulent for its own good, the disease
disappeared. With it passed the great age of Athens. Most early nations
probably experienced such disasters, but Greece alone had a Thucydides
to record it.
Epidemics worsened with the rise of Rome. With victories in
Macedonia and Greece (146 BC), Persia (64 BC) and finally Egypt (30 BC),
the Roman legions vanquished much of the known world, but deadly
pathogens were thus given free passage around the empire, spreading to
the Eternal City itself. The first serious outbreak, the so-called Antonine
plague (probably smallpox which had smouldered in Africa or Asia
before being brought back from the Near East by Roman troops) slew a
quarter of the inhabitants in stricken areas between AD 165 and 180, some
five million people in all. A second, between AD 211 and 266, reportedly
destroyed some 5,000 a day in Rome at its height, while scourging the
countryside as well. The virulence was immense because populations had
no resistance. Smallpox and measles had joined the Mediterranean
epidemiological melting-pot, alongside the endemic malaria.
Wherever it struck a virgin population, measles too proved lethal.
There are some recent and well-documented instances of such strikes. In
his Observations Made During the Epidemic of Measles on the Faroe
Islands in the Year 1846, Peter Panum (1820–85) reported how measles
had attacked about 6,100 out of 7,864 inhabitants on a remote island
which had been completely free of the disease for sixty-five years. In the
nineteenth century, high mortality was also reported in measles
epidemics occurring in virgin soil populations (‘island laboratories’) in
the Pacific Ocean: 40,000 deaths in a population of 150,000 in Hawaii in
1848, 20,000 (perhaps a quarter of the population) on Fiji in 1874.
Improving communications also widened disease basins in the Middle
East, the Indian subcontinent, South Asia and the Far East. Take Japan:
before AD 552, the archipelago had apparently escaped the epidemics
blighting the Chinese mainland. In that year, Buddhist missionaries
visited the Japanese court, and shortly afterwards smallpox broke out. In
585 there was a further eruption of either smallpox or measles. Following
centuries brought waves of epidemics every three or four years, the most
significant being smallpox, measles, influenza, mumps and dysentery.
This alteration of occasional epidemic diseases into endemic ones
typical of childhood – it mirrors the domestication of animals –
represents a crucial stage in disease ecology. Cities buffeted by lethal
epidemics which killed or immunized so many that the pathogens
themselves disappeared for lack of hosts, eventually became big enough
to house sufficient non-immune individuals to retain the diseases
permanently; for this an annual case total of something in the region of
5,000–40,000 may be necessary. Measles, smallpox and chickenpox
turned into childhood ailments which affected the young less severely
and conferred immunity to future attacks.
The process marks an epidemiological watershed. Through such
evolutionary adaptations – epidemic diseases turning endemic –
expanding populations accommodated and surmounted certain once-lethal pestilences. Yet they remained exposed to other dire infections,
against which humans were to continue immunologically defenceless,
because they were essentially diseases not of humans but of animals. One
such is bubonic plague, which has struck humans with appalling ferocity
whenever populations have been caught up in a disease net involving rats,
fleas and the plague bacillus (Yersinia pestis). Diseases like plague,
malaria, yellow fever, and others with animal reservoirs are uniquely
difficult to control.
PLAGUE
Bubonic plague is basically a rodent disease. It strikes humans when
infected fleas, failing to find a living rat once a rat host has been killed,
pick a human instead. When the flea bites its new host, the bacillus enters
the bloodstream. Filtered through the nearest lymph node, it leads to the
characteristic swelling (bubo) in the neck, groin or armpit. Bubonic
plague rapidly kills about two-thirds of those infected. There are two
other even more fatal forms: septicaemic and, deadliest of all, pneumonic
plague, which does not even need an insect vector, spreading from person
to person directly via the breath.
The first documented bubonic plague outbreak occurred, predictably
enough, in the Roman empire. The plague of Justinian originated in Egypt
in AD 540; two years later it devastated Constantinople, going on to
massacre up to a quarter of the eastern Mediterranean population, before
spreading to western Europe and ricocheting around the Mediterranean
for the next two centuries. Panic, disorder and murder reigned in the
streets of Constantinople, wrote the historian Procopius: up to 10,000
people died each day, until there was no place to put the corpses. When
this bout of plague ended, 40 per cent of the city’s population were dead.
It was a subsequent plague cycle, however, which made the greatest
impact. Towards 1300 the Black Death began to rampage through Asia
before sweeping westwards through the Middle East to North Africa and
Europe. Between 1346 and 1350 Europe alone lost perhaps twenty million
to the disease. And this pandemic was just the first wave of a bubonic
pestilence that raged until about 1800 (see Chapter 5).
Trade, war and empire have always sped disease transmission between
populations, a dramatic instance being offered by early modern Spain.
The cosmopolitan Iberians became subjects of a natural Darwinian
experiment, for their Atlantic and Mediterranean seaports served as
clearing-houses for swarms of diseases converging from Africa, Asia and
the Americas. Survival in this hazardous environment necessitated
becoming hyper-immune, weathering a hail of childhood diseases –
smallpox, measles, diphtheria and the like, gastrointestinal infections and
other afflictions rare today in the West. The Spanish conquistadores who
invaded the Americas were, by consequence, immunological supermen,
infinitely more deadly than ‘typhoid Mary’; disease gave them a fatal
superiority over the defenceless native populations they invaded.
TYPHUS
Though the Black Death ebbed away from Europe, war and the
movements of migrants ensured that epidemic disease did not go away,
and Spain, as one of the great crossroads, formed a flashpoint of disease.
Late in 1489, in its assault on Granada, Islam’s last Iberian stronghold,
Spain hired some mercenaries who had lately been in Cyprus fighting the
Ottomans. Soon after their arrival, Spanish troops began to go down with
a disease never before encountered and possessing the brute virulence
typical of new infections: typhus. It had probably emerged in the Near
East during the Crusades before entering Europe where Christian and
Muslim armies clashed.
It began with headache, rash and high fever, swelling and darkening of
the face; next came delirium and the stupor giving the disease its name –
typhos is Greek for ‘smoke’. Inflammation led to gangrene that rotted
fingers and toes, causing a hideous stench. Spain lost 3,000 soldiers in the
siege but six times as many to typhus.
Having smuggled itself into Spain, typhus filtered into France and
beyond. In 1528, with the Valois (French) and Habsburg (Spanish)
dynasties vying for European mastery, it struck the French army
encircling Naples; half the 28,000 troops died within a month, and the
siege collapsed. As a result, Emperor Charles V of Spain was left master
of Italy, controlling Pope Clement VII – with important implications for
Henry VIII’s marital troubles and the Reformation in England.
With the Holy Roman Empire fighting the Turks in the Balkans, typhus
gained a second bridgehead into Europe. In 1542, the disease killed
30,000 Christian soldiers on the eastern front; four years later, it struck
the Ottomans, terminating their siege of Belgrade; while by 1566 the
Emperor Maximilian II had so many typhus victims that he was driven to
an armistice. His disbanded troops relayed the disease back to western
Europe, and so to the New World, where it joined measles and smallpox
in ravaging Mexico and Peru. Typhus subsequently smote Europe during
the Thirty Years War (1618–48), and remained widespread, devastating
armies as ‘camp fever’, dogging beggars (‘road fever’), depleting jails (‘jail fever’) and ships (‘ship fever’).
It was typhus which joined General Winter to turn Napoleon’s Russian
invasion into a rout. The French crossed into Russia in June 1812.
Sickness set in after the fall of Smolensk. Napoleon reached Moscow in
September to find the city abandoned. During the next five weeks, the
grande armée suffered a major typhus epidemic. By the time Moscow
was evacuated, tens of thousands had fallen sick, and those unfit to travel
were abandoned. Thirty thousand cases were left to die in Vilna alone,
and only a trickle finally reached Warsaw. Of the 600,000 men in
Napoleon’s army, few returned, and typhus was a major reason.
Smallpox, plague and typhus indicate how war and conquest paved the
way for the progress of pathogens. A later addition, at least as far as the
West was concerned, was cholera, the most spectacular ‘new’ disease of
the nineteenth century.
COLONIZATION AND INDUSTRIALIZATION
Together with civilization and commerce, colonization has contributed to
the dissemination of infections. The Spanish conquest of America has
already been mentioned; the nineteenth-century scramble for Africa also
caused massive disturbance of indigenous populations and environmental
disruption, unleashing terrible epidemics of sleeping sickness and other
maladies. Europeans exported tuberculosis to the ‘Dark Continent’,
especially once native labourers were jammed into mining compounds
and the slums of Johannesburg. In the gold, diamond and copper
producing regions of Africa, the operations of mining companies like De
Beers and Union Minière de Haute Katanga brought family disruption
and prostitution. Capitalism worsened the incidence of infectious and
deficiency diseases for those induced or forced to abandon tribal ways
and traditional economies – something which medical missionaries were
pointing out from early in the twentieth century.
While in the period after Columbus’s voyage, advances in agriculture,
plant-breeding and crop exchange between the New and Old Worlds in
some ways improved food supply, for those newly dependent upon a
single staple crop the consequence could be one of the classic deficiency
diseases: scurvy, beriberi or kwashiorkor (from a Ghanaian word
meaning a disease suffered by a child displaced from the breast). Those
heavily reliant on maize in Mesoamerica and later, after it was brought
back by the conquistadores, in the Mediterranean, frequently fell victim
to pellagra, caused by niacin deficiency and characterized by diarrhoea,
dermatitis, dementia and death. Another product of vitamin B1 (thiamine)
deficiency is beriberi, associated with Asian rice cultures.
The Third World, however, has had no monopoly on dearth and
deficiency diseases. The subjugation of Ireland by the English, complete
around 1700, left an impoverished native peasantry ‘living in Filth and
Nastiness upon Butter-milk and Potatoes, without a Shoe or stocking to
their Feet’, as Jonathan Swift observed. Peasants survived through
cultivating the potato, a New World import and another instance of how
the Old World banked upon gains from the New. A wonderful source of
nutrition, rich in vitamins B1, B2 and C as well as a host of essential
minerals, potatoes kept the poor alive and well-nourished, but when in
1727 the oat crop failed, the poor ate their winter potatoes early and then
starved. The subsequent famine led Swift to make his ironic ‘modest
proposal’ as to how to handle the island’s surplus population better in
future:
a young healthy Child, well nursed is, at a Year old, a most
delicious, nourishing and wholesome Food; whether Stewed,
Roasted, Baked, or Boiled; and, I make no doubt, that it will
equally serve in a Fricassee, or Ragout. . . I grant this Food will be
somewhat dear, and therefore very proper for Landlords.
With Ireland’s population zooming, disaster was always a risk. From a
base of two million potato-eating peasants in 1700, the nation multiplied
to five million by 1800 and to close on nine million by 1845. The potato
island had become one of the world’s most densely populated places.
When the oat and potato crops failed, starving peasants became prey to
various disorders, notably typhus, predictably called ‘Irish fever’ by the
landlords. During the Great Famine of 1845–7, typhus worked its way
through the island; scurvy and dysentery also returned. Starving children
aged so that they looked like old men. Around a million people may have
died in the famine and in the next decades millions more emigrated. Only
a small percentage of deaths were due directly to starvation; the
overwhelming majority occurred from hunger-related disease: typhus,
relapsing fevers and dysentery.
The staple crops introduced by peasant agriculture and commercial
farming thus proved mixed blessings, enabling larger numbers to survive
but often with their immunological stamina compromised. There may
have been a similar trade-off respecting the impact of the Industrial
Revolution, first in Europe, then globally. While facilitating population
growth and greater (if unequally distributed) prosperity, industrialization
spread insanitary living conditions, workplace illnesses and ‘new
diseases’ like rickets. And even prosperity has had its price, as Cheyne
suggested. Cancer, obesity, gallstones, coronary heart disease,
hypertension, diabetes, emphysema, Alzheimer’s disease and many other
chronic and degenerative conditions have grown rapidly among today’s
wealthy nations. More are of course now living long enough to develop
these conditions, but new lifestyles also play their part, with cigarettes,
alcohol, fatty diets and narcotics, those hallmarks of life in the West,
taking their toll. Up to one third of all premature deaths in the West are
said to be tobacco-related; in this, as in so many other matters, parts of
the Third World are catching up fast.
And all the time ‘new’ diseases still make their appearance, either as
evolutionary mutations or as ‘old’ diseases flushed out of their local
environments (their very own Pandora’s box) and loosed upon the wider
world as a result of environmental disturbance and economic change. The
spread of AIDS, Ebola, Lassa and Marburg fevers may all be the result of
the impact of the West on the ‘developing’ world – legacies of
colonialism.
Not long ago medicine’s triumph over disease was taken for granted.
At the close of the Second World War a sequence of books appeared in
Britain under the masthead of ‘The Conquest Series’. These included The
Conquest of Disease, The Conquest of Pain, The Conquest of
Tuberculosis, The Conquest of Cancer, The Conquest of the Unknown and
The Conquest of Brain Mysteries, and they celebrated ‘the many wonders
of contemporary medical science today’. And this was before the further
‘wonder’ advances introduced after 1950, from tranquillizers to
transplant surgery. A signal event was the world-wide eradication of
smallpox in 1977.
In spite of such advances, expectations of a conclusive victory over
disease should always have seemed naive since that would fly in the face
of a key axiom of Darwinian biology: ceaseless evolutionary adaptation.
And that is something disease accomplishes far better than humans, since
it possesses the initiative. In such circumstances it is hardly surprising
that medicine has proved feeble against AIDS, because the human
immunodeficiency virus (HIV) mutates rapidly, frustrating the
development of vaccines and antiviral drugs.
The systematic impoverishment of much of the Third World, the
disruption following the collapse of communism, and the rebirth of an
underclass in the First World resulting from the free-market economic
policies dominant since the 1980s, have all assisted the resurgence of
disease. In March 1997 the chairman of the British Medical Association
warned that Britain was slipping back into the nineteenth century in terms
of public health. Despite dazzling medical advances, world health
prospects at the close of the twentieth century seem much gloomier than
half a century ago.
The symbiosis of disease with society, the dialectic of challenge and
adaptation, success and failure, sets the scene for the following
discussion of medicine. From around 2000 BC, medical ideas and
remedies were written down. That act of recording did not merely make
early healing accessible to us; it transformed medicine itself. But there is
more to medicine than the written record, and the remainder of this
chapter addresses wider aspects of healing – customary beliefs about
illness and the body, the self and society – and glances at medical beliefs
and practices before and beyond the literate tradition.
MAKING SENSE OF SICKNESS
Though prehistoric hunting and gathering groups largely escaped
epidemics, individuals got sick. Comparison with similar groups today,
for instance the Kalahari bush people, suggests they would have managed
their health collectively, without experts. A case of illness or debility
directly affected the well-being of the band: a sick or lame person is a
serious handicap to a group on the move; hence healing rituals or
treatment would be a public matter rather than (as Western medicine has
come to see them) private.
Anthropologists sometimes posit two contrasting ‘sick roles’: one in
which the sick person is treated as a child, fed and protected during
illness or incapacity; the other in which the sufferer either leaves the
group or is abandoned or, as with lepers in medieval Europe, ritually
expelled, becoming culturally ‘dead’ before they are biologically dead.
Hunter-gatherer bands were more likely to abandon their sick than to
succour them.
With population rise, agriculture, and the emergence of epidemics, new
medical beliefs and practices arose, reflecting growing economic,
political and social complexities. Communities developed hierarchical
systems, identified by wealth, power and prestige. With an emergent
division of labour, medical expertise became the métier of particular
individuals. Although the family remained the first line of defence
against illness, it was bolstered by medicine men, diviners, witch-smellers and shamans, and in due course by herbalists, birth-attendants,
bone-setters, barber-surgeons and healer-priests. When that first
happened we cannot be sure. Cave paintings found in France, some
17,000 years old, contain images of men masked in animal heads,
performing ritual dances; these may be the oldest surviving images of
medicine-men.
Highly distinctive was the shaman. On first encountering such folk
healers, westerners denounced them as impostors. In 1763 the Scottish
surgeon John Bell (1691–1780) described the ‘charming sessions’ he
witnessed in southern Siberia:
[the shaman] turned and distorted his body into many different
postures, till, at last, he wrought himself up to such a degree of fury
that he foamed at the mouth, and his eyes looked red and staring. He
now started up on his legs, and fell a dancing, like one distracted, till
he trod out the fire with his bare feet.
These unnatural motions were, by the vulgar, attributed to the
operations of a divinity. . . He now performed several legerdemain
tricks; such as stabbing himself with a knife, and bringing it up at
his mouth, running himself through with a sword and many others
too trifling to mention.
This Calvinist Scot was not going to be taken in by Asiatic savages:
‘nothing is more evident than that these shamans are a parcel of jugglers,
who impose on the ignorant and credulous vulgar.’ Such a reaction is
arrogantly ethnocentric: although shamans perform magical acts,
including deliberate deceptions, they are neither fakes nor mad. Common
in native American culture as well as Asia, the shaman combined the
roles of healer, sorcerer, seer, educator and priest, and was believed to
possess god-given powers to heal the sick and to ensure fertility, a good
harvest or a successful hunt. His main healing techniques have been
categorized as contagious magic (destruction of enemies, through such
means as the use of effigies) and direct magic, involving rituals to
prevent disease, fetishes, amulets (to protect against black magic), and
talismans (for good luck).
In 1912 Sir Baldwin Spencer (1860–1929) and F.J. Gillen (1856–1912)
described the practices of the aborigine medicine-man in Central
Australia:
In ordinary cases the patient lies down, while the medicine man
bends over him and sucks vigorously at the part of the body
affected, spitting out every now and then pieces of wood, bone or
stone, the presence of which is believed to be causing the injury
and pain. This suction is one of the most characteristic features of
native medical treatment, as pain in any part of the body is always
attributed to the presence of some foreign body that must be
removed.
Stone-sucking is a symbolic act. As the foreign body had been introduced
into the body of the sick man by a magical route, it had to be removed in
like manner. For the medicine-man, the foreign body in his mouth
attracts the foreign body in the patient.
As such specialist healers emerged, and as labour power grew more
valuable in structured agricultural and commercial societies, the
appropriate ‘sick role’ shifted from abandonment to one modelled on
child care. The exhausting physical labour required of farm workers
encouraged medicines that would give strength; hence, together with
drugs to relieve fevers, dysentery and pain, demand grew for stimulants
and tonics such as tobacco, coca, opium and alcohol.
In hierarchical societies like Assyria or the Egypt of the pharaohs, with
their military-political elites, illness became unequally distributed and
thus the subject of moral, religious and political teachings and judgments.
Its meanings needed to be explained. Social stratification meanwhile
offered fresh scope for enterprising healers; demand for medicines grew;
social development created new forms of healing as well as of faith,
ritual and worship; sickness needed to be rationalized and theorized. In
short, with settlement and literacy, conditions were ripe for the
development of medicine as a belief-system and an occupation.
APPROACHES TO HEALING
Like earthquakes, floods, droughts and other natural disasters, illness
colours experiences, outlooks and feelings. It produces pain, suffering
and fear, threatens the individual and the community, and raises the
spectre of that mystery of mysteries – death. Small wonder impassioned
and contested responses to sickness have emerged: notions of blame and
shame, appeasement and propitiation, and teachings about care and
therapeutics. Since sickness raises profound anxieties, medicine develops
alongside religion, magic and social ritual. Nor is this true only of
‘primitive’ societies; from Job to the novels of Thomas Mann, the
experience of sickness, ageing and death shapes the sense of the self and
the human condition at large. AIDS has reminded us (were we in danger
of forgetting) of the poignancy of sickness in the heyday of life.
Different sorts of sickness beliefs took shape. Medical ethnologists
commonly suggest a basic divide: natural causation theories, which view
illness as a result of ordinary activities that have gone wrong – for
example, the effects of climate, hunger, fatigue, accidents, wounds or
parasites; and personal or supernatural causation beliefs, which regard
illness as harm wreaked by a human or superhuman agency. Typically,
the latter is deliberately inflicted (as by a sorcerer) through magical
devices, words or rituals; but it may be unintentional, arising out of an
innate capacity for evil, such as that possessed by witches. Pollution from
an ‘unclean’ person – commonly a corpse or a menstruating woman – may
thus produce illness. Early beliefs ascribed special prominence to social
or supernatural causes; illness was thus injury, and was linked with
aggression.
This book focuses mostly upon the naturalistic notions of disease
developed by and since the Greeks, but mention should be made of the
supernatural ideas prominent in non-literate societies and present
elsewhere. Such ideas are often subdivided by scholars into three
categories: mystical, in which illness is the automatic consequence of an
act or experience; animistic, in which the illness-causing agent is a
personal supernatural being; and magical, where a malicious human
being uses secret means to make someone sick. The distribution of these
beliefs varies. Africa abounds in theories of mystical retribution, in
which broken taboos are to blame; ancestors are commonly blamed for
sickness. Witchcraft, the evil eye and divine retribution are frequently
used to explain illness in India, as they were in educated Europe up to the
seventeenth century, and in peasant parts beyond that time.
Animistic or volitional illness theories take various forms. Some
blame objects for illness – articles which are taboo, polluting or
dangerous, like the planets within astrology. Other beliefs blame people –
sorcerers or witches. Sorcerers are commonly thought to have shot some
illness-causing object into the victim, thus enabling healers to ‘extract’ it
via spectacular rituals. The search for a witch may involve divination or
public witch-hunts, with cathartic consequences for the community and
calamity for the scapegoat, who may be punished or killed. Under such
conditions, illness plays a key part in a community’s collective life,
liable to disrupt it and lead to persecutions, in which witchfinders and
medicine men assume a key role.
There are also systems that hinge on spirits – and the recovery of lost
souls. The spirits of the dead, or nature spirits like wood demons, are
believed to attack the sick; or the patient’s own soul may go missing. By
contrast to witchcraft, these notions of indirect causation allow for more
nuanced explanations of the social troubles believed to cause illness;
there need be no single scapegoat, and purification may be more general.
Shamanistic healers will use their familiarity with worlds beyond to
grasp through divination the invisible causes behind illness. Some groups
use divining apparatus – shells, bones or entrails; a question will be put to
an oracle and its answer interpreted. Other techniques draw on possession
or trance to fathom the cause of sickness.
Responses to sickness may take many forms. They may simply involve
the sick person hiding away on his own, debasing himself with dirt and
awaiting his fate. More active therapies embrace two main techniques –
herbs and rituals. Medicines are either tonics to strengthen the patient or
‘poisons’ to drive off the aggressor. Choice of the right herbal remedy
depends on the symbolic properties of the plant and on its empirical
effects. Some are chosen for their material properties, others for their
colour, shape or resonances within broader webs of symbolic meaning.
But if herbs may be symbolic, they may also be effective; after much
pooh-poohing of ‘primitive medicine’, pharmacologists studying
ethnobotany now acknowledge that such lore provided healers with
effective analgesics, anaesthetics, emetics, purgatives, diuretics,
narcotics, cathartics, febrifuges, contraceptives and abortifacients. From
the herbs traditionally in use, modern medicine has derived such
substances as salicylic acid, ipecac, quinine, cocaine, colchicine,
ephedrine, digitalis, ergot, and other drugs besides.
Medicines are not necessarily taken only by the patient, for therapy is
communal and in traditional healing it is the community that is being put
to rights, the patient being simply the stand-in. Certain healing rituals are
rites de passage, with phases of casting out and reincorporation; others
are dramas; and often the patient is being freed from unseen forces
(exorcism). Some rituals wash a person clean; others use smoke to drive
harm out. A related approach, Dreckapotheke, involves dosing the patient
with disgusting decoctions or fumigations featuring excrement, noxious
insects, and so forth, which drive the demons away.
A great variety of healing methods employ roots and leaves in
elaborate magical rituals, and all communities practise surgery of some
sort. Many tribes have used skin scarifications as a form of protection.
Other kinds of body decoration, clitoridectomies and circumcision are
common (circumcision was performed in Egypt from around 2000 BC).
To combat bleeding, traditional surgeons used tourniquets or
cauterization, or packed the wound with absorbent materials and
bandaged it. The Masai in East Africa amputate fractured limbs, but
medical amputation has been rare. There is archaeological evidence,
however, from as far apart as France, South America and the Pacific that
as early as 5000 BC trephining was performed, which involved cutting a
small hole in the skull. Flint cutting tools were used to scrape away
portions of the cranium, presumably to deliver sufferers from some devil
tormenting the soul. Much skill was required, and callus formations on
the edges of the bony hole show that many of the patients survived.
BODY LORE
Illness is thus not just biological but social, and concepts of the body and
its sicknesses draw upon powerful dichotomies: nature and culture, the
sacred and the profane, the raw and the cooked. Body concepts
incorporate beliefs about the body politic at large; communities with
rigid caste and rank systems thus tend to prescribe rigid rules about
bodily comportment. What is considered normal health and what
constitutes sickness or impairment are negotiable, and the conventions
vary from community to community and within subdivisions of societies,
dependent upon class, gender and other factors. Maladies carry different
moral charges. ‘Sick roles’ may range from utter stigmatization
(common with leprosy, because it is so disfiguring) to the notion that the
sick person is special or semi-sacred (the holy fool or the divine
epileptic). An ailment can be a rite de passage, a childhood illness an
essential preliminary to entry into adulthood.
Death affords a good instance of the scope for different interpretations
in the light of different criteria. The nature of ‘physical’ death is highly
negotiable; in recent times western tests have shifted from cessation of
spontaneous breathing to ‘brain death’. This involves more than the
matter of a truer definition: it corresponds with western values (which
prize the brain) and squares with the capacities of hospital technology.
Some cultures think of death as a sudden happening, others regard dying
as a process advancing from the moment of birth and continuing beyond
the grave. Bodies are thus languages as well as envelopes of flesh; and
sick bodies have eloquent messages for society.
It became common wisdom in the West from around 1800 that the
medicine of orientals and ‘savages’ was mere mumbo-jumbo, and had to
be superseded. Medical missions moved into the colonies alongside their
religious brethren, followed in due course by the massive health
programmes of the modern international aid organizations. By all such
means Europeans and Americans sought to stamp out indigenous
practices and beliefs, from the African witchdoctors and spirit mediums
to the vaidyas and hakims of Hindu and Islamic medicine in Asia. Native
practices were grounded in superstition and were perilous to boot;
colonial authorities moved in to prohibit practices and cults which they
saw as medically, religiously or politically objectionable, thereby
becoming arbiters of ‘good’ and ‘bad’ medicine. Western medicine grew
aggressive, convinced of its unique scientific basis and superior
therapeutic powers.
This paralleled prejudices developing towards folk or religious
medicine within Europe itself. The sixteenth-century French physician
Laurent Joubert (1529–83) wrote a huge tome exposing ‘common
fallacies’. Erreurs populaires (1578) systematically denounced the
‘vulgar errors’ and erroneous sayings of popular medicine regarding
pregnancy, childbirth, lying-in, infant care, children’s diseases and so
forth, insisting that ‘such errors can be most harmful to man’s health and
even his life’. ‘Sometimes babies, boys as well as girls, are born with red
marks on their faces, necks, shoulders or other parts of the body,’ Joubert
noted. ‘It is said that this is because they were conceived while their
mother had her period … But I believe that it is impossible that a woman
should conceive during her menstrual flow.’ Another superstition was
that whatever was imprinted upon the imagination of the mother at the
time of conception would leave a mark on the body of her baby.
Elite medicine sought to discredit health folklore, but popular
medicine has by no means always been misguided or erroneous. Recent
pharmacological investigations have demonstrated the efficacy of many
traditional cures. It is now known, for instance, that numerous herbal
decoctions – involving rue, savin, wormwood, pennyroyal and juniper –
traditionally used by women to regulate fertility have some efficacy.
Today’s ‘green pharmacy’ aims at the recovery of ancient popular
medical lore, putting it to the scientific test.
Once popular medicine had effectively been defeated and no longer
posed a threat, scholarly interest in it grew, and great collections of
‘medical folklore’ and ‘medical magic’, stressing their quaintness, were
published in the nineteenth century. But it is a gross mistake to view folk
medicine as a sack of bizarre beliefs and weird and wonderful remedies.
Popular medicine is based upon coherent conceptions of the body and of
nature, rooted in rural society. Different body parts are generally
represented as linked to the cosmos; health is conceived as a state of
precarious equilibrium among components in a fluid system of relations;
and healing mainly consists of re-establishing this balance when lost.
Such medical beliefs depend on notions of opposites and similars. For
example, to stop a headache judged to emanate from excessive heat, cold
baths to the feet might be recommended; or to cure sciatica, an incision
to the ear might be made on the side opposite to the pain.
Traditional medicine views the body as the centre or the epitome of the
universe, with manifold sympathies linking mankind and the natural
environment. Analogy and signatures are recurrent organizing principles
in popular medicine. By their properties (colour, form, smell, heat,
humidity, and so on) the elements of nature signal their meaningful
associations with the human body, well and sick. For instance, in most
traditional medicine systems, red is used to cure disorders connected with
blood; geranium or oil of St John’s wort are used against cuts.
Yellow plants such as saffron crocus (Crocus sativus) were chosen for
jaundice, while the white spots on the leaves of lungwort (Pulmonaria
officinalis) showed that the plant was good for lung disease, and so on.
Sometimes it was argued that remedies had been put in places convenient
for people to use. So, in England, the bark of the white willow (Salix
alba) was valued for agues, because the tree grows in moist or wet soil,
where agues chiefly abound, as the Revd Edmund Stone, of Chipping
Norton in Oxfordshire, observed in his report to the Royal Society of
London in 1763:
the general maxim, that many natural maladies carry their cures
along with them, or that their remedies lie not far from their
causes, was so very apposite to this particular case, that I could
not help applying it; and that this might be the intention of
Providence here, I must own had some little weight with me.
Maintaining health required understanding one’s body. This was both a
simple matter (pain was directly experienced) and appallingly difficult,
for the body’s interior was hidden. Unable to peer inside, popular wisdom
relied upon analogy, drawing inferences from the natural world.
Domestic life gave clues for body processes – food simmering on the hob
became a natural symbol for its processing in the stomach – while magic,
folksong and fable explained how conception and birth, growth, decay
and death mirrored the seedtime and the harvest. The landscape contained
natural signs: thus peasant women made fertility shrines out of springs.
To fathom abnormalities and heal ailments, countryfolk drew upon the
suggestive qualities of strange creatures like toads and snakes (their
distinctive habits like hibernation or shedding skins implied a special
command over life and death), and also the evocative profiles of
landscape features like valleys and caves, while the phases of the moon
so obviously correlated with the menstrual cycle.
Nature prompted the idea that the healthy body had to flow. In an
agrarian society preoccupied with the weather and with the changes of the
seasons, the systems operating beneath the skin were intuitively
understood as fluid: digestion, fertilization, growth, expulsion. Not
structures but processes counted. In vernacular and learned medicine
alike, maladies were thought to migrate round the body, probing weak
spots and, like marauding bands, most perilous when they targeted central
zones. Therapeutics, it was argued, should counter-attack by forcing or
luring ailments to the extremities, like the feet, where they might be
expelled as blood, pus or scabs. In such a way of seeing, a gouty foot
might even be a sign of health, since the big toe typically afflicted was an
extremity far distant from the vital organs: a foe in the toe was trouble
made to keep its distance.
In traditional medicine, as I have said, health is a state of precarious
balance – being threatened, toppled and restored – between the body, the
universe and society. More important than curing is the aim of preventing
imbalance from occurring in the first place. Equilibrium is to be achieved
by avoiding excess and pursuing moderation. Prevention lies in living in
accord with nature, in harmony with the seasons and elements and the
supernatural powers that haunt the landscape: purge the body in spring to
clean it of corrupt humours, in summer avoid activities or foods which
are too heating. Another preventative is good diet – an idea encapsulated
in the later advice, ‘an apple a day keeps the doctor away’. Foods should
be consumed which give strength and assimilate natural products which,
resembling the body, are beneficial to it, such as wine and red meat:
‘meat makes flesh and wine makes blood’, runs a French proverb. The
idea that life is in the blood is an old one. ‘Epileptic patients are in the
habit of drinking the blood even of gladiators,’ noted the Roman author
Pliny (AD c. 23–79): ‘these persons, forsooth, consider it a most effectual
cure for their disease, to quaff the warm, breathing, blood from man
himself, and, as they apply their mouth to the wound, to draw forth his
very life.’
Clear-cut distinctions have frequently been drawn between ‘science’
and ‘superstition’ but, as historians of popular culture today insist, in
societies with both a popular and an elite tradition (high and low, or
learned and oral cultures), there has always been complex two-way
cultural traffic in knowledge, or more properly a continuum. While often
aloof and dismissive, professional medicine has borrowed extensively
from the folk tradition.
Take, for instance, smallpox inoculation. There had long been some
folk awareness in Europe of the immunizing properties of a dose of
smallpox, but it was not until around 1700 that this knowledge was turned
...