Feminist Equity Studies and Feminist Analyses Questions

Description

Please answer each section :)

Prompt: Both feminist equity studies and feminist analyses of the uses of language/rhetoric and imagery identify gendered disparities in science. In their own ways, both equity studies and studies of language and imagery contend that science has been and continues to be gendered.

First, please describe the field of feminist equity studies in relation to STEM. What do equity studies, well, study? 

What are some of the typical findings of feminist equity studies - in general and with respect to STEM specifically? What are some of the historical and ongoing patterns identified in feminist equity studies? 

What are some of the potential sources/reasons that might explain the findings in feminist equity studies in relation to STEM? What variables might cause or be correlated with gender disparities in STEM?

Next, please describe the role (historically and/or currently) of gendered language and imagery in relation to science, technology, and medicine. When analysts of language and imagery contend that ‘science is gendered,’ what does this mean? 

Simply put: How is science gendered in terms of language and imagery?

Unformatted Attachment Preview

An Introduction to Science and Technology Studies
Second Edition
Sergio Sismondo

Praise for the first edition

"This book is a wonderful tool with which to think. It offers an expansive introduction to the field of science studies, a rich exploration of the theoretical terrains it comprises and a sheaf of well-reasoned opinions that will surely inspire argument." Geoffrey C. Bowker, University of California, San Diego

"Sismondo's Introduction to Science and Technology Studies, . . . for anyone of whatever age and background starting out in STS, must be the first-choice primer: a resourceful, enriching book that will speak to many of the successes, challenges, and as-yet-untackled problems of science studies. If the introductory STS course you teach does not fit his book, change your course." Jane Gregory, ISIS, 2007

This second edition first published 2010. © 2010 Sergio Sismondo. Edition history: Blackwell Publishing Ltd (1e, 2004). Blackwell Publishing was acquired by John Wiley & Sons in February 2007; Blackwell's publishing program has been merged with Wiley's global Scientific, Technical, and Medical business to form Wiley-Blackwell. Published by John Wiley & Sons Ltd, Chichester, West Sussex, UK. ISBN 978-1-4051-8765-7 (pbk.). Library of Congress Cataloging-in-Publication Data: Sismondo, Sergio. An introduction to science and technology studies / Sergio Sismondo. – 2nd ed. Includes bibliographical references and index.
Contents

Preface
1 The Prehistory of Science and Technology Studies
2 The Kuhnian Revolution
3 Questioning Functionalism in the Sociology of Science
4 Stratification and Discrimination
5 The Strong Programme and the Sociology of Knowledge
6 The Social Construction of Scientific and Technical Realities
7 Feminist Epistemologies of Science
8 Actor-Network Theory
9 Two Questions Concerning Technology
10 Studying Laboratories
11 Controversies
12 Standardization and Objectivity
13 Rhetoric and Discourse
14 The Unnaturalness of Science and Technology
15 The Public Understanding of Science
16 Expertise and Public Participation
17 Political Economies of Knowledge
References
Index

Preface

Science & Technology Studies (STS) is a dynamic interdisciplinary field, rapidly becoming established in North America and Europe. The field is a result of the intersection of work by sociologists, historians, philosophers, anthropologists, and others studying the processes and outcomes of science, including medical science, and technology. Because it is interdisciplinary, the field is extraordinarily diverse and innovative in its approaches. Because it examines science and technology, its findings and debates have repercussions for almost every understanding of the modern world.

This book surveys a group of terrains central to the field, terrains that a beginner in STS should know something about before moving on. For the most part, these are subjects that have been particularly productive in theoretical terms, even while other subjects may be of more immediate practical interest. The emphases of the book could have been different, but they could not have been very different while still being an introduction to central topics in STS.

An Introduction to Science and Technology Studies should provide an overview of the field for any interested reader not too familiar with STS's basic findings and ideas. The book might be used as the basis for an upper-year undergraduate, or perhaps graduate-level, course in STS. But it might also be used as part of a trajectory of more focused courses on, say, the social study of medicine, STS and the environment, reproductive technologies, science and the military, or science and public policy. Because anybody putting together such courses would know how those topics should be addressed – or certainly know better than does the author of this book – these topics are not addressed here.

However the book is used, it should almost certainly be alongside a number of case studies, and probably alongside a few of the many articles mentioned in the book. The empirical examples here are not intended to replace rich detailed cases, but only to draw out a few salient features. Case studies are the bread and butter of STS. Almost all insights in the field grow out of them, and researchers and students still turn to articles based on cases to learn central ideas and to puzzle through problems. The empirical examples used in this book point to a number of canonical and useful studies. There are many more among the references to other studies published in English, and a great many more in English and in other languages that are not mentioned.

This second edition makes a number of changes. The largest is reflected in a tiny adjustment of abbreviation. In the first edition, the field's name was abbreviated S&TS.
The ampersand was supposed to emphasize the field's name as Science and Technology Studies, rather than Science, Technology, and Society, the latter of which was generally known as STS in the 1970s and 1980s. When the ampersand seemed important, the two STSs differed considerably in their approaches and subject matters: Science and Technology Studies was a philosophically radical project of understanding science and technology as discursive, social, and material activities; Science, Technology, and Society was a project of understanding social issues linked to developments in science and technology, and how those developments could be harnessed to democratic and egalitarian ideals. When the first edition of this book was written, the ampersand seemed valuable to identifying its terrain. However, the fields of STS (with or without ampersand) have expanded so rapidly that the two STSs have blended together. The first STS (with ampersand) became increasingly concerned with issues about the legitimate places of expertise, about science in public spheres, about the place of public interests in scientific decision-making. The other STS (without) became increasingly concerned with understanding the dynamics of science, technology, and medicine. Thus, many of the most exciting works have joined what would once have been seen as separate. This edition, then, increases attention to work being done on the politics of science and technology, especially where STS treats those politics in more theoretical and general terms. As a result, the public understanding of science, democracy in science and technology, and political economies of knowledge each get their own chapters in this edition, expanding the scope of the book.

Besides this large change, there is considerable updating of material from the first edition, and there are some reorganizations. In particular, the chapter on feminist epistemologies of science has been brought forward, to put it in better contact with the chapters on social constructivism and the strong programme. The four chapters on laboratories, controversies, objectivity, and creating order have been reorganized into three.

I hope that these additions and changes make the book more useful to students and teachers of STS than was the first. It is to all teachers and students in the field, and especially my own, that I dedicate this book.

Sergio Sismondo

1 The Prehistory of Science and Technology Studies

A View of Science

Let us start with a common picture of science. It is a picture that coincides more or less with where studies of science stood some 50 years ago, that still dominates popular understandings of science, and even serves as something like a mythic framework for scientists themselves. It is not perfectly uniform, but instead includes a number of distinct elements and some healthy debates. It can, however, serve as an excellent foil for the discussions that follow. At the margins of science, and discussed in the next section, is technology, typically seen as simply the application of science.

In this picture, science is a formal activity that creates and accumulates knowledge by directly confronting the natural world. That is, science makes progress because of its systematic method, and because that method allows the natural world to play a role in the evaluation of theories.
While the scientific method may be somewhat flexible and broad, and therefore may not level all differences, it appears to have a certain consistency: different scientists should perform an experiment similarly; scientists should be able to agree on important questions and considerations; and most importantly, different scientists considering the same evidence should accept and reject the same hypotheses. The result is that scientists can agree on truths about the natural world.

Within this snapshot, exactly how science is a formal activity is open. It is worth taking a closer look at some of the prominent views. We can start with philosophy of science. Two important philosophical approaches within the study of science have been logical positivism, initially associated with the Vienna Circle, and falsificationism, associated with Karl Popper.

The Vienna Circle was a group of prominent philosophers and scientists who met in the early 1930s. The project of the Vienna Circle was to develop a philosophical understanding of science that would allow for an expansion of the scientific worldview – particularly into the social sciences and into philosophy itself. That project was immensely successful, because positivism was widely absorbed by scientists and non-scientists interested in increasing the rigor of their work. Interesting conceptual problems, however, caused positivism to become increasingly focused on issues within the philosophy of science, losing sight of the more general project with which the movement began (see Friedman 1999; Richardson 1998).

Logical positivists maintain that the meaning of a scientific theory (and anything else) is exhausted by empirical and logical considerations of what would verify or falsify it. A scientific theory, then, is a condensed summary of possible observations. This is one way in which science can be seen as a formal activity: scientific theories are built up by the logical manipulation of observations (e.g. Ayer 1952 [1936]; Carnap 1952 [1928]), and scientific progress consists in increasing the correctness, number, and range of potential observations that its theories indicate. For logical positivists, theories develop through a method that transforms individual data points into general statements. The process of creating scientific theories is therefore an inductive one. As a result, positivists tried to develop a logic of science that would make solid the inductive process of moving from individual facts to general claims. For example, scientists might be seen as creating frameworks in which it is possible to uniquely generalize from data (see Box 1.1).

Positivism has immediate problems. First, if meanings are reduced to observations, there are many "synonyms," in the form of theories or statements that look as though they should have very different meanings but do not make different predictions. For example, Copernican astronomy was initially designed to duplicate the (mostly successful) predictions of the earlier Ptolemaic system; in terms of observations, then, the two systems were roughly equivalent, but they clearly meant very different things, since one put the Earth in the center of the universe, and the other had the Earth spinning around the Sun. Second, many apparently meaningful claims are not systematically related to observations, because theories are often too abstract to be immediately cashed out in terms of data. Yet surely abstraction does not render a theory meaningless.
Box 1.1 The problem of induction

Among the asides inserted into the next few chapters are a number of versions of the "problem of induction." These are valuable background for a number of issues in Science and Technology Studies (STS). At least as stated here, these are theoretical problems that only occasionally become practical ones in scientific and technical contexts. While they could be paralyzing in principle, in practice they do not come up. One aspect of their importance, then, is in finding out how scientists and engineers contain these problems, and when they fail at that, how they deal with them.

The problem of induction arose with David Hume's general questions about evidence in the eighteenth century. Unlike classical skeptics, Hume was interested not in challenging particular patterns of argument, but in showing the fallibility of arguments from experience in general. In the sense of Hume's problem, induction extends data to cover new cases. To take a standard example, "the sun rises every 24 hours" is a claim supposedly established by induction over many instances, as each passing day has added another data point to the overwhelming evidence for it. Inductive arguments take n cases, and extend the pattern to the n+1st. But, says Hume, why should we believe this pattern? Could the n+1st case be different, no matter how large n is? It does no good to appeal to the regularity of nature, because the regularity of nature is at issue. Moreover, as Ludwig Wittgenstein (1958) and Nelson Goodman (1983 [1954]) show, nature could be perfectly regular and we would still have a problem of induction. This is because there are many possible ideas of what it would mean for the n+1st case to be the same as the first n. Sameness is not a fully defined concept.

It is intuitively obvious that the problem of induction is insoluble. It is more difficult to explain why, but Karl Popper, the political philosopher and philosopher of science, makes a straightforward case that it is. The problem is insoluble, according to him, because there is no principle of induction that is true. That is, there is no way of assuredly going from a finite number of cases to a true general statement about all the relevant cases. To see this, we need only look at examples. "The sun rises every 24 hours" is false, says Popper, as formulated and normally understood, because in Polar regions there are days in the year when the sun never rises, and days in the year when it never sets. Even cases taken as examples of straightforward and solid inductive inferences can be shown to be wrong, so why should we be at all confident of more complex cases?

Despite these problems and others, the positivist view of meaning taps into deep intuitions, and cannot be entirely dismissed. Even if one does not believe positivism's ideas about meaning, many people are attracted to the strict relationship that it posits between theories and observations. Even if theories are not mere summaries of observations, they should be absolutely supported by them. The justification we have for believing a scientific theory is based on that theory's solid connection to data. Another view, then, that is more loosely positivist, is that one can by purely logical means make predictions of observations from scientific theories, and that the best theories are ones that make all the right predictions.
This view is perhaps best articulated as falsificationism, a position developed by (Sir) Karl Popper (e.g. 1963), a philosopher who was once on the edges of the Vienna Circle. For Popper, the key task of philosophy of science is to provide a demarcation criterion, a rule that would allow a line to be drawn between science and non-science. This he finds in a simple idea: genuine scientific theories are falsifiable, making risky predictions. The scientific attitude demands that if a theory’s prediction is falsified the theory itself is to be treated as false. Pseudo-sciences, among which Popper includes Marxism and Freudianism, are insulated from criticism, able to explain and incorporate any fact. They do not make any firm predictions, but are capable of explaining, or explaining away, anything that comes up. This is a second way in which science might be seen as a formal activity. According to Popper, scientific theories are imaginative creations, and there is no method for creating them. They are free-floating, their meaning not tied to observations as for the positivists. However, there is a strict method for evaluating them. Any theory that fails to make risky predictions is ruled unscientific, and any theory that makes failed predictions is ruled false. A theory that makes good predictions is provisionally accepted – until new evidence comes along. Popper’s scientist is first and foremost skeptical, unwilling to accept anything as proven, and willing to throw away anything that runs afoul of the evidence. On this view, progress is probably best seen as the successive refinement and enlargement of theories to cover increasing data. While science may or may not reach the truth, the process of conjectures and refutations allows it to encompass increasing numbers of facts. Like the central idea of positivism, falsificationism faces some immediate problems. Scientific theories are generally fairly abstract, and few make hard predictions without adopting a whole host of extra assumptions (e.g. Putnam 1981); so on Popper’s view most scientific theories would be unscientific. Also, when theories are used to make incorrect predictions, scientists often – and quite reasonably – look for reasons to explain away the observations or predictions, rather than rejecting the theories. Nonetheless, there is something attractive about the idea that (potential) falsification is the key to solid scientific standing, and so falsificationism, like logical positivism, still has adherents today. For both positivism and falsificationism, the features of science that make it scientific are formal relations between theories and data, whether through Prehistory of Science and Technology Studies 5 Box 1.2 The Duhem–Quine thesis The Duhem–Quine thesis is the claim that a theory can never be conclusively tested in isolation: what is tested is an entire framework or a web of beliefs. This means that in principle any scientific theory can be held in the face of apparently contrary evidence. Though neither of them put the claim quite this baldly, Pierre Duhem and W.V.O. Quine, writing in the beginning and middle of the twentieth century respectively, showed us why. How should one react if some of a theory’s predictions are found to be wrong? The answer looks straightforward: the theory has been falsified, and should be abandoned. But that answer is too easy, because theories never make predictions in a vacuum. Instead, they are used, along with many other resources, to make predictions. 
When a prediction is wrong, the culprit might be the theory. However, it might also be the data that set the stage for the prediction, or additional hypotheses that were brought into play, or measuring equipment used to verify the prediction. The culprit might even lie entirely outside this constellation of resources: some unknown object or process that interferes with observations or affects the prediction. To put the matter in Quine’s terms, theories are parts of webs of belief. When a prediction is wrong, one of the beliefs no longer fits neatly into the web. To smooth things out – to maintain a consistent structure – one can adjust any number of the web’s parts. With a radical enough redesign of the web, any part of it can be maintained, and any part jettisoned. One can even abandon rules of logic if one needs to! When Newton’s predictions of the path of the moon failed to match the data he had, he did not abandon his theory of gravity, his laws of motion, or any of the calculating devices he had employed. Instead, he assumed that there was something wrong with the observations, and he fudged his data. While fudging might seem unacceptable, we can appreciate his impulse: in his view, the theory, the laws, and the mathematics were all stronger than the data! Later physicists agreed. The problem lay in the optical assumptions originally used in interpreting the data, and when those were changed Newton’s theory made excellent predictions. Does the Duhem–Quine thesis give us a problem of induction? It shows that multiple resources are used (not all explicitly) to make a prediction, and that it is impossible to isolate for blame only one of those resources when the prediction appears wrong. We might, then, see the Duhem– Quine thesis as posing a problem of deduction, not induction, because it shows that when dealing with the real world, many things can confound neat logical deductions. 6 Prehistory of Science and Technology Studies the rational construction of theoretical edifices on top of empirical data or the rational dismissal of theories on the basis of empirical data. There are analogous views about mathematics; indeed, formalist pictures of science probably depend on stereotypes of mathematics as a logical or mathematical activity. But there are other features of the popular snapshot of science. These formal relations between theories and data can be difficult to reconcile with an even more fundamental intuition about science: Whatever else it does, science progresses toward truth, and accumulates truths as it goes. We can call this intuition realism, the name that philosophers have given to the claim that many or most scientific theories are approximately true. First, progress. One cannot but be struck by the increases in precision of scientific predictions, the increases in scope of scientific knowledge, and the increases in technical ability that stem from scientific progress. Even in a field as established as astronomy, calculations of the dates and times of astronomical events continue to become more precise. Sometimes this precision stems from better data, sometimes from better understandings of the causes of those events, and sometimes from connecting different pieces of knowledge. And occasionally, the increased precision allows for new technical ability or theoretical advances. Second, truths. 
According to realist intuitions, there is no way to understand the increase in predictive power of science, and the technical ability that flows from that predictive power, except in terms of an increase of truth. That is, science can do more when its theories are better approximations of the truth, and when it has more approximately true theories. For the realist, science does not merely construct convenient theoretical descriptions of data, or merely discard falsified theories: When it constructs theories or other claims, those generally and eventually approach the truth. When it discards falsified theories, it does so in favor of theories that better approach the truth. Real progress, though, has to be built on more or less systematic methods. Otherwise, there would only be occasional gains, stemming from chance or genius. If science accumulates truths, it does so on a rational basis, not through luck. Thus, realists are generally committed to something like formal relations between data and theories. Turning from philosophy of science, and from issues of data, evidence, and truth, we see a social aspect to the standard picture of science. Scientists are distinguished by their even-handed attitude toward theories, data, and each other. Robert Merton’s functionalist view, discussed in Chapter 3, dominated discussions of the sociology of science through the 1960s. Merton argued that science served a social function, providing certified knowledge. That function structures norms of scientific behavior, those Prehistory of Science and Technology Studies 7 Box 1.3 Underdetermination Scientists choose the best account of data from among competing hypotheses. This choice can never be logically conclusive, because for every explanation there are in principle an indefinitely large number of others that are exactly empirically equivalent. Theories are underdetermined by the empirical evidence. This is easy to see through an analogy. Imagine that our data is the collection of points in the graph on the left (Figure 1.1). The hypothesis that we create to “explain” this data is some line of best fit. But what line of best fit? The graph on the right shows two competing lines that both fit the data perfectly. Clearly there are infinitely many more lines of perfect fit. We can do further testing and eliminate some, but there will always be infinitely many more. We can apply criteria like simplicity and elegance to eliminate some of them, but such criteria take us straight back to the first problem of induction: how do we know that nature is simple and elegant, and why should we assume that our ideas of simplicity and elegance are the same as nature’s? When scientists choose the best theory, then, they choose the best theory from among those that have been seriously considered. There is little reason to believe that the best theory so far considered, out of the infinite numbers of empirically adequate explanations, will be the true one. In fact, if there are an infinite number of potential explanations, we could reasonably assign to each one a probability of zero. The status of underdetermination has been hotly debated in philosophy of science. Because of the underdetermination argument, some philosophers (positivists and their intellectual descendants) argue that scientific theories should be thought of as instruments for explaining and predicting, not as true or realistic representations (e.g. van Fraassen 1980). 
Realist philosophers, however, argue that there is no way of understanding the successes of science without accepting that in at least some circumstances evaluation of the evidence leads to approximately true theories (e.g. Boyd 1984; see Box 6.2). 8 Prehistory of Science and Technology Studies norms that tend to promote the accumulation of certified knowledge. For Merton, science is a well-regulated activity, steadily adding to the store of knowledge. On Merton’s view, there is nothing particularly “scientific” about the people who do science. Rather, science’s social structure rewards behavior that, in general, promotes the growth of knowledge; in principle it also penalizes behavior that retards the growth of knowledge. A number of other thinkers hold that position, such as Popper (1963) and Michael Polanyi (1962), who both support an individualist, republican ideal of science, for its ability to progress. Common to all of these views is the idea that standards or norms are the source of science’s success and authority. For positivists, the key is that theories can be no more or less than the logical representation of data. For falsificationists, scientists are held to a standard on which they have to discard theories in the face of opposing data. For realists, good methods form the basis of scientific progress. For functionalists, the norms are the rules governing scientific behavior and attitudes. All of these standards or norms are attempts to define what it is to be scientific. They provide ideals that actual scientific episodes can live up to or not, standards to judge between good and bad science. Therefore, the view of science we have seen so far is not merely an abstraction from science, but is importantly a view of ideal science. A View of Technology Where is technology in all of this? Technology has tended to occupy a secondary role, for a simple reason: it is often thought, in both popular and academic accounts, that technology is the relatively straightforward application of science. We can imagine a linear model of innovation, from basic science through applied science to development and production. Technologists identify needs, problems, or opportunities, and creatively combine pieces of knowledge to address them. Technology combines the scientific method with a practically minded creativity. As such, the interesting questions about technology are about its effects: Does technology determine social relations? Is technology humanizing or dehumanizing? Does technology promote or inhibit freedom? Do science’s current applications in technologies serve broad public goals? These are important questions, but as they take technology as a finished product they are normally divorced from studies of the creation of particular technologies. Prehistory of Science and Technology Studies 9 If technology is applied science then it is limited by the limits of scientific knowledge. On the common view, then, science plays a central role in determining the shape of technology. There is another form of determinism that often arises in discussions of technology, though one that has been more recognized as controversial. A number of writers have argued that the state of technology is the most important cause of social structures, because technology enables most human action. People act in the context of available technology, and therefore people’s relations among themselves can only be understood in the context of technology. 
While this sort of claim is often challenged – by people who insist on the priority of the social world over the material one – it has helped to focus debate almost exclusively on the effects of technology. Lewis Mumford (1934, 1967) established an influential line of thinking about technology. According to Mumford, technology comes in two varieties. Polytechnics are “life-oriented,” integrated with broad human needs and potentials. Polytechnics produce small-scale and versatile tools, useful for pursuing many human goals. Monotechnics produce “mega machines” that can increase power dramatically, but by regimenting and dehumanizing. A modern factory can produce extraordinary material goods, but only if workers are disciplined to participate in the working of the machine. This distinction continues to be a valuable resource for analysts and critics of technology (see, e.g., Franklin 1990, Winner 1986). In his widely read essay “The Question Concerning Technology” (1977 [1954]), Martin Heidegger develops a similar position. For Heidegger, distinctively modern technology is the application of science in the service of power; this is an objectifying process. In contrast to the craft tradition that produced individualized things, modern technology creates resources, objects made to be used. From the point of view of modern technology, the world consists of resources to be turned into new resources. A technological worldview thus produces a thorough disenchantment of the world. Through all of this thinking, technology is viewed as simply applied science. For both Mumford and Heidegger modern technology is shaped by its scientific rationality. Even the pragmatist philosopher John Dewey (e.g. 1929), who argues that all rational thought is instrumental, sees science as theoretical technology (using the word in a highly abstract sense) and technology (in the ordinary sense) as applied science. Interestingly, the view that technology is applied science tends toward a form of technological determinism. For example, Jacques Ellul (1964) defines technique as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development)” (quoted in Mitcham 1994: 308). A society that has accepted modern technology finds itself on a path of increasing 10 Prehistory of Science and Technology Studies efficiency, allowing technique to enter more and more domains. The view that a formal relation between theories and data lies at the core of science informs not only our picture of science, but of technology. Concerns about technology have been the source of many of the movements critical of science. After the US use of nuclear weapons on Hiroshima and Nagasaki in World War II, some scientists and engineers who had been involved in developing the weapons began The Bulletin of the Atomic Scientists, a magazine alerting its readers about major dangers stemming from the military and industrial technologies. Starting in 1955, the Pugwash Conferences on Science and World Affairs responded to the threat of nuclear war, as the United States and the Soviet Union armed themselves with nuclear weapons. Science and the technologies to which it contributes often result in very unevenly distributed benefits, costs, and risks. Organizations like the Union of Concerned Scientists, and Science for the People, recognized this uneven distribution. 
Altogether, the different groups that made up the Radical Science Movement engaged in a critique of the idea of progress, with technological progress as their main target (Cutliffe 2000). Parallel to this in the academy, “Science, Technology and Society” became, starting in the 1970s, the label for a diverse group united by progressive goals and an interest in science and technology as problematic social institutions. For researchers on Science, Technology and Society the project of understanding the social nature of science has generally been seen as continuous with the project of promoting a socially responsible science (e.g. Ravetz 1971; Spiegel-Rösing and Price 1977; Cutliffe 2000). The key issues for Science, Technology and Society are about reform, about promoting disinterested science, and about technologies that benefit the widest populations. How can sound technical decisions be made democratically (Laird 1993)? Can and should innovation be democratically controlled (Sclove 1995)? To what extent, and how, can technologies be treated as political entities (Winner 1986)? Given that researchers, knowledge, and tools flow back and forth between academia and industry, how can we safeguard pure science (Dickson 1988; Slaughter and Leslie 1997)? This is the other “STS,” which has played a major role in Science and Technology Studies, the former being both an antecedent of and now a part of the latter. A Preview of Science and Technology Studies Science and Technology Studies (STS) starts from an assumption that science and technology are thoroughly social activities. They are social in that Prehistory of Science and Technology Studies 11 scientists and engineers are always members of communities, trained into the practices of those communities and necessarily working within them. These communities set standards for inquiry and evaluate knowledge claims. There is no abstract and logical scientific method apart from evolving community norms. In addition, science and technology are arenas in which rhetorical work is crucial, because scientists and engineers are always in the position of having to convince their peers and others of the value of their favorite ideas and plans – they are constantly engaged in struggles to gain resources and to promote their views. The actors in science and technology are also not mere logical operators, but instead have investments in skills, prestige, knowledge, and specific theories and practices. Even conflicts in a wider society may be mirrored by and connected to conflicts within science and technology; for example, splits along gender, race, class, and national lines can occur both within science and in the relations between scientists and non-scientists. STS takes a variety of anti-essentialist positions with respect to science and technology. Neither science nor technology is a natural kind, having simple properties that define it once and for all. The sources of knowledge and artifacts are complex and various: there is no privileged scientific method that can translate nature into knowledge, and no technological method that can translate knowledge into artifacts. In addition, the interpretations of knowledge and artifacts are complex and various: claims, theories, facts, and objects may have very different meanings to different audiences. For STS, then, science and technology are active processes, and should be studied as such. The field investigates how scientific knowledge and technological artifacts are constructed. 
Knowledge and artifacts are human products, and marked by the circumstances of their production. In their most crude forms, claims about the social construction of knowledge leave no role for the material world to play in the making of knowledge about it. Almost all work in STS is more subtle than that, exploring instead the ways in which the material world is used by researchers in the production of knowledge. STS pays attention to the ways in which scientists and engineers attempt to construct stable structures and networks, often drawing together into one account the variety of resources used in making those structures and networks. So a central premise of STS is that scientists and engineers use the material world in their work; it is not merely translated into knowledge and objects by a mechanical process. Clearly, STS tends to reject many of the elements of the common view of science. How and in what respects are the topics of the rest of this book. 2 The Kuhnian Revolution Thomas Kuhn’s The Structure of Scientific Revolutions (1970, first published in 1962) challenged the dominant popular and philosophical pictures of the history of science. Rejecting the formalist view with its normative stance, Kuhn focused on the activities of and around scientific research: in his work science is merely what scientists do. Rejecting steady progress, he argued that there have been periods of normal science punctuated by revolutions. Kuhn’s innovations were in part an ingenious reworking of portions of the standard pictures of science, informed by rationalist emphases on the power of ideas, by positivist views on the nature and meaning of theories, and by Ludwig Wittgenstein’s ideas about forms of life and about perception. The result was novel, and had an enormous impact. One of the targets of The Structure of Scientific Revolutions is what is known (since Butterfield 1931) as “Whig history,” history that attempts to construct the past as a series of steps toward (and occasionally away from) present views. Especially in the history of science there is a temptation to see the past through the lens of the present, to see moves in the direction of what we now believe to be the truth as more rational, more natural, and less needing of causal explanation than opposition to what we now believe. But since events must follow their causes, a sequence of events in the history of science cannot be explained teleologically, simply by the fact that they represent progress. Whig history is one of the common buttresses of too-simple progressivism in the history of science, and its removal makes room for explanations that include more irregular changes. According to Kuhn, normal science is the science done when members of a field share a recognition of key past achievements in their field, beliefs about which theories are right, an understanding of the important problems of the field, and methods for solving those problems. In Kuhn’s terminology, scientists doing normal science share a paradigm. The term, originally referring to a grammatical model or pattern, draws particular attention to The Kuhnian Revolution 13 Box 2.1 The modernity of science Many commentators on science have felt that it is a particularly modern institution. By this they generally mean that it is exceptionally rational, or exceptionally free of local contexts. While science’s exceptionality in either of these senses is contentious, there is a straightforward sense in which science is, and always has been, modern. 
As Derek de Solla Price (1986 [1963]) has pointed out, science has grown rapidly over the past three hundred years. In fact, by any of a number of indicators, science’s growth has been steadily exponential. Science’s share of the US gross national product has doubled every 20 years. The cumulative number of scientific journals founded has doubled every 15 years, as has the membership in scientific institutes, and the number of people with scientific or technical degrees. The numbers of articles in many sub-fields have doubled every 10 years. These patterns cannot continue indefinitely – and in fact have not continued since Price did his analysis. A feature of this extremely rapid growth is that between 80 and 90 percent of all the scientists who have ever lived are alive now. For a senior scientist, between 80 and 90 percent of all the scientific articles ever written were written during his or her lifetime. For working scientists the distant past of their fields is almost entirely irrelevant to their current research, because the past is buried under masses of more recent accomplishments. Citation patterns show, as one would expect, that older research is considered less relevant than more recent research, perhaps having been superseded or simply left aside. For Price, a “research front” in a field at some time can be represented by the network of articles that are frequently cited. The front continually picks up new articles and drops old ones, as it establishes new problems, techniques, and solutions. Whether or not there are paradigms as Kuhn sees them, science pays most attention to current work, and little to its past. Science is modern in the sense of having a present-centered outlook, leaving its past to historians. Rapid growth also gives science the impression of youth. At any time, a disproportionate number of scientists are young, having recently entered their fields. This creates the impression that science is for the young, even though individual scientists may make as many contributions in middle age as in youth (Wray 2003). 14 The Kuhnian Revolution a scientific achievement that serves as an example for others to follow. Kuhn also assumes that such achievements provide theoretical and methodological tools for further research. Once they were established, Newton’s mechanics, Lavoisier’s chemistry, and Mendel’s genetics each structured research in their respective fields, providing theoretical frameworks for and models of successful research. Although it is tempting to see it as a period of stasis, normal science is better viewed as a period in which research is well structured. The theoretical side of a paradigm serves as a worldview, providing categories and frameworks into which to slot phenomena. The practical side of a paradigm serves as a form of life, providing patterns of behavior or frameworks for action. For example, Lavoisier’s ideas about elements and the conservation of mass formed frameworks within which later chemists generated further ideas. The importance he attached to measurement instruments, and the balance in particular, shaped the work practices of chemistry. Within paradigms research goes on, often with tremendous creativity – though always embedded in firm conceptual and social backdrops. Kuhn talks of normal science as puzzle-solving, because problems are to be solved within the terms of the paradigm: failure to solve a problem usually reflects badly on the researcher, rather than on the theories or methods of the paradigm. 
With respect to a paradigm, an unsolved problem is simply an anomaly, fodder for future researchers. In periods of normal science the paradigm is not open to serious question. This is because the natural sciences, on Kuhn’s view, are particularly successful at socializing practitioners. Science students are taught from textbooks that present standardized views of fields and their histories; they have lengthy periods of training and apprenticeship; and during their training they are generally asked to solve well-understood and well-structured problems, often with well-known answers. Nothing good lasts forever, and that includes normal science. Because paradigms can only ever be partial representations and partial ways of dealing with a subject matter, anomalies accumulate, and may eventually start to take on the character of real problems, rather than mere puzzles. Real problems cause discomfort and unease with the terms of the paradigm, and this allows scientists to consider changes and alternatives to the framework; Kuhn terms this a period of crisis. If an alternative is created that solves some of the central unsolved problems, then some scientists, particularly younger scientists who have not yet been fully indoctrinated into the beliefs and practices or way of life of the older paradigm, will adopt the alternative. Eventually, as older and conservative scientists become marginalized, a robust alternative may become a paradigm itself, structuring a new period of normal science. The Kuhnian Revolution 15 Box 2.2 Foundationalism Foundationalism is the thesis that knowledge can be traced back to firm foundations. Typically those foundations are seen as a combination of sensory impressions and rational principles, which then support an edifice of higher-order beliefs. The central metaphor of foundationalism, of a building firmly planted in the ground, is an attractive one. If we ask why we hold some belief, the reasons we give come in the form of another set of beliefs. We can continue asking why we hold these beliefs, and so on. Like bricks, each belief is supported by more beneath it (there is a problem here of the nature of the mortar that holds the bricks together, but we will ignore that). Clearly, the wall of bricks cannot continue downward forever; we do not support our knowledge with an infinite chain of beliefs. But what lies at the foundation? The most plausible candidates for empirical foundations are sense experiences. But how can these ever be combined to support the complex generalizations that form our knowledge? We might think of sense experiences, and especially their simplest components, as like individual data points. Here we have the earlier problems of induction all over again: as we have seen, a finite collection of data points cannot determine which generalizations to believe. Worse, even beliefs about sense impressions are not perfectly secure. Much of the discussion around Kuhn’s The Structure of Scientific Revolutions (1970 [1962] ) has focused on his claim that scientific revolutions change what scientists observe (Box 2.3). Even if Kuhn’s emphasis is wrong, it is clear that we often doubt what we see or hear, and reinterpret it in terms of what we know. The problem becomes more obvious, as the discussion of the Duhem–Quine thesis (Box 1.2) shows, if we imagine the foundations to be already-ordered collections of sense impressions. On the one hand, then, we cannot locate plausible foundations for the many complex generalizations that form our knowledge. 
On the other hand, nothing that might count as a foundation is perfectly secure. We are best off to abandon, then, the metaphor of solid foundations on which our knowledge sits. According to Kuhn, it is in periods of normal science that we can most easily talk about progress, because scientists have little difficulty recognizing each other’s achievements. Revolutions, however, are not progressive, because they both build and destroy. Some or all of the research structured by the 16 The Kuhnian Revolution pre-revolutionary paradigm will fail to make sense under the new regime; in fact Kuhn even claims that theories belonging to different paradigms are incommensurable – lacking a common measure – because people working in different paradigms see the world differently, and because the meanings of theoretical terms change with revolutions (a view derived in part from positivist notions of meaning). The non-progressiveness of revolutions and the incommensurability of paradigms are two closely related features of the Kuhnian account that have caused many commentators the most difficulty. If Kuhn is right, science does not straightforwardly accumulate knowledge, but instead moves from one more or less adequate paradigm to another. This is the most radical implication found in The Structure of Scientific Revolutions: Science does not track the truth, but creates different partial views that can be considered to contain truth only by people who hold those views! Kuhn’s claim that theories within paradigms are incommensurable has a number of different roots. One of those roots lies in the positivist picture of meaning, on which the meanings of theoretical terms are related to observations they imply. Kuhn adopts the idea that the meanings of theoretical terms depend upon the constellation of claims in which they are embedded. A change of paradigms should result in widespread changes in the meanings of key terms. If this is true, then none of the key terms from one paradigm would map neatly onto those of another, preventing a common measure, or even full communication. Secondly, in The Structure of Scientific Revolutions, Kuhn takes the notion of indoctrination quite seriously, going so far as to claim that paradigms even shape observations. People working within different paradigms see things differently. Borrowing from the work of N. R. Hanson (1958), Kuhn argues there is no such thing, at least in normal circumstances, as raw observation. Instead, observation comes interpreted: we do not see dots and lines in our visual fields, but instead see more or less recognizable objects and patterns. Thus observation is guided by concepts and ideas. This claim has become known as the theory-dependence of observation. The theorydependence of observation is easily linked to Kuhn’s historical picture, because during revolutions people stop seeing one way, and start seeing another way, guided by the new paradigm. Finally, one of the roots of Kuhn’s claims about incommensurability is his experience as an historian that it is difficult to make sense of past scientists’ problems, concepts, and methods. Past research can be opaque, and aspects of it can seem bizarre. 
It might even be said that if people find it too easy to understand very old research in present terms they are probably The Kuhnian Revolution 17 doing some interpretive violence to that research – Isaac Newton’s physics looks strikingly modern when rewritten for today’s textbooks, but looks much less so in its originally published form, and even less so when the connections between it and Newton’s religious and alchemical research are drawn (e.g. Dobbs and Jacob 1995). Kuhn says that “In a sense that I am unable to explicate further, the proponents of competing paradigms practice their trades in different worlds” (1970 [1962]: 150). The case for semantic incommensurability has attracted a considerable amount of attention, mostly negative. Meanings of terms do change, but they probably do not change so much and so systematically that claims in which they are used cannot typically be compared. Most of the philosophers, linguists, and others who have studied this issue have come to the conclusion that claims for semantic incommensurability cannot be sustained, or even that it is impossible (Davidson 1974) to make sense of such radical change in meaning (see Bird 2000 for an overview). This leaves the historical justification for incommensurability. That problems, concepts, and methods change is uncontroversial. But the difficulties that these create for interpreting past episodes in science can be overcome – the very fact that historical research can challenge present-centered interpretations shows the limits of incommensurability. Claims of radical incommensurability appear to fail. In fact, Kuhn quickly distanced himself from the strongest readings of his claims. Already by 1965 he insisted that he meant by “incommensurability” only “incomplete communication” or “difficulty of translation,” sometimes leading to “communication breakdown” (Kuhn 1970a). Still, on these more modest readings incommensurability is an important phenomenon: even when dealing with the same subject matter, scientists (among others) can fail to communicate. If there is no radical incommensurability, then there is no radical division between paradigms, either. Paradigms must be linked by enough continuity of concepts and practices to allow communication. This may even be a methodological or theoretical point: complete ruptures in ideas or practices are inexplicable (Barnes 1982). When historians want to explain an innovation, they do so in terms of a reworking of available resources. Every new idea, practice, and object has its sources; to assume otherwise is to invoke something akin to magic. Thus many historians of science have challenged Kuhn’s paradigms by showing the continuity from one putative paradigm to the next. For example, instruments, theories, and experiments change at different times. In a detailed study of particle detectors in physics, Peter Galison (1997) shows that new detectors are initially used for the same types of experiments 18 The Kuhnian Revolution and observations as their immediate predecessors had been, and fit into the same theoretical contexts. Similarly, when theories change, there is no immediate change in either experiments or instruments. Discontinuity in one realm, then, is at least generally bounded by continuity in others. Science gains strength, an ad hoc unity, from the fact that its key components rarely change together. 
Science maintains stability through change by being disunified, like a thread as described by Wittgenstein (1958): “the strength of the thread does not reside in the fact that some one fibre runs through its whole length, but in the overlapping of many fibres.” If this is right then the image of complete breaks between periods is misleading. Box 2.3 The theory-dependence of observation Do people’s beliefs shape their observations? Psychologists have long studied this question, showing how people’s interpretations of images are affected by what they expect those images to show. Hanson and Kuhn took the psychological results to be important for understanding how science works. Scientific observations, they claim, are theory-dependent. For the most part, philosophers, psychologists, and cognitive scientists agree that observations can be shaped by what people believe. There are substantial disagreements, though, about how important this is for understanding science. For example, a prominent debate about visual illusions and the extent to which the background beliefs that make them illusions are plastic (e.g. Churchland 1988; Fodor 1988) has been sidelined by a broader interpretation of “observation.” Scientific observation has been and is rarely equivalent to brute perception, experienced by an isolated individual (Daston 2008). Much scientific data is collected by machine, and then is organized by scientists to display phenomena publicly (Bogen and Woodward 1992). If that organization amounts to observation, then it is straightforward that observation is theory-dependent. Theory and practice dependence is broader even than that: scientists attend to objects and processes that background beliefs suggest are worth looking at, they design experiments around theoretically inspired questions, they remember relevance and communicate relevant information, where relevance depends on established practices and shared theoretical views (Brewer and Lambert 2001). The Kuhnian Revolution 19 Incommensurability: Communicating Among Social Worlds Claims about the incommensurability of scientific paradigms raise general questions about the extent to which people across boundaries can communicate. In some sense it is trivial that disciplines (or smaller units, like specialties) are incommensurable. The work done by a molecular biologist is not obviously interesting or comprehensible to an evolutionary ecologist or a neuropathologist, although with some translation it can sometimes become so. The meaning of terms, ideas, and actions is connected to the cultures and practices from which they stem. Disciplines are “epistemic cultures” that may have completely different orientations to their objects, social units of knowledge production, and patterns of interaction (Knorr Cetina 1999). However, people from different areas interact, and as a result science gains a degree of unity. We might ask, then, how interactions are made to work. Simplified languages allow parties to trade goods and services without concern for the integrity of local cultures and practices. A trading zone (Galison 1997) is an area in which scientific and/or technical practices can fruitfully interact via these simplified languages or pidgins, without requiring full assimilation. Trading zones can develop at the contact points of specialties, around the transfer of valuable goods from one to another. 
In trading zones, collaborations can be successful even if the cultures and practices that are brought together do not agree on problems or definitions.

The trading zone concept is flexible, perhaps overly so. We might look at almost any communication as taking place in a trading zone and demanding some pidgin-like language. For example, Richard Feynman's diagrams of particle interactions, which later became known as Feynman diagrams, were successful in part because they were simple and could be interpreted in various ways (Kaiser 2005). They were widely spread during the 1950s by visiting postdoctoral fellows and researchers. But different schools, working with different theoretical frameworks, picked them up, adapted them, and developed local styles of using them. Despite their variety, they remained important ways of communicating among physicists, and also tools that were productive of theoretical problems and insights. It would seem to stretch the "trading zone" concept to say that Feynman diagrams were parts of pidgins needed for theoretical physicists to talk to each other, yet that is what they look like.

A different, but equally flexible, concept for understanding communication across barriers is the idea of boundary objects (Star and Griesemer 1989). In a historical case study of interactions in Berkeley's Museum of Vertebrate Zoology, Susan Leigh Star and James Griesemer focus on objects, rather than languages. The different social worlds of amateur collectors, professional scientists, philanthropists, and administrators had very different visions of the museum, its goals, and the important work to be done. These differences resulted in incommensurabilities among groups. However, objects can form bridges across boundaries, if they can serve as a focus of attention in different social worlds, and are robust enough to maintain their identities in those different worlds. Standardized records were among the key boundary objects that held together these different social worlds. Records of the specimens had different meanings for the different groups of actors, but each group could contribute to and use those records. The practices of each group could continue intact, but the groups interacted via record keeping. Boundary objects, then, allow for a certain amount of coordination of actions without large measures of translation.

The boundary object concept has been picked up and used in an enormous number of ways. Even within the article in which they introduce the concept, Star and Griesemer present a number of different examples of boundary objects, including the zoology museum itself, the different animal species in the museum's scope, the state of California, and standardized records of specimens. The concept has been applied very widely in STS. To take just a few examples: Sketches and drawings can allow engineers in different parts of design and production processes to communicate across boundaries (Henderson 1991). Parameterizations of climate models, the filling in of variables to bring those models in line with the world's weather, connect field meteorologists and simulation modelers (Sundberg 2007). In the early twentieth century breeds of rabbits and poultry connected fanciers to geneticists and commercial breeders (Marie 2008). Why are there so many different boundary objects?
The number and variety suggest that, despite some incommensurability across social boundaries, there is considerable coordination and probably even some level of communication. For example, in multidisciplinary research a considerable amount of communication is achieved via straightforward translation (Duncker 2001). Researchers come to understand what their colleagues in other disciplines know, and translate what they have to say into a language that those colleagues can understand. Simultaneously, they listen to what other people have to say and read what other people write, attuned to differences in knowledge, assumptions, and focus. Concepts like pidgins, trading zones, and boundary objects, while they might be useful in particular situations, may overstate difficulties in communication. Incommensurability as it is found in many practices may not always be a very serious barrier.

The divisions of the sciences result in disunity (see Dupré 1993; Galison and Stump 1996). A disunified science requires communication, perhaps in trading zones or direct translation, or coordination, perhaps via boundary objects, so that its many fibers are in fact twisted around each other. Even while disunified, though, science hangs together and has some stability. How it does so remains an issue that merits investigation.

Conclusion: Some Impacts

The Structure of Scientific Revolutions had an immediate impact. The word "paradigm," referring to a way things are done or seen, came into common usage largely because of Kuhn. Even from the short description above it is clear that the book represents a challenge to earlier important beliefs about science. Against the views of science with which we started, The Structure of Scientific Revolutions argues that scientific communities are importantly organized around ideas and practices, not around ideals of behavior. And, they are organized from the bottom up, not, as functionalism would have it, to serve an overarching goal. Against positivism, Kuhn argued that changes in theories are not driven by data but by changes of vision. In fact, if worldviews are essentially theories then data is subordinate to theory, rather than the other way around. Against falsificationism, Kuhn argued that anomalies are typically set aside, that only during revolutions are they used as a justification to reject a theory. And against all of these he argued that on the largest scales the history of science should not be told as a story of uninterrupted progress, but only change.

Because Kuhn's version of science violated almost everybody's ideas of the rationality and progress of science, The Structure of Scientific Revolutions was sometimes read as claiming that science is fundamentally irrational, or describing science as "mob rule." In retrospect it is difficult to find much irrationalism there, and possible to see the book as somewhat conservative – perhaps not only intellectually conservative but politically conservative (Fuller 2000). More important, perhaps, is the widespread perception that by examining history Kuhn firmly refuted the standard view of science. Whether or not that is true, Kuhn started people thinking about science in very different terms. The success of the book created a space for thinking about the practices of science in local terms, rather than in terms of their contribution to progress, or their exemplification of ideals.
Though few of Kuhn’s specific ideas have survived fully intact, The Structure of Scientific Revolutions has profoundly affected subsequent thinking in the study of science and technology. 3 Questioning Functionalism in the Sociology of Science Structural-functionalism Robert Merton’s statement, “The institutional goal of science is the extension of certified knowledge” (1973: 270), is the supporting idea behind his thinking on science. His structural-functionalist view assumes that society as a whole can be analyzed in terms of overarching institutions such as religion, government, and science. Each institution, when working well, serves a necessary function, contributing to the stability and flourishing of society. To work well, these institutions must have the appropriate structure. Merton treats science, therefore, as a roughly unified and singular institution, the function of which is to provide certified knowledge. The work of the sociologist is primarily to study how its social structure does and does not support its function. Merton is the most prominent of functionalist sociologists of science, and so his work is the main focus of this chapter, to the neglect of such sociologists as Joseph Ben-David (1991) and John Ziman (1984), and sociologically minded philosophers like David Hull (1988). The key to Merton’s theory of the social structure of science lies in the ethos of science, the norms of behavior that guide appropriate scientific practice. Merton’s norms are institutional imperatives, in that rewards are given to community members who follow them, and sanctions are applied to those who violate them. Most important in this ethos are the four norms first described in 1942: universalism, communism, disinterestedness, and organized skepticism. Universalism requires that the criteria used to evaluate a claim not depend upon the identity of the person making the claim: “race, nationality, religion, class, and personal qualities are irrelevant” (Merton 1973: 270). This should stem from the supposed impersonality of scientific laws; they are either true or false, regardless of their proponents and their provenance. How does the norm of universalism apply in practice? We might look to 24 Questioning Functionalism science’s many peer review systems. For example, most scientific journals accept articles for publication based on evaluations by experts. And in most fields, those experts are not told the identity of the authors whose articles they are reviewing. Although not being told the author’s name does not guarantee his or her anonymity – because in many fields a well-connected reviewer can guess the identity of an author from the content of the article – it supports universalism nonetheless, both in practice and as an ideal. Communism states that scientific knowledge – the central product of science – is commonly owned. Originators of ideas can claim recognition for their creativity, but cannot dictate how or by whom those ideas are to be used. Results should be publicized, so that they can be used as widely as possible. This serves the ends of science, because it allows researchers access to many more findings than they could hope to create on their own. According to Merton, communism not only promotes the goals of science but reflects the fact that science is a social activity, or that scientific achievements are collectively produced. Even scientific discoveries by isolated individuals arise as a result of much earlier research. 
Disinterestedness is a form of integrity, demanding that scientists disengage their interests from their actions and judgments. They are expected to report results fully, no matter what theory those results support. Disinterestedness should rule out fraud, such as reporting fabricated data, because fraudulent behavior typically represents the intrusion of interests. And indeed, Merton believes that fraud is rare in science.

Organized skepticism is the tendency for the community to disbelieve new ideas until they have been well established. Organized skepticism operates at two levels. New claims are often greeted by arrays of public challenges. For example, even an audience favorably disposed to its claims may fiercely question a presentation at a conference. In addition, scientists may privately reserve judgment on new claims, employing an internalized version of the norm.

In addition to these "moral" norms there are "cognitive" norms concerning rules of evidence, the structure of theories, and so on. Because Merton drew a firm distinction between social and technical domains, cognitive norms are not a matter for his sociology of science to investigate. In general, Merton's sociology does not make substantial claims about the intellectual content of science.

Institutional norms work in combination with rewards and sanctions, in contexts in which community members are socialized to respond to those rewards and sanctions. Rewards in the scientific community are almost entirely honorific. As Merton identifies them, the highest rewards come via eponymy: Darwinian biology, the Copernican system, Planck's constant, and Halley's comet all recognize enormous achievements. Other forms of honorific reward are prizes and historical recognition; the most ordinary form of scientific reward is citation of one's work by others, seen as an indication of influence. Sanctions are similarly applied in terms of recognition, as the reputations of scientists who display deviant behavior suffer.

In the 1970s, the Mertonian picture of the ethos of science came under attack, on a variety of instructive grounds. Although there were many criticisms, probably the three most important questions asked were:
(1) Is the actual conduct of science governed by Mertonian norms? To be effective, norms of behavior must become part of the culture and institutions of science. In addition, there must be sanctions that can be applied when scientists deviate from the norms; but there is little evidence of strong sanctions for violation of these norms.
(2) Are these norms too flexible or vague to perform any analytic or scientific work?
(3) Does it make sense to talk of an institutional or overarching goal of something as complex, divided, and evolving as science?
These and other questions created a serious challenge to that view, a challenge that helped to push STS toward more local, action-oriented views.

Ethos and Ethics

Social norms establish not only an ethos of science but an ethics of science. Violations of norms are, importantly, ethical lapses. This aspect of Merton's picture has given rise to some interesting attempts to understand and define scientific misconduct, a topic of increasing public interest (Guston 1999a). On the structural-functionalist view, the public nature of science should mean that deviant behavior is rare. At the same time, deviance is to be expected, as a result of conflicts among norms.
In particular, science’s reward system is the payment for contributions to communally owned results. However, the pressures of recognition can often create pressures to violate other norms. A disinterested attitude toward one’s own data, for example, may go out the window when recognition is importantly at stake, and this may create pressure to fudge results. Fraud and other forms of scientific misconduct occur because of the structures that advance knowledge, not despite them. Questions of misconduct often run into a problem of differentiating between fraud and error, both of which can stand in the way of progress. The structural-functionalist view explains why fraud is reprehensible, while error is merely undesirable. The difference between them is the difference between the violation of social and cognitive norms (Zuckerman 1977, 1984). Such models continue to shape discussions of scientific misconduct. The US National Academy of Sciences’ primer on research ethics, On Being a 26 Questioning Functionalism Box 3.1 Is fraud common? There are enormous pressures on scientists to perform, and to establish careers. Yet there are difficulties in replicating experiments, there is an elite system that allows some researchers to be relatively immune from scrutiny, and there is an unwillingness of the scientific community to level accusations of outright fraud (Broad and Wade 1982). It is difficult, then, to know just how common fraud is, but there is reason to suspect that it might be common. Because of its substantial role in funding scientific research, the US Congress has on several occasions held hearings to address fraud. Prominently, Congressman Albert Gore, Jr. held hearings in 1981 in response to a rash of allegations of fraud at prominent institutions, and Congressman John Dingell held a series of hearings, starting in 1988, that featured “the Baltimore case” (Kevles 1998). David Baltimore was a Nobel Prize-winning biologist who became entangled in accusations against one of his co-authors on a 1986 publication. The events became “the Baltimore case” because he was the most prominent of the scientific actors, and because he persistently and sometimes pugnaciously defended the accused researcher, Thereza Imanishi-Kari. In 1985, Imanishi-Kari was an immunologist at the Massachusetts Institute of Technology (MIT), under pressure to publish enough research to merit tenure. She collaborated with Baltimore and four other researchers on an experiment on DNA rearrangement, the results of which were published. A postdoctoral researcher in Imanishi-Kari’s laboratory, Margot O’Toole, was assigned some follow-up research, but was unable to repeat the original results. O’Toole became convinced that the published data was not the same as the data contained in the laboratory notebooks. After a falling-out between Imanishi-Kari and O’Toole and a graduate student, Charles Maplethorpe, questions about fraud started working their way up through MIT. Settled in Imanishi-Kari’s favor at the university, Maplethorpe alerted National Institutes of Health scientists Ned Feder and Walter W. Stewart to the controversy. Because of an earlier case, Feder and Stewart had become magnets for, and were on their way to becoming advocates of, the investigation of scientific fraud. They brought the case to the attention of Congressman Dingell. In the US Congress the case became a much larger confrontation. 
Baltimore defended Imanishi-Kari and attacked the inquiry as a witch-hunt; a number of his scientific colleagues thought his tack unwise, because of Questioning Functionalism 27 the publicity he generated, and because he was increasingly seen as an interested party. Dingell found in Baltimore an opponent who was important enough to be worth taking down, and in O’Toole a convincing witness. Over the course of the hearings, Baltimore’s conduct was made to look unprofessional, to the extent that he resigned his position as President of Rockefeller University. However, Imanishi-Kari was later exonerated, and Baltimore was seen as having taken a courageous stand (Kevles 1998). This raises questions about the nature of any accusation of fraud. At the same time, though, the case reinforces suspicions about the possible commonness of scientific fraud: the pressure to publish was substantial; the experiments were difficult to repeat; whether there had been fraud, or even substantial error, was open to interpretation; and the local scientific investigation was quick, though perhaps correct, to find no evidence of fraud. Scientist (1995), is a widely circulated booklet containing discussions of different scenarios and principles. Ethical norms, more concrete and nuanced than Merton’s, are presented as being in the service of the advancement of knowledge. That is, the resolution of most ethical problems in science typically turns on understanding how to best maintain the scientific enterprise. Functionalism about science, then, can translate more or less directly into ethical advice. Is the Conduct of Science Governed by Mertonian Norms? Are the norms of science constant through history and across science? A cursory look at different broad periods suggests that they are not constant, and consideration of different roles that scientists can play shows that norms can be interpreted differently by different actors (e.g. Zabusky and Barley 1997). Are they distinctive to science? Universalism, disinterestedness, and organized skepticism are at some level professed norms for many activities in many societies, and may not be statistically more common in science than elsewhere. Disinterestedness, for example, is a version of a norm of rationality, in that it privileges rationality over special interests, but rationality is professed nearly everywhere. People inside and outside of science claim that they generally act rationally. What evidence could show us that science is particularly rational? 28 Questioning Functionalism What about social versus cognitive norms? As we saw in the last chapter, Kuhn describes the work of normal science as governed by a paradigm, and thus by ideas specific to particular areas of research and times. If this is right, then normal science is shaped by solidarities built around key ideas, not around general behaviors. For example, Kuhn sees scientific education as authoritarian, militating against skepticism in favor of commonly held general beliefs. It seems likely that cognitive norms are more important to scientists’ work than are any general moral norms (Barnes and Dolby 1970). This point can be seen in another criticism of Mertonian norms, put forward by Michael Mulkay (1969), using the example of the furor over the work of Immanuel Velikovsky. In his 1950 book Worlds in Collision Velikovsky argued that historical catastrophes, recorded in the Bible and elsewhere, were the result of a near-collision between Earth and a planetsized object that broke off of Jupiter. 
The majority of mainstream scientists saw this as sensational pseudo-science. Mulkay uses the case to show one form of deviance from Mertonian norms in science:

In February, 1950, severe criticisms of Velikovsky's work were published in Science News Letter by experts in the fields of astronomy, geology, archaeology, anthropology, and oriental studies. None of these critics had at that time seen Worlds in Collision, which was only just going into press. Those denunciations were founded upon popularized versions published, for example, in Harper's, Reader's Digest and Collier's. The author of one of these articles, the astronomer Harlow Shapley, had earlier refused to read the manuscript of Velikovsky's book because Velikovsky's "sensational claims" violated the laws of mechanics. Clearly the "laws of mechanics" here operate as norms, departure from which cannot be tolerated. As a consequence of Velikovsky's non-conformity to these norms Shapley and others felt justified in abrogating the rules of universalism and organized skepticism. They judged the man instead of his work . . . (Mulkay 1969: 32–33)

Scientists violated Mertonian norms in the name of a higher one: claims should be consistent with well-established truths. One could argue that, even on Mertonian terms, violation of the norms in the name of truth makes sense, since those norms are supposed to represent a social structure that aids the discovery of truths. Nonetheless, this type of case shows one way in which moral norms are subservient to cognitive norms.

So far, we have seen that Merton's social norms may not be as important as cognitive norms to understanding the practice of science. But what if we looked at the practice of science and discovered that the opposites of those norms – secrecy, particularism, interestedness, and credulity – were common? Do scientific communities and their institutions sanction researchers who are, say, secretive about their work? There are, after all, obvious reasons to be secretive. If other researchers learn about one's ideas, methods, or results, they may be in a position to use that information to take the next steps in a program of research, and receive full credit for whatever comes of those steps. Given that science is highly competitive, and given that an increasing amount of science is linked to applications on which there are possible financial stakes (Chapter 17), there are strong incentives to follow through on a research program before letting other researchers know about it.

On the structural-functionalist picture, norms exist to counteract local interests such as recognition and monetary gain, so that the larger goal – the growth of knowledge – is served. If Merton is right, we should expect to see violations of norms subject to sanctions. In a study of scientists working on the Apollo moon project, Ian Mitroff (1974) shows not only that scientists do not apply sanctions, but that they often respect what he calls counter-norms, which are rough opposites of Mertonian ones. Scientists interviewed by Mitroff voiced approval of, for example, interested behavior (1974: 588): "Commitment, even extreme commitment such as bias, has a role to play in science and can serve science well." "Without commitment one wouldn't have the energy, the drive to press forward against extremely difficult odds." "The [emotionally] disinterested scientist is a myth.
Even if there were such a being, he probably wouldn't be worth much as a scientist." Mitroff's subjects identified positive value in opposites to each of Merton's norms: Scientific claims are judged in terms of who makes them. Secrecy is valued because it allows scientists to follow through on research programs without worrying about other people doing the same work. Dogmatism allows people to build on others' results without worrying about foundations.

If there are both norms and counter-norms, then the analytical framework of norms does no work. A framework of norms and counter-norms can justify anything, which means that it does not help to understand anything. Moreover, this is not just a methodological problem for theorists, but is also a problem for norm-based actions. When scientists act, norms and counter-norms can give them no guidance and cannot cause them to do anything. The reasons for or causes of actions must lie elsewhere.

Box 3.2 Wittgenstein on rules

Wittgenstein's discussion of rules and following rules has been seen as foundational to STS. Although it is complex, the central point can be seen in a short passage. Wittgenstein asks us to imagine a student who has been taught basic arithmetic. We ask this student to write down a series of numbers starting with zero, adding two each time (0, 2, 4, 6, 8, . . . ).

Now we get the pupil to continue . . . beyond 1000 – and he writes 1000, 1004, 1008, 1012. We say to him: "Look what you've done!" – He doesn't understand. We say: "You were meant to add two: look how you began the series!" – He answers: "Yes, isn't it right? I thought that was how I was meant to do it." – Or suppose he pointed to the series and said: "But I went on in the same way." – It would now be no use to say: "But can't you see . . . ?" – and repeat the old examples and explanations. – In such a case we might say, perhaps: It comes natural to this person to understand our order with our explanations as we should understand the order: "Add 2 up to 1000, 4 up to 2000, 6 up to 3000, and so on." (Wittgenstein 1958: Paragraph 185)

Of course this student can be corrected, and can be taught to apply the rule as we would – there is coercion built into such education – but there is always the possibility of future differences of opinion as to the meaning of the rule. In fact, Wittgenstein says, "no course of action could be determined by a rule, because every course of action can be made out to accord with a rule" (Paragraph 201). Rules do not contain the rules for the scope of their own applicability.

Wittgenstein's problem is an extension of Hume's problem of induction. A finite number of examples, with a finite amount of explanation, cannot constrain the next unexamined case. The problem of rule following becomes a usefully different problem because it is in the context of actions, and not just observations. There are competing interpretations of Wittgenstein's writing on this problem. Some take him as posing a skeptical problem and giving a skeptical solution: people come to agreement about the meaning of rules because of prior socialization, and continuing social pressure (Kripke 1982). Others take him as giving an anti-skeptical solution after showing the absurdity of the skeptical position: hence we need to understand rules not as formulas standing apart from their application, but as constituted by their application (Baker and Hacker 1984). Exactly the same debate has arisen within STS (Bloor 1992; Lynch 1992a, 1992b; Kusch 2004). For our purposes here it is not crucial which of these positions is right, either about the interpretation of Wittgenstein or about rules, because both sides agree that expressions of rules do not determine their applications.

Interpretations of Norms

Norms have to be interpreted. This represents a problem for the analyst, but also shows that the force of norms is limited. Let us return to Mulkay's example of the Velikovsky case. The example was originally used to show that scientists violated norms when a higher norm was at stake. Mulkay later noticed that, depending on which parts of the context one attends to, the norms can be interpreted as having been violated or not.

It could be argued that the kind of qualitative, documentary evidence used by Velikovsky had been shown time and time again to be totally unreliable as a basis for impersonal scientific analysis and that to treat this kind of pseudoscience seriously was to put the whole scientific enterprise in jeopardy. In this way scientists could argue that their response to Velikovsky was an expression of organized skepticism and an attempt to safeguard universalistic criteria of scientific adequacy. (Mulkay 1980: 112)

The problem points to a more general problem about following rules (Box 3.2): Behaviors can be interpreted as following or not following the norms. We can explain almost any scientific episode as one of adherence to Mertonian norms, or as one of the violation of those norms. Thus, in a further way, they are analytically weak. As in the case of counter-norms, the problem is not just a methodological one. If we as onlookers can interpret the actions of scientists as either in conformity to or in violation of the norms, so can the participants themselves. But that simply means that the norms do not constrain scientists. By creatively selecting contexts, any scientist can use the norms to justify almost any action. And if norms do not represent constraints, then they do no scientific work.

Norms as Resources

Recognizing that norms can be interpreted flexibly suggests that we study not how norms work, but how they are used. That is, in the course of explaining and criticizing actions, scientists invoke norms – such as the Mertonian norms, but in principle an indefinite number of others. For example, because of his refusal to accept the truth of quantum mechanics, Albert Einstein is often seen as becoming conservative as he grew older; being "conservative" clearly violates the norm of disinterestedness (Kaiser 1994). Einstein is so labeled in order to understand how the same person who revolutionized accepted notions of space and time could later reject a theory because it challenged accepted notions of causality: otherwise how could the twentieth century's epitome of the scientific genius make such a mistake? Implicit in the charge, however, is an assumption that Einstein was wrong to reject quantum mechanics, an assumption that quantum mechanics is obviously right, whatever the difficulties that some people have with it. Werner Heisenberg, one of the participants in the debate, discounted Einstein's positions by claiming that they were produced by closed-minded dogmatism and old age. If we believe Heisenberg, we can safely ignore critics of quantum mechanics.
How are norms serving as resources in this case? They are being used to help eliminate conflicting views: because Einstein's opposition to quantum mechanics violated norms of conduct, we do not have to pay much attention to his arguments. Whether a theory stands or falls depends upon the strengths of the arguments put forward for and against it (it also depends upon the theory's usefulness, upon the strengths of the alternatives, and so on). However, it is rarely simple to evaluate important and real theories, and so complex arguments are crucial to science and to scientific beliefs. Norms of behavior can play a role, if they are used to diminish the importance of some arguments and increase the importance of others. Supporters of quantum mechanics are apt to see Einstein as a conservative in his later years. Opponents of quantum mechanics are apt to see him as maintaining a youthful skepticism throughout his life (Fine 1986).

Norms are ideals, and like all ideals, they do not apply straightforwardly to concrete cases. People with different interests and different perspectives will apply norms differently. We are led, then, from seeing norms as constraining actions to seeing them as rhetorical resources. This is one of many parallel shifts of focus in STS, of which we will see more in later chapters. For the most part, these are changes from more structure-centered perspectives to more agent- or action-centered perspectives. This is not to say that there is one simple theoretical maneuver that characterizes STS, but that the field has found some shifts from structure- to action-centered perspectives to be particularly valuable.

Boundary Work

The study of "boundary work" is one approach to seeing norms as resources (Gieryn 1999). When issues of epistemic authority, the authority to make respected claims, arise, people attempt to draw boundaries. To have authority on any contentious issue requires that at least some other people do not have it. The study of boundary work is a localized, historical, or antifoundational approach to understanding authority (Gieryn 1999). For example, some people might argue that science gets its epistemic authority from its rationality, its connection to nature, or its connection to technology or policy. We can see those connections, though, as products of boundary work: Science is rational because of successful efforts to define it in terms of rationality; science is connected to nature because it has acquired authority to determine what nature is; and scientists connect their work to the benefits of technology or the urgency of political action in particular situations when they are seeking authority that depends on those connections. Yet those same connections are made carefully, to protect the authority of science, and are countered by boundary work aimed at protecting or expanding the authority of engineers and politicians (Jasanoff 1987).

Boundary work is a concept with broad applicability. Norms are not the only resources that can be used to stabilize or destabilize boundaries. Organizations can help to further goals while maintaining the integrity of established boundaries (Guston 1999b; Moore 1996). Boundary work can be routine, occurring when there are no immediate conflicts on the horizon (Kleinman and Kinchy 2003; Mellor 2003). Examples, people, methods, and qualifications are all used in the practical and never-ending work of charting boundaries. Textbooks, courses, and museum exhibits, for example, can establish maps of fields simply through the topics and examples that they represent (Gieryn 1996). In fact, little does not participate in some sort of boundary work, since every particular statement contributes to a picture of the space of allowable statements.

Box 3.3 Cyril Burt, from hero to fraud

Sir Cyril Burt (1883–1971) was one of the most eminent psychologists of the twentieth century, and knighted for his contributions to psychology and to public policy. Burt was known for his strong data and arguments supporting hereditarianism (nature) over environmentalism (nurture) about intelligence. After his death, opponents of hereditarianism pointed out that his findings were curiously consistent over the years. In 1976 in The Times, a medical journalist, Oliver Gillie, accused Burt of falsifying data, inventing studies and even co-workers. This public accusation of fraud against one of the discipline's most noted figures posed a challenge to the authority of psychology itself (Gieryn and Figert 1986).

Early on, his supporters represented Burt as occasionally sloppy, but insisted that there was no evidence of fraud. Burt's work was difficult, they argued, and it was therefore understandable that he made some mistakes. No psychologist's work would be immune from criticism. In addition, Burt was an "impish" character, explaining his invention of colleagues. These responses construed Burt's work as scientific, but science as imperfect. That is, psychologists drew boundaries that accepted minor flaws in science, and thus allowed a flawed character to be one of their own.

In addition to denying or minimizing the accusations, responses by psychologists involved charging Gillie with acting inappropriately. By publishing his accusations in a newspaper, Gillie had subjected Burt's work to a trial not by his peers. The public nature of the IQ controversy raised questions about motives: Were environmentalists trumping up or blowing up the accusations to discredit the strongest piece of evidence against them?

Psychologists insisted that there be a scientific inquiry into the matter, and endorsed the ongoing research of one of their own, Leslie Hearnshaw, who was working on a biography of Burt. That biography ended up agreeing with the accusers. However, it rescued psychology by banishing Burt, and using the idea that "the truth will out" in science to recover the discipline's authority. Hearnshaw argued that the fraud was the result of personal crisis, especially late in Burt's life, and was the result of his acting in a particularly unscientific manner. Most importantly, he argued that Burt was not a real scientist, but was rather an outsider who sometimes did good scientific work:

The gifts which made Burt an effective applied psychologist . . . militated against his scientific work. Neither by temperament nor by training was he a scientist. He was overconfident, too much in a hurry, too eager for final results, too ready to adjust and paper-over, to be a good scientist. His work often had the appearance of science, but not always the substance. (Hearnshaw, quoted in Gieryn and Figert 1986: 80)

The Place of Norms in Science?

The failings of Merton's functionalist picture of science are instructive. Merton can be seen as asking what science needs to be like, as a social activity, in order for it to best provide certified knowledge.
His four norms provide an elegant solution to that problem, and a plausible solution in that they are professed standards of scientific behavior. Nonetheless, these norms do not seem to describe the behavior of scientists, unless the framework is interpreted very flexibly. But if it is interpreted flexibly then it ceases to do real analytic or explanatory work.

Going a little deeper, critics have also challenged the idea that science is a unified institution organized around a single goal or even a set of goals. Instead, the sciences and individual scientific institutions are contested – by governments, corporations, publics, and scientists themselves. Does the idea of an overarching goal, for an entity as large and diffuse as science, even make sense? Could an overarching goal for science have any effect on the actions of individual scientists? As a result of these arguments, critics suggest that science is better understood as the combined product of scientists acting to pursue their own goals. Merton's norms, then, are ideological resources, available to scientific actors for their own purposes. They serve, combined with formalist epistemologies, as something like an "organizational myth" of science (Fuchs 1993).

Still, we can ask how ideologies like Merton's norms affect science as a whole. It may be that their repeated invocation leads to their having real effects on the shape of scientific behavior: they are used to hold scientists accountable, even if their use is flexible. We might expect, for example, that the repeated demand for universalism will lead to some types of discrimination being unacceptable – shaping the ethics of science. Along with other values, Merton's norms may contribute to what Lorraine Daston (1995) calls the "moral economy" of science. While science as a whole may not have institutional goals, combined actions of individual scientists might shape science to look as though it has goals (Hull 1988). Even though boundaries, in this case boundaries of acceptable behavior, are constructed, they can have real effects.

4 Stratification and Discrimination

An Efficient Meritocracy or an Inefficient Old Boy's Network?

Of the 55 Nobel Laureates working in the United States in 1963, a full 34 of them had studied or collaborated with ...

Explanation & Answer

View attached explanation and answer. Let me know if you have any questions.


Feminist Equity Studies and Feminist Analyses

Name
Institution
Course
Professor
Date

Feminist Equity Studies and Feminist Analyses
In relation to Science, Technology, Engineering, and Mathematics (STEM), feminist equity studies play an important role. These studies examine why women participate at lower rates in these fields (Sismondo, 2010). Women encounter barriers to accessing STEM education, which makes it harder for them to build careers and improve their status. Researchers argue that this amounts to unjust discrimination, founded on the belief that women cannot excel in these fields. Equity studies seek to identify the specific barriers that contribute to this discrimination. They also assess the effects of this situation concerning...

