Physical metaphors for the mental lexicon
M. T. Turvey and Miguel A. Moreno
University of Connecticut and Haskins Laboratories
A variety of metaphors inspired by contemporary developments and issues
in physics are identified as potentially helpful to theory and experiments
directed at the mental lexicon. The developments concern, in the main, systems regarded as complex from the perspective of established physical explanation. The issues are primarily those associated with the context dependencies of properties and functions broadly evident in natural systems at
both macroscopic and microscopic scales. Ideally, the metaphors may bring
new questions, methods, principles, and formalisms to bear on the investigation of the mental lexicon. Minimally, they should enhance appreciation for
the scientific challenges posed by the mental lexicon’s diverse structures and
functions.
1. The dictionary metaphor and substance ontology
An influential intuition about the mental lexicon, one that constrains both theory and experiments, is that it is much like a dictionary. Adopting the dictionary metaphor, one is inclined (see Elman, 2005) to view words as:
1. discrete, context free (structured) symbols
2. five separate categories of information, namely, orthographic, phonological, morphological, syntactical, semantic
3. static representations
4. objects of processing (operands)
5. building blocks
Additionally, in adopting the dictionary metaphor one is inclined to presume
a dictionary algorithm (a well-defined search procedure akin to alphabetical
order, e.g., word frequency) and a name-to-notion organization of the typical dictionary entry (that is, from the orthographic form and/or phonological
form to meaning).
Hand-in-hand with the dictionary metaphor is an intuitive orientation toward explaining how the mental lexicon works that is continuous with the 19th
century mechanical perspective on nature: explanation of everything must be by
means of unalterable objects and the simple forces (attraction, repulsion) acting
between them (Cassirer, 1950; Einstein & Infeld, 1938/1966). The unalterable
objects of the mental lexicon are the static representations. The simple forces are
the processes that activate, inhibit, and combine the static representations.
Historically, the broad application of the mechanical view rested on the
interpretation of ‘unalterable objects’. In respect to a clock, the ‘unalterable
objects’ would be the clock’s components. As mechanical units, they are unchanged by the causal processes in which they participate. Moreover, they
persist when those processes stop and when the system to which they belong,
namely the clock, is taken apart. In the broad application of the 19th century
mechanistic view that sought to embrace light, heat, electricity and magnetism, the ‘unalterable object’ qua mechanical unit was a substance. The notion
of substance was roughly something that makes a thing what it is, that gives a
thing its essential nature. In the inventory of word properties inspired by the
dictionary metaphor, the unalterable substances are the unchanging word representations with the essential nature of each being its specific set of categories
of information.
In many ways substance ontology is a prominent feature of contemporary efforts to ground the mental lexicon in the central nervous system. Inferences are
often drawn within cognitive neuroscience that different classes of words, and
different operations performable upon words, written and spoken, are tied to
different localized regions of the brain, that is, different material components.
In respect to such contemporary efforts it is significant to note that dramatic
changes in physical theory around the cusp of the 20th century were primed
by the realization that the mechanical perspective could be implemented consistently only by the continual invention of novel substances — new kinds of
material agents with their own special material properties (Einstein & Infeld,
1938/1966).
The response to substance ontology’s unwelcome proliferation of causal
agents has taken several forms within physics that provide us with several
different ways, albeit rough and approximate, for conceptualizing the mental
lexicon. The various opportunities for new metaphors that we present, though
different, are not fully distinct. Their shared foci are how to address (a) systems
that are deservedly labeled complex when viewed from the vantage point of
established physical explanation, and (b) the pervasive context dependence of
properties and functions of natural systems at all length and time scales.
2. Lexicon as a self-organizing, self-regulating system
The metaphor of the lexicon as a natural process of self-organization and self-regulation invites thinking about the lexicon in terms of macroscopic physical
systems, specifically those that typically fall under the label excitable media.
Characteristic of such systems is a kind of universality or multiple realizability.
Very different microscopic components manifest the same macroscopic patterns. In the example that follows the components are molecules. Without loss
of generality, the components could be amoeba (Goodwin, 1994).
An unexpected observation made in chemistry in the middle of the 20th
century provides a useful instance. When certain chemicals are mixed in a Petri dish (a very shallow, flat dish) and left alone, spatial-temporal patterns of
exquisite beauty emerge spontaneously. A rich mixture of organic and inorganic (that is, carbon-less) chemicals produces concentric rings that propagate
outwards from centers that emerge spontaneously throughout the Petri dish.
The emergent pattern is depicted in Figure 1. The concentric rings are formed
at regular intervals, with an encounter between any two or more rings leading
to mutual annihilation (unlike the ripples on a pond originating from different sites that pass through each other). The repetitive pattern-formation, pattern-degradation process just described is known as the Belousov-Zhabotinsky
(B-Z) reaction (e.g., Winfree, 1987). The originator (Belousov) could not publish this discovery simply because his contemporaries in the middle of the 20th
century deemed it to be impossible. It did not fit the fundamental assumption
of matter as inert — that matter cannot self-cause, cannot self-complex. More
specifically, it went against the apparently obvious fact that diffusion (random
motion of particles) combined with any sequence of reactions could not lead to
anything other than homogeneity (Shinbrot & Muzzio, 2001).
A→X
B+X→Y+Q
X→P
Y + 2X → 3X
Figure 1. The Belousov-Zhabotinsky (B-Z) reaction.
2.1 Autocatalysis
There are two key processes in the B-Z reaction. The first is a process by which
a chemical stimulates its own production — a positive feedback effect or autocatalysis. The second process is that the autocatalysis of a substance is balanced
by the rate at which an inhibiting chemical is produced and the mixture settles
on a steady state. But the complexity of the B-Z recipe is such that production
and inhibition of the chemical in the first process cannot balance and the mixture oscillates between their competing effects.
Some aspects of the B-Z reaction are approximated by the equations identified in Figure 1 (so-called Brusselator equations) where A and B are reactants,
P and Q are products, and X and Y are intermediaries. The equations capture
the fact that when put into a beaker and stirred constantly, the liquid turns
spontaneously into one color (say, red) and then into another color (say, blue)
and does so in a continuous cyclic fashion. The autocatalytic process is expressed by the bottom equation of the four shown in Figure 1. It is an intermediary process that takes some amount of X and makes more of it. This very general process of autocatalysis drives the interdependence of the parts and produces their collective behavior (red liquid, blue liquid).

Figure 2. Schematic of autocatalysis. Dots are components and arrows are processes by which components influence each other, with reactants A, B as inputs and products P, Q as outputs. As autocatalysis evolves, the level of activity increases (arrows thicken): growth and centripetal amassing of energy and material amplify overall activity, while selection and competition prune less efficient parts and links, so that processes most participatory in the autocatalysis are selected and grow at the expense of those less participatory. [Adapted from Figure 3.8 in Ulanowicz, R. E. (1997). Ecology, the ascendent perspective. New York: Columbia University Press.]
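Although the article itself presents no simulations, the cycling just described can be reproduced numerically. The sketch below integrates the standard Brusselator rate equations corresponding to the four reactions of Figure 1, with the reactant concentrations A and B held fixed; the parameter values and integration settings are illustrative assumptions only.

```python
# Minimal sketch of the Brusselator rate equations (reactants A and B held constant).
# Parameter values are assumptions chosen to place the system in its oscillatory regime.
from scipy.integrate import solve_ivp

A, B = 1.0, 3.0  # fixed reactant concentrations; sustained oscillation requires B > 1 + A**2

def brusselator(t, state):
    X, Y = state
    dX = A - (B + 1.0) * X + X**2 * Y   # X**2 * Y is the autocatalytic step (Y + 2X -> 3X)
    dY = B * X - X**2 * Y
    return [dX, dY]

solution = solve_ivp(brusselator, (0.0, 50.0), [1.0, 1.0], max_step=0.05)
# solution.y[0] and solution.y[1] cycle indefinitely on a limit cycle, the analogue of the
# continuous red/blue color alternation of the stirred mixture.
```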
Figure 2 is a succinct summary of how autocatalysis, conceived in its most
general form, grows the system and tunes the system. In the growing of the system and the tuning of the system it seems to act in the capacity of formal cause
(Juarrero, 1999). The macroscopic relations defining a B-Z pattern (as system)
persist unchanged even though the particular collection of molecular components constituting the pattern at any one time is continuously changing.
2.2 Micro–macro mutualism
Figure 3 is complementary to Figure 2. It portrays how the molecular components that constitute the liquid cohere to form a distributed emergent pattern
that acts to constrain, in continuous fashion, the behavior of the molecular
components. This co-mutation (Kugler & Turvey, 1987, 1988) is typical of the
self-organizing phenomena of excitable media: the whole inherits its properties
from the parts that compose it and the parts inherit their (interactional) properties from the whole that comprises them. Analogous processes are suggested
within the psychology of language. For example, one can note (e.g., Langacker,
1987) the seemingly joint emergence of word meanings and sentence meaning (Figure 4 Upper).

Figure 3. Micro–macro mutualism in the B-Z reaction: interactions among molecular components entail an emergent distributed whole, which in turn entails (constrains) those interactions.

Figure 4. Micro–macro mutualism in language processes. Upper figure: Word meanings and sentence meanings emerge jointly (interacting word meanings and emergent sentence meaning entail one another). Lower figure: Mutuality of (physical) phonetics and (cognitive) phonology, with high-dimensional articulation/acoustics and low-dimensional phonology entailing one another.

A further analogous process is implied by theoretical efforts
(e.g., Browman & Goldstein, 1995) aimed to address the gap between the categorical (discrete) distinctions among words that formal phonology addresses
and the gradient (continuous) distinctions among words that are the subject
matter of phonetics. Traditionally, cognitive phonology is held distinct from
physical phonetics. A major proposal for dissolving this phonology-phonetics
divide assumes a mutuality (see Figure 4 Lower) of the categorical distinctions
(analogous to the stable B-Z patterns) and the gradient distinctions (analogous
to the graded continuous motions of the B-Z components).
2.3 Control parameters and new qualitative states
If the Petri dish is tilted very slightly and very slowly, the dynamic pattern of
concentric circles generated in the B-Z reaction will at some degree of tilt transform to a spiral. The tilting is a particular kind of parameter, often referred to as
a control parameter, that moves the chemical mixture through different stable
states manifest as different dynamic patterns.
Control parameters in self-organizing phenomena are typically quantities
that can be continuously scaled up or down, such as tilt in the above example.
At critical magnitudes of the control parameter, the pattern changes. Outside
those values, the extant pattern is invariant to the parameter’s continuous
changes. The critical values are points at which the extant pattern becomes
unstable and gives way to a new pattern that is stable under the new prevailing
conditions.
Control parameters are at some logical distance from the discontinuous
patterns they give rise to. They are patently not resemblances of, containers of,
or codes for, the patterns. Tilt affects the chemical reactions but, clearly, it is
not an ingredient or integral part of those reactions. For the general case, control parameters are subtle and non-obvious, and identifying them and defining
their link to the patterns that they enable is challenging. For the general case,
a context sufficient for producing a self-organized dynamic pattern includes
many necessary conditions that are so indirectly related to the pattern that they
go unnoticed and unnamed. Because they are so normal, so commonplace,
they get taken for granted. As we have just noted, for the self-assembly of concentric rings in the B-Z reaction depicted in Figure 1 a necessary condition is
that the Petri dish be horizontal.
Consider the following non-obvious influences on highly stereotypic perception-action abilities often labeled as ‘unlearned’ or ‘innate’. Chicks handle
mealworms in a species-typical fashion. If they are prevented from seeing their
toes during the first two days after hatching, they do not pick up mealworms
and do not eat them in the standard ways. Rather, they simply stare at them
(Wallman, 1979). Squirrel monkeys fed during rearing with live insects develop
the species-specific avoidance behavior toward snakes. If they are instead fed
fruit or chow, the fear of snakes fails to emerge (Masataka, 1994). It seems that
there is an experiential context, a possibly dense network of experienced contingencies, which entails the emergence of stereotypic mealworm eating and
stereotypic snake avoidance. The stereotypic behaviors fail to emerge when aspects of the context — however subtle, however inconspicuous — are omitted.
The potential significance of the concept of control parameter to the mental lexicon is threefold. First, it allows the hypothesis that the spectrum of
growth-in-word-knowledge trajectories could arise from a single function or
a few functions (see van Geert, 1995). Second, it allows the hypothesis that
continuous changes in one control parameter or a few control parameters lead
the growth of word knowledge through discontinuous, qualitatively distinct
phases. Third, it allows the hypothesis that there are non-obvious, non-lexical
contributors to lexical formation whose omissions are likely to eventuate in
substandard word knowledge.
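As a toy illustration of the first two hypotheses, a single growth rule can be driven through qualitatively distinct regimes by continuously scaling one control parameter. The sketch below uses the logistic map purely as an illustrative stand-in; it is not van Geert's model, nor a model proposed here, and the parameter values are arbitrary.

```python
# Toy sketch: one growth rule, one control parameter r, qualitatively distinct regimes.
# The logistic map is an illustrative stand-in, not a model proposed in the article.
def trajectory(r, x0=0.1, steps=200):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))   # growth limited by a carrying capacity
    return xs

for r in (1.5, 2.8, 3.3, 3.9):                   # continuous scaling of the control parameter
    print(r, [round(x, 3) for x in trajectory(r)[-4:]])
    # the tails show, in turn: a fixed point, a different fixed point, a period-2 cycle,
    # and an irregular (chaotic) regime -- discontinuous qualitative change arising from
    # continuous parameter change
```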
For many instances of self-ordering phenomena the control parameter is
number, precisely, the number of things that are interacting (Kugler & Turvey,
1987). Order emerges at critical sizes. In part, this is because the nonlinear interactions or constraints within the ensemble are soft molded and weak (rather
than hard molded and strong) with the consequence that they affect the structure at the ensemble level only when the number of them is sufficiently large.
By analogy, we can expect the orderliness of the mental lexicon to be a function
of the number of its constituent words. Further, the weak constraints governing
the lexicon’s order — the interactions between words — are likely to occur on
an increasing number of dimensions as ever more co-occurrences and contextual embeddings are encountered. Arguably, weak interactions among multiple
lexical items on multiple dimensions provide the engine for the surprisingly
rapid growth (10–15 new words per day) in a schoolchild’s word knowledge
(Landauer & Dumais, 1997).
3. Lexicon as inherently impredicative
A potentially telling feature of a real dictionary (e.g., the Oxford or Webster’s
for English) is that it defines words through other words. Take a definition and
replace each word by its definition. Then for each of those definitions replace
each word in the definition by its definition, and so on. At some point in this
process the original word is likely to reoccur. The implication is that definitions in a dictionary are circular (Décary & Lapalme, 1990). Given this fact of
a standard dictionary, one can imagine asking whether a dictionary could be
purposely constructed (as opposed to autonomously evolved) based on definitions that are not self-referring.
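The circularity at issue can be made concrete with a small search over a toy dictionary. In the sketch below the entries are invented for illustration; the procedure simply expands definitions word by word and reports a chain that returns to the starting word.

```python
# Sketch: detect definitional circularity in a toy dictionary (entries are invented).
def find_cycle(dictionary, start):
    # Depth-first expansion of definitions; returns a chain of words leading back to start.
    stack, seen = [(start, [start])], set()
    while stack:
        word, path = stack.pop()
        for defining_word in dictionary.get(word, ()):
            if defining_word == start:
                return path + [defining_word]          # the definitions are circular
            if defining_word not in seen:
                seen.add(defining_word)
                stack.append((defining_word, path + [defining_word]))
    return None                                        # no circularity found

toy = {
    "big": ["large"],
    "large": ["great", "size"],
    "great": ["big"],        # big -> large -> great -> big
}
print(find_cycle(toy, "big"))   # ['big', 'large', 'great', 'big']
```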
These brief remarks on the ordinary dictionary bear, perhaps surprisingly,
on the important challenge of identifying in what precise sense a system is
complex rather than simple. A modern analysis equates complex with the necessity of circular definitions in a formal model of a natural system’s causal
entailments (Rosen, 1991, 2000). This difficult notion, interpreted below, is the
basis for a second metaphor. One promise of this metaphor is an appreciation
that words and their linguistic functions may be context dependent in a primary, foundational sense rather than a secondary, derivative sense.
Traditionally, treatments of the notions of simple systems and complex
systems have presumed a gradient of complexity (e.g., von Neumann, 1966).
A system is more complex to the extent that it has more distinguishable components and more interactions constraining them. Movement up the gradient
(toward greater complexity) is by accretion and movement down the gradient (toward greater simplicity) is by deletion. An important presumption has
been that adding a simpler system into a larger and, thereby, more complex
system does not change the simpler system’s material basis, does not change its
substance. Movement on the gradient toward more complexity preserves the
context independence of the simpler component systems.
The requirement of context independence of components means that, formally, each must be fully predicative. In respect to building a system of logic
or mathematics, predicativity identifies a strategy in which (a) all axioms or
propositions are constructed from definitions that are context-independent
and (b) all semantic aspects are converted systematically to syntactic forms.
The strategy, in theory, is applicable to the building of dictionaries.
The understanding of predicativity is advanced by consideration of its opposite. As indicated in Figure 5, a non-predicative definition amounts to referring to the whole when defining the parts that will compose the whole or referring to the system when defining the subsystems that will compose the system.
In Kleene’s (1950, p. 42) words, impredicativity expresses the fact that “[w]hat
is defined participates in its own definition.” The prescription then for achieving predicativity is seemingly straightforward: Disallow any set S that contains
members m definable only in terms of S, or members m involving or presupposing
S. This guideline is the Vicious Circle Principle promoted by Russell, Poincaré, and Hilbert in the early part of the 20th century.

Figure 5. Comparison between impredicative and predicative definitions. The predicative direction of definition (understanding, explanation, entailment) runs from parts through subsystems to the system and its function; the impredicative direction runs from the system and its function to the subsystems and parts.

The Principle's recommendation is simple:
‘prohibit impredicative loops’. By so doing one could ensure reasoning without contradiction, one could guarantee a formal system without ambiguity. For
Rosen (1991, 2000), the primary lesson of Gödel’s incompleteness theorem is
the fundamental inability to satisfy the recommendation: not all the self-referring impredicative loops of a formal system are removable. The theorem suggests that, rather than being generic, logical or mathematical systems that are
fully predicative are special or non-generic. The important generalization for
Rosen is that systems describable strictly in predicative terms, systems that are
thereby computable or simulable, are rare.
The contrast between predicative and impredicative and its implications
are brought into focus by the set theoretic comparisons made in Figure 6. Conventional sets satisfying the Foundation Axiom (Figure 6, left) and, thereby,
predicativity, are represented by graphs that contain no loops (Aczel, 1988;
Barwise & Etchemendy, 1987). Hypersets satisfying the Anti-Foundation Axiom (Figure 6, right) and, thereby, impredicativity, can include loops in their
graphic representations (Aczel, 1988; Barwise & Etchemendy, 1987). In Figure 6 (right), the B element in the set defining A includes A in its definition.
We now have, in Figure 6, a working interpretation of Rosen complexity
(Kercel, 2003). Any natural system whose causal entailments can be represented formally by a set diagram without loops is a simple system. Conversely, any natural system whose causal entailments can only be represented formally by a set diagram that contains loops is a complex system. On this image, the context-dependence and self-reflexive interpretation of many words, if not most words, implies a diagram with loops and, thereby, an understanding of the mental lexicon as complex in Rosen's impredicative sense.

Figure 6. Comparison between the Foundation Axiom and sets (left) and the Anti-Foundation Axiom and hypersets (right). Left: no set can include itself, no circularity (predicativity); for example, C = {A, B}, A = {X, Y}, B = {Y, A}. Reasoning without contradiction: permits full recursion, proscribes ambiguity, fits artificial languages. Right: a set can include itself, circularity (impredicativity); for example, C = {A, B}, A = {X, Y, B}, B = {Y, A}. What is defined participates in its own definition: proscribes full recursion, permits ambiguity, fits natural languages.
The special relevance of the foregoing conclusion is the emphasis it gives to
developing accounts of lexical organization and performance through the tools
of Aczel’s (1988) hyperset theory. The broader significance of hypersets for the
formal study of cognition is highlighted by Barwise (Aczel, 1988, p. xii): “It
seemed that in order to understand common knowledge (a crucial feature of
communication), circular propositions, various aspects of perceptual knowledge and self awareness…we either had to give up the tools of set theory which
are so well loved in mathematical logic, or we had to enrich the conception of
set, finding one that admits of circular sets, at least.”
3.1 Categories of causal entailment are intertwined and mutable
Inquiry into the defining nature of complex systems has suggested a subtle
feature that may enrich the self-organization metaphor’s value. As depicted in
Figure 7, the classical (Aristotelian) categories of causal entailment can be independently segregated from one another in Newton’s formalism for motion
and change (Rosen, 1991). The initial conditions are the material cause, force
is the efficient cause, and the particle’s mass is the formal cause (it ‘personalizes’ the particle’s motion, whatever the initial conditions and whatever the
force). Rosen’s (1991, 2000) analysis of complex suggests that this straightforward segregation and independence of causal categories is likely to be absent
in self-organizing systems. A further characteristic of systems that are complex
as opposed to simple is that their categories of entailment are intertwined and
mutable, often interchanging in novel ways (Rosen, 1987). That such might be the case for endogenous systems — those that make themselves up as they go along (Kercel, 2003) — was anticipated more than two centuries ago in Kant's The Critique of Judgment (1790/2000, Sections 64–66).

Figure 7. Segregation of categories of causal entailment in Newton's formalism: the initial conditions (initial position and velocity) are the material cause, the force F is the efficient cause, and the mass m is the formal cause.
For Kant, biological things are not machines. Machines neither produce
nor reproduce themselves. For any given thing that we recognize as a machine,
for example, a clock or an airplane, the following characteristics seem to hold.
The parts of the thing exist for, but not by means of, each other. The parts
act together to meet the thing’s purpose; their actions, however, have nothing
to do with the thing’s construction. The thing and its parts rely significantly
upon causal entailments arising from outside themselves for their origin and
function.
In contrast, for any given biological thing, for example, a fly or a tree, the
following characteristics seem to hold. The parts of the thing are both causes
and effects of the thing; they are not only the means but also the ends. The parts
construct and maintain themselves as a unity, each existing by virtue of, and for
the sake of, the others and the whole. The thing and its parts are themselves the
source of the causes for their origin and function.
In the course of a tree’s self-assembly or the self-assembly of a B-Z pattern, causal categories mutate and intertwine. The categories of the lexicon, like
the causal categories of a self-assembling system, are known to mutate. Nouns
become verbs (paper → to paper, [as in, he papered the kitchen]), and prepositions/adverbs become verbs (up → to up [as in, she upped the price]). Further, in
terms of coarser divisions, debates flourish on the separateness of word knowledge (the category lexicon) and rule knowledge (the category grammar). Like
material and formal causes, they can be seen as unequal, but adoption of the
Rosen complexity metaphor allows that they may be intertwined. That is, the
categories lexicon and grammar can be functionally distinct but inseparable
(as advocated, perhaps, by a lexicalist account, e.g., Bates & Goodman, 1997).
4. Words as quantum compatible objects
Whereas an apparently sound working assumption for science, from its earliest beginnings to the present, would be that to measure something is to gain
knowledge about a preexisting state, quantum mechanics teaches that some
things have no specific states until they are measured. This central lesson of
quantum mechanics is not that measuring can affect things that are measured.
That is, the lesson is not that measuring perturbs a particular thing from a definite but unknown state (that it happens to be in) to some other state. Rather,
quantum mechanics teaches that, for some things, measurement gives definition to a thing’s nature that was indefinite prior to measurement. A previously
indeterminate thing is forced (as it were) by measurement to assume a definite
appearance (Lindley, 1996).
Quantum mechanics is a powerful theory of the micro-world. But it
is much more than that. It is a collection of ideas that address issues of (a)
context specificity of states of affairs and (b) emergence of new properties
from conjunctions of states of affairs. Issues of context and conjunction are
not necessarily unique to the micro-world. They are certainly not foreign to
the student of the mental lexicon. The proposal in this section is that certain
benefits accrue from treating words as compatible with quantum principles —
that is, treating them as quantum objects (Gabora & Aerts, 2002). The metaphor
does not motivate generalizing the physics of quantum mechanics but, rather,
generalizing the lessons and logic of quantum observations. The promise is a
formalism to accommodate issues surrounding word comprehension in the
context of other words and in the contexts provided by environmental and
social settings.
4.1 Word as potentiality actualized by context: Analogues to the measurement problem and entanglement
The well-known uncertainty principle of quantum mechanics crystallizes a
class of observations in which a thing’s nature is defined by the system composed of the thing and the measure performed upon it. It is an instance of the
context specificity identified above. Change the whole of which the thing is a
part (change the measurement system) and the thing’s nature changes. Figure 8
(left) schematizes the familiar example in quantum mechanics of measuring
the position and velocity of an indefinite particle. The propertied thing defined (impredicatively) by the position-measuring system P is not the same
propertied thing as that defined (impredicatively) by the velocity-measuring
system V. The uncertainty principle is perhaps better stated as the principle of
non-commutativity of observations (Birkhoff & von Neumann, 1936). It is an
assertion that most pairs of observations are incompatible and cannot be made
on a given thing simultaneously.
The familiar example of Figure 8 (left) captures the so-called measurement
problem. For our purposes the problem can be generalized to any circumstance
in which the context is very different in kind from the indefinite thing with
which it forms a system. A typical circumstance is schematized in Figure 8 (right).

Figure 8. The generalized uncertainty principle (or the principle of non-commutative observations). Left: an indefinite thing acquires different propertied natures in the P-system (position meter plus thing) and in the V-system (velocity meter plus thing). Right: the indefinite word bank acquires different senses in the "river" system (river environment plus word) and in the "town" system (town environment plus word).

In the environmental context of a river, the river and the indefinite
word bank form a system, the river system, which ascribes to bank the notion
of the rising ground bordering the river. In the environmental context of a
town, the town and the indefinite word bank form a system, the town system,
which ascribes to bank the notion of an establishment for the custody, loan,
exchange or issue of money.
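A crude way to render this picture computationally is to treat the indefinite word as a weighted collection of senses that a context cue actualizes. The sense labels and weights below are invented for illustration; the sketch is a caricature of the metaphor, not the quantum formalism taken up in Section 4.2.

```python
# Toy sketch: a word as a superposition of context-specific senses, actualized by a cue.
# Sense labels and weights are invented for illustration only.
SENSES_OF_BANK = {
    "rising ground bordering a river": {"river": 0.9, "town": 0.05},
    "establishment for the custody of money": {"river": 0.1, "town": 0.95},
}

def actualize(senses, context_cue):
    # "Collapse" the indefinite word: select the sense most strongly tied to the context.
    return max(senses, key=lambda sense: senses[sense].get(context_cue, 0.0))

print(actualize(SENSES_OF_BANK, "river"))   # rising ground bordering a river
print(actualize(SENSES_OF_BANK, "town"))    # establishment for the custody of money
```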
In the micro-world, when quantum objects combine they do not remain
separate but instead enter into a state of entanglement. The emergent entangled
state has new properties. In the conjunction of quantum objects A and B, A is
the context for B and vice versa. But unlike the situation described for the measurement problem, the context A is of like kind to the thing B that it contextualizes. In a typical example from the domain of micro-physical phenomena, A
and B are particles of the same type (e.g., both are electrons).
Figure 9 presents examples of the emergence from conjunction. The focus
in Figure 9 is upon a word in combination with another word (or other words)
in comparison to the focus in Figure 8 that was (intended to be) upon a word
in an environmental setting. In the example of the past tense of the word bake
(Elman, 2004), the word forms a change-by-heating system when in combination with Jim and potato and a creation-through-application-of-a-recipe system when in combination with the words Jim and cake. The meaning of bake
is emergent.
The storyline is similar in the example of the word long. Its emergent connotation is negative (arduous, annoying) in combination with the word day
and positive (fulfilling, experientially rich) in combination with the word life.
A slow-passage-of-time system is formed by the conjunction of long and day
and a many-years system is formed by the conjunction of long and life.
Figure 9. Meaning emergence from word conjunctions expressed in terms of the generalized uncertainty principle: the indefinite word baked actualized in a "change" system (Jim ____ a potato) versus a "create" system (Jim ____ a cake), and the indefinite word long actualized in a "slow" system versus a "many" system (____ life).
The distinguishing feature of an entangled state, say of two things A and
B, is that it cannot be factored, it cannot be construed as a composite of two
individual states, in the sense classically understood in physics and logic. The
metaphor of an entangled state for a two-word combination implies that the
meaning of the combination is not the combination of the individual meanings. In an ordinary dictionary, the entry pet does not include the notion of
‘lives in cage’ and the entry bird does not include the notion of ‘talks’ but both
notions are at play in the combination pet bird. Likewise, in the typical lexicon
of the typical speaker, guppy is a poor example of pet and a poor example of fish
but it could be a good example of pet fish. The so-called guppy effect may be a
particularly compelling lexical analogue of quantum entanglement (Gabora &
Aerts, 2002).
Entanglement suggests a primary conceptual shift from the dictionary algorithm of the dictionary metaphor. The mental implementation of such an
algorithm implies a metric that defines a distance between words in the internal lexicon (much as the distance between words in an ordinary dictionary is
defined by their alphabetic spelling). In sharp contrast, entanglement implies
that words are intertwined rather than distanced from each other (Gabora
& Aerts, 2002). The details of this intertwining or interweaving of words-as-quantum objects are only visible in the patterns of context dependencies that
the words exhibit. The notion of closeness between words is context dependent.
For example, the word egg is proximate to sun in the context of “sunny side up”
but distal from sun in the context of “scrambled” (Gabora & Aerts, 2000). To
repeat, as a quantum object a word is a potentiality in need of a context to be
made actual. If one wished to describe it in a context-independent manner it
would be as a superposition of all of the word’s possible context-specific senses.
The conventional notion of distance in lexical space loses theoretical value in
the face of the entanglement-motivated thesis that the ‘collapse’ of a word’s
superposed senses to an actualized sense can be induced by any subtle correlation between word and context.
4.2 A generalized quantum formalism?
As remarked, the metaphor implies that the formal analysis of quantum phenomena provides a guide to the construction of methods for accommodating
the potentiality and context dependence of the mental lexicon. A theory of
words developed in lattice structures or Hilbert space, that exploits the conventional procedure in quantum mechanics for describing combinations of quantum objects, promises to model minimal combinations of the kind pet bird, pet
fish and, on elaboration, any arbitrary combination of multiple words (Aerts &
Gabora, 2005a, 2005b). Given the current level of conceptualization, a general
quantum-formalism of a word would involve an entire structure consisting of
three sets (all possible senses, all possible contexts, and all their property relations) and two functions (a probability function for the transformation from
any one sense in any one context to any other sense in any other context, and a
weighting function applied to properties, given a conjunction of sense and context). It goes without saying that implementing the formalism rests on quantitative analyses of large corpora of words. Such analyses are underway in varied
forms (e.g., Aerts & Czachor, 2004; Landauer, 2002; Lund & Burgess, 1996).
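To give the shape of such a structure, a rough data-structure sketch of the three sets and two functions just enumerated follows. The names and types are assumptions introduced for illustration; they are not the lattice or Hilbert-space constructions of Aerts and Gabora (2005a, 2005b).

```python
# Rough sketch of the "three sets and two functions" enumerated above. All names and
# types are illustrative assumptions, not the Aerts and Gabora formalism itself.
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

@dataclass
class WordStructure:
    senses: FrozenSet[str]                     # all possible context-specific senses
    contexts: FrozenSet[str]                   # all possible contexts
    properties: Dict[Tuple[str, str], float]   # (sense, property) relevance relations
    # probability of passing from one sense in one context to another sense in another context
    transition: Callable[[str, str, str, str], float]
    # weight of a property, given an actualized conjunction of sense and context
    weight: Callable[[str, str, str], float]
```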
4.3 A need for quantum logic?
Investigations into how computers might learn the meanings of words by reading large amounts of text have pointed to the potential relevance of quantum
logic. In its original form, as crafted by Birkhoff and Von Neumann (1936),
where standard logic is founded on sets and subsets, quantum logic is founded
on vector spaces and subspaces. The promise of search engines endowed with
the logical connectives of quantum logic is an extension of their capability beyond word matching to comparing word meanings (Widdows, 2003; Widdows
& Peters, 2003).
One strategy for learning meanings involves mapping words to points in
a high-dimensional space — a vector space — by recording the frequency in
texts with which any one word co-occurs with any other word (e.g., Schütze,
1998). The distribution of co-occurrences between a word and some set of content-bearing terms connects words with similar meanings, providing a profile of the word’s usage. One desirable operation on this vector space of words
would be tuning to relevant information by selecting out unwanted information (e.g., the clothing sense of suit dissociated from the legal sense of suit).
Vector negation and disjunction can be combined to perform the operation
for any number of unwanted senses: a NOT (b1 OR … OR bn). For example,
to focus on the geological sense of rock, one needs rock NOT band, Arkansas,
oscillation… and so on.
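The negation operation lends itself to a brief sketch. Following the subspace treatment of Widdows and Peters (2003), a NOT (b1 OR … OR bn) can be modeled as the projection of a onto the orthogonal complement of the span of the unwanted-sense vectors; the three-dimensional word vectors below are invented stand-ins for genuine co-occurrence vectors.

```python
# Sketch of quantum-logic negation over word vectors: project the target vector onto the
# orthogonal complement of the span of the unwanted senses. The 3-d vectors are invented
# stand-ins for co-occurrence-derived vectors, purely for illustration.
import numpy as np

def negate(target, unwanted):
    B = np.array(unwanted, dtype=float).T    # columns span the unwanted senses
    Q, _ = np.linalg.qr(B)                   # orthonormal basis for that span
    return target - Q @ (Q.T @ target)       # component of target orthogonal to the span

rock = np.array([0.7, 0.5, 0.5])             # toy vector mixing geological and musical usage
band = np.array([0.0, 1.0, 0.2])             # toy vector for the musical sense
rock_not_band = negate(rock, [band])
print(round(float(np.dot(rock_not_band, band)), 6))   # ~0: the musical component is removed
```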
We need only remark on vector disjunction to show the linkage of the
preceding operation to quantum observations. The well-known results of the
two-slit experiment with electrons question the starting assumption that each
electron fired from an electron gun at a wall with two holes either goes through
hole A or it goes through hole B (Feynman, Leighton & Sands, 1972). Classical
logic couched in set theory treats the results by the union A ∪ B meaning that
if e is an electron that has passed through the wall, then either e ∈ A is true
(e passed through A) or e ∈ B (e passed through B) is true. That neither need
be true is addressed in Birkhoff and Von Neumann’s quantum logic by interpreting the outcomes for A and B as subspaces of a vector space (Widdows &
Peters, 2003). The disjunction is then the vector sum A + B that is larger than the vector union A ∪ B. That is, there can be members of A + B that are in neither A nor B, meaning that the question of whether e went through A or B is an
inappropriate question. One cannot demand an answer to “It is true, or it is not
true, that the electron either goes through hole 1 or hole 2?” (Feynman et al.,
1972, p. 37–39). It would be tantamount to asking the question (see Figure 8),
“Which of the meanings of bank is the true meaning?” More broadly, the use
of the quantum logic forms of AND, OR, and NOT to represent ambiguous
words and to remove unwanted meanings, parallels the means by which quantum mechanics addresses the collapse of superposed states to a single state
(Widdows, 2003).
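The contrast between the set-theoretic union and the vector sum can be seen in a two-dimensional toy example (not drawn from the cited sources): a vector lying in the span of two one-dimensional subspaces need not lie in either subspace.

```python
# Toy 2-d illustration: the quantum-logic disjunction of subspaces A and B is their span
# (vector sum), which contains vectors belonging to neither A nor B alone.
import numpy as np

A = np.array([1.0, 0.0])          # one-dimensional subspace "through hole A"
B = np.array([0.0, 1.0])          # one-dimensional subspace "through hole B"
e = 0.6 * A + 0.8 * B             # lies in the span A + B

def colinear(u, v):
    # two 2-d vectors are colinear exactly when their 2x2 determinant vanishes
    return bool(np.isclose(u[0] * v[1] - u[1] * v[0], 0.0))

print(colinear(e, A), colinear(e, B))   # False False: e is in A + B yet in neither subspace
```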
5. Lexicon as a repertoire of catalysts
Figure 10. Zipf's law presented as a constancy of the logarithm of the cumulative sum across logarithmic rank intervals. Left panel: log word frequency versus log rank of word frequency. Right panel: log cumulative sum of frequencies for successive log rank intervals (10–20, 20–40, …, 1280–2560), showing equal lexical power at equal logarithmic intervals.
Nonlinear thermodynamic systems operate in a variety of modes. Switching
among modes is typically tied to critical values of a control variable. In the Belousov-Zhabotinsky reaction, tilting the dish beyond a certain degree induces
a switch between two modes of organization, namely, concentric rings and spirals. The Rayleigh-Bénard convection instability provides another well-known example. A fluid in a Petri dish, heated from below and with a constant cooler temperature maintained from above, evolves into hexagonal cells, oscillating
rolls, and finally turbulence, as the heating from below is amplified. For both
examples, the switching of modes is (relatively) energy expensive. However,
when the number of modes is very large, and when they are defined at many
time scales, a more general, lower-energy, and productive means of bringing
about modal shifts is required. This requirement points to a physical language
that is based on a generalization of the process of catalysis — the facilitation
of chemical reactions without adding to or reducing their energetic content.
It has been proposed that in any sufficiently complex system, a language-like
coordination and control capability arises out of whatever physical materials
and processes are available that can be tied together into catalytic operations
(Iberall, 1983; Iberall & Soodak, 1987; Iberall, Soodak, & Hassler, 1978).
The role of these generalized catalysts and the language they compose is
to facilitate changes in organization — transitions between modes — at little
cost. (Physically speaking, catalysts qua words and word combinations are
low-energy operators.) Because a system’s modes are defined at many different
frequency scales, mode switching, and the attendant generalized catalysts, are
likewise defined at multiple frequency scales. For adaptability and persistence,
all the modal processes at all of the time scales should be in an equivalent state
of readiness. Zipf's law of word usage provides the point of entry into the latter notion. In Figure 10 (left) the law is presented in its fairly standard form of
word frequency versus rank of word frequency in double logarithmic coordinates. In Figure 10 (right) the law is presented as the logarithm of the cumulative sum in each logarithmic rank interval. For the three intervals (decades)
shown, it is evident that the cumulative sum is constant. If each word viewed as
a generalized catalyst entails a cost C, then the implication of Figure 10 (right)
is that the readiness requirement is met by distributing the system’s linguistic
power equally over all frequency bands (scaled logarithmically). For a band of
relatively few but very frequent mode switches (generalized catalysts, words)
and a band of relatively many but infrequent mode switches (generalized catalysts, words), the respective magnitudes of ΣC are the same. This log-uniform
distribution of linguistic power is most cost effective from the perspective of
the language-using system. From the perspective of physics, however, it identifies language (paradoxically) as being very close to noise (Iberall & Soodak,
1987).
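The constancy claimed for Figure 10 (right) is easy to check numerically for an idealized Zipfian distribution. The sketch below assumes frequency exactly proportional to 1/rank, an idealization of real word counts; the summed lexical power in each doubling rank band then comes out approximately constant.

```python
# Numerical check: for an idealized Zipf distribution f(rank) ~ 1/rank, the summed
# frequency ("lexical power") in each doubling rank band is roughly constant (~ln 2).
import numpy as np

ranks = np.arange(1, 100001)
freqs = 1.0 / ranks                              # idealized Zipf's law, exponent 1

for lo, hi in [(10, 20), (20, 40), (40, 80), (1280, 2560)]:
    power = freqs[(ranks >= lo) & (ranks < hi)].sum()
    print(f"ranks {lo}-{hi}: summed frequency = {power:.3f}")
```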
The physical metaphor of words as generalized catalysts facilitating modal
organizations has a parallel in the contemporary literature on the mental lexicon. Elman (1995, p. 207) has suggested that “Words are not the objects
of processing as much as they are inputs which drive the processor in a more
direct manner.” And, further (Elman, 2004, p. 301) that “[w]ord knowledge
might be instantiated through the word as ‘operator’ rather than as ‘operand’.”
Perhaps usefully, the dynamics facilitated by a generalized catalyst provides the
simile for the idea that the properties of a spoken or written word are carried
in the processes driven by the word.
6. Lexicon as self-organized criticality
A function that can be expressed as a straight line in logarithmic coordinates,
a function of the kind shown in Figure 10 (left), is often referred to as a power
law function. It satisfies the equation y = f(x) = x^a (assuming, for simplicity, an
intercept of unity). The significant features of a system abiding by power-law
behavior are: (a) it has no characteristic measure (no defining mean), and (b) it
is scale invariant (same dynamics at every scale). To elaborate on (b), let a particular value of x be multiplied by k. Then, f(kx) = k^a x^a and the relative change in the dependent variable from x to kx is given by f(kx)/f(x) = k^a x^a / x^a = k^a. That
is, the relative change in f(x) is independent of x, independent of scale.
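The scale invariance in (b) can be verified numerically in a few lines; the exponent and test values below are arbitrary.

```python
# Quick check of scale invariance for a pure power law f(x) = x**a: the ratio
# f(k*x) / f(x) equals k**a regardless of x. Exponent and values are arbitrary.
a, k = 1.5, 3.0
for x in (2.0, 10.0, 500.0):
    print((k * x) ** a / x ** a, k ** a)   # the two numbers agree for every x
```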
Figure 11. The Gutenberg-Richter law for earthquakes, plotted in double logarithmic coordinates (left), and the law in its Zipfian form, as a function of the rank of earthquake size (right).
An illuminating power law is that which applies to earthquakes, the Gutenberg-Richter law shown in Figure 11 (left) and shown in its Zipfian form in
Figure 11 (right). To set the stage, there are many more small quakes than
larger quakes and although the ground motion of an earthquake of magnitude 8 on the Richter scale is ten million times that of an earthquake of magnitude 1, the
covering power law suggests that the processes responsible for the quakes of
magnitude 8 and magnitude 1 are the same. No special role separates the large
earthquakes from the small. Whatever the theory of earthquakes it must speak
to all sizes and not be limited to just the newsworthy — it must speak equally
to subtle tremors and catastrophic land shifts (Bak, 1996).
What the Gutenberg-Richter law illuminates is the challenge of understanding the principle by which very complicated processes (e.g., those of the
earth’s crust, with all of its geographic and geological features at many spatial scales, and their dynamics at many time scales) can be condensed into an
extremely simple relation (e.g., for earthquakes, that between frequency and
size). From one much-discussed perspective the understanding is that the system as a whole is in a delicately balanced state in which direct interactions
limited to nearest neighbors effectively reach across the entire system (Bak &
Chen, 1991). In this balanced or critical state any given component can affect
any other component.
We can avail ourselves of a model system, the sand pile of Figure 12. Study
of this system (or some facsimile of it) highlights that the balance is between
two processes at distinct time scales and that the separation of time scales is
owing to a threshold (Jensen, 1998). The sand pile grows from pouring sand
onto a region of a flat surface, like a tabletop. Static friction, which sticks the
individual grains together, makes the upward growth possible. As the height
increases and the slopes become steeper, the continuous addition of more sand,
and the consequent increase in slope, overcomes the friction, causing some grains to move down the pile but not to an extent that the pile can grow no further. At some point, however, a limit (the ‘threshold’) will be reached. The amount of sand added to the pile is then matched on the average by the amount of sand that leaves the pile. The sand pile has organized itself into a state in which mean height and slope are time invariant. In this state, the avalanches (sand falling off at the edges of the pile) that preserve the constancy of the sand pile under continued adding of sand occur in varied sizes that order as a power law of frequency. This outcome, depicted in Figure 12, is of like kind with the Gutenberg-Richter law. The falling sand can start a chain reaction that can affect any number of grains in the system. Most frequently the avalanches will be very small, less frequently they will be of a larger moderate size, and even less frequently they will be close to spanning the entire pile.

Figure 12. The sand-pile model of self-organized criticality: avalanche sizes distribute as a power law (log avalanche frequency versus log avalanche size).
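A standard computational stand-in for this picture is the Bak-Tang-Wiesenfeld lattice model. The sketch below is that generic model, with arbitrary lattice size, threshold, and grain count, rather than a model specified in the text; once the lattice has organized itself to the critical state, a log-log histogram of the recorded avalanche sizes approximates the power law of Figure 12.

```python
# Generic Bak-Tang-Wiesenfeld sandpile sketch (lattice size and grain count are arbitrary).
import numpy as np

rng = np.random.default_rng(0)
N, THRESHOLD = 50, 4
grid = np.zeros((N, N), dtype=int)
avalanche_sizes = []

for _ in range(20000):
    i, j = rng.integers(0, N, size=2)            # "drop a grain of sand" at a random site
    grid[i, j] += 1
    size = 0
    while True:
        unstable = np.argwhere(grid >= THRESHOLD)
        if unstable.size == 0:
            break                                # relaxation finished; the pile is stable
        for x, y in unstable:
            grid[x, y] -= 4                      # the site topples...
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= x + dx < N and 0 <= y + dy < N:
                    grid[x + dx, y + dy] += 1    # ...shedding grains to its neighbors
                # grains toppled over the edge simply leave the pile
    if size:
        avalanche_sizes.append(size)
# A histogram of avalanche_sizes on double logarithmic axes approximates the power law
# depicted in Figure 12 once the pile has reached the critical state.
```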
The physical system suggested by the sand pile model (and see Jensen,
1998, and Sethna, Dahmen, & Myers, 2001, for qualification and elaboration),
in conjunction with Zipf's law for words, invites thinking about the mental
lexicon as a system that has organized itself to the critical state. A word experienced visually or aurally is akin to the adding of a grain of sand. It gives rise to
‘lexical avalanches’. Below the critical state (analogous to a more shallow slope),
the lexical avalanches are very small or, synonymously, the information made
available is severely limited. Above the critical state (analogous to a steeper
slope), the lexical avalanches are very large or, synonymously, the information
made available is excessive. Both subcritical and supracritical lexical organizations will typically evolve (the subcritical will grow, the supracritical will collapse) to the critical, delicately poised, lexical organization. At the critical state,
the lexical avalanches can occur at all sizes. The throughput of information is
the highest attainable.
Figure 13. Comparison of component-dominant dynamics (top left: a linear chain of components A … B, each with its own dominant internal dynamics) and interaction-dominant dynamics (top right: interactions dominant across components at multiple nested time scales τ1, τ2, …, τi, …, τn), and the pink (1/f) noise (bottom: log power of normalized RT versus log frequency, 1/number of trials) expected in word naming data if the latter dynamics govern lexical performance.
The criticality metaphor requires that the concept of a delicate or precise
balance be defined for lexical activity. In simulations employing recurrent lexical entries where each node feeds back to itself, McNellis and Blumstein (2001)
elected to embody the critical state in the relation among (a) resting level of
lexical activation, (b) positive (excitatory) feedback dependent on previous activation of the node, and (c) negative (inhibitory) feedback dependent on time
since the node’s initial activation. Subcritical and supracritical states of the lexicon follow, respectively, from lowering and raising the resting level of lexical
activation. McNellis and Blumstein (2001) suggested that the latter distinction,
framed in terms of base activation level, might facilitate understanding of the
word-perception impairments of Broca’s aphasia (interpreted as subcritical)
and Wernicke’s aphasia (interpreted as supracritical).
A major conceptual consequence of pursuing the metaphor is schematized
in Figure 13 (top left). Conventionally, one thinks ‘horizontally’ about the data
of a typical experiment on lexical decision or rapid naming. That is, one thinks
in terms of a number of horizontally arrayed components whose internal dynamics, when integrated, account for the observed lexical performance. We
can refer to this convention as component-dominant dynamics because the intrinsic activities of the components are held to be much more influential, much
more dominant in determining the observed performance, than the interactions among the components (Van Orden & Holden, 2002). In Figure 13 (top
left), the idea of component-dominant dynamics is envisaged in its most basic
form as links in a causal chain — as a linear (or horizontal) sequence of encapsulated devices, each a source of efficient cause.
The metaphor of self-organized criticality promotes consideration of an
alternative dynamics and an alternative way of thinking about the data of experiments on lexical activity. The alternative dynamics is interaction-dominant
dynamics (Jensen, 1998; Van Orden, Holden, & Turvey, 2003). The alternative
way of thinking is vertical.
In the dynamics of a system that has self-organized to a critical state the
intrinsic dynamics of the components matter less than the mutual interaction
among components. These mutual interactions occur at multiple embedding
time scales, as Figure 13 (top right) highlights. The causal structure of interaction-dominant dynamics is less like the links of a chain and more like the legs
of a table (it provides the support for the critical state).
The vertical thinking is that lexical activity is an event that nests processes at faster time scales and, in turn, is nested within processes at slower time
scales. Consider rapid naming of a visually presented word. Faster than the
pace of naming trials are neuromotor and vascular processes necessitated by
coordinating the articulators. Slower than the pace of naming trials are the fluctuations in motivation and vigilance across the duration of the experimental
session. The latter examples of nested and nesting time scales are merely two
among very many (Van Orden, Moreno, & Holden, 2003).
The interdependence of temporally nested processes is required of the delicately balanced critical state: the behavior of any one process at any one time
scale must be susceptible to, and reflective of, the behaviors of all processes
at all time scales. The primary hypothesis of lexical engagement that follows
from the interdependence in interaction-dominated dynamics is that the time
series of word responses (e.g., naming) should exhibit scaling of the kind seen
in power laws. More specifically, it should exhibit the scaling relation between
the power spectrum for pronunciation latency and frequency schematized in
Figure 13 (bottom). That is, it should exhibit 1/f noise (pronounced “one over ef” noise) or pink noise — a superposition of signals of all durations. In the
contemporary literature, the issues encompassed by Figure 13 are vigorously
debated (Thornton & Gilden, 2005; Van Orden et al., 2003, 2005; Wagenmakers, Farrell, & Ratcliff, 2005).
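One way to make the hypothesis operational is to estimate the slope of the log-log power spectrum of a trial series: a slope near -1 is the 1/f signature, a slope near 0 is white noise. The estimator sketched below is generic and is run here on white-noise surrogate data rather than real naming latencies; published analyses apply considerably more careful spectral methods.

```python
# Generic sketch: estimate the log-log spectral slope of a response-time series.
# A slope near -1 indicates 1/f (pink) noise; the surrogate data here are white noise,
# included only to show the estimator running (expected slope near 0, not -1).
import numpy as np

def spectral_slope(series):
    series = np.asarray(series, dtype=float)
    series = series - series.mean()                 # remove the mean before the FFT
    power = np.abs(np.fft.rfft(series)) ** 2
    freqs = np.fft.rfftfreq(series.size)
    keep = freqs > 0                                # drop the zero-frequency bin
    slope, _intercept = np.polyfit(np.log10(freqs[keep]), np.log10(power[keep]), 1)
    return slope

rng = np.random.default_rng(1)
print(spectral_slope(rng.normal(size=4096)))        # white-noise surrogate: slope near 0
```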
7. Postscript: Why entertain physical metaphors for the mental lexicon?
In the present article we have situated the concept of the mental lexicon within
a variety of physical perspectives. The perspectives encompass two concerns
high on the agenda of contemporary science: identifying the principles of self-organizing complex systems and understanding the scope and meaning of the
measurement and entanglement problems. Our goal was to provide investigators of the mental lexicon with physically motivated metaphors for thinking
about the acquisition and deployment of word knowledge. The promissory
note is that the metaphors will open the door on new methods, principles, and
formalisms. The more definite and probable expectation is that the metaphors
will increase appreciation for the mental lexicon as an aspect of human nature
deserving of the most advanced conceptions available to science.
Acknowledgements
Preparation of this manuscript was supported by National Institute of Child Health and
Human Development Grant HD–01994 awarded to the Haskins Laboratories. Correspondence concerning this article should be addressed to M. T. Turvey, Haskins Laboratories,
300 George Street, New Haven, CT 06510; e-mail michael.turvey@uconn.edu.
References
Aczel, P. (1988). Non-well-founded sets. Palo Alto, CA: CSLI Publications.
Aerts, D. & Czachor, M. (2004). Quantum aspects of semantic analysis and symbolic artificial intelligence. Journal of Physics A: Mathematical and General, 37 (12), L123-L132.
Aerts, D., & Gabora, L. (2005a). A theory of concepts and their combinations — I — The
structure of the sets of contexts and properties. Kybernetes, 34 (1–2), 167–191.
Aerts, D., & Gabora, L. (2005b). A theory of concepts and their combinations — II — A
Hilbert space representation. Kybernetes, 34 (1–2), 192–221.
Bak, P. (1996). How nature works: The science of self-organized criticality. New York: Springer
Verlag.
Bak, P., & Chen, K. (1991). Self-organized criticality. Scientific American, 264, 46–53.
Barwise, J., & Etchemendy, J. (1987). The liar: An essay on truth and circularity. New York:
Oxford University Press.
Bates, E., & Goodman, J. C. (1997). On the inseparability of grammar and the lexicon: evidence from acquisition, aphasia, and real-time processing. Language and Cognitive Processes, 12, 507–584.
Birkhoff, G., & von Neumann, J. (1936). The logic of quantum mechanics. Annals of Mathematics, 37, 823–843.
Browman, C., & Goldstein, L. (1995). Dynamics and articulatory phonology. In R. Port &
T. van Gelder (Eds.), Mind as motion: Explorations in the dynamics of cognition (pp.
175–194). Cambridge, MA: MIT Press.
Cassirer, E. (1950). The problem of knowledge: Philosophy, science, and history since Hegel.
New Haven, CT: Yale University Press.
Décary, M., & Lapalme, G. (1990). An editor for the explanatory and combinatory dictionary of contemporary French (DECFC). Computational Linguistics, 16 (3), 145–154.
Einstein, A., & Infeld, L. (1938/1966). Evolution of physics: The growth of ideas from early
concepts to relativity and quanta. New York: Simon & Schuster Inc.
Elman, J. L. (2005). Connectionist models of cognitive development: Where next? Trends in
Cognitive Sciences, 9, 112–117.
Elman, J. L. (2004). An alternative view of the mental lexicon. Trends in Cognitive Sciences,
8, 301–306.
Elman, J. L. (1995). Language as a dynamical system. In R. Port & T. van Gelder (Eds.),
Mind as motion: Explorations in the dynamics of cognition (pp. 195–225). Cambridge,
MA: MIT Press.
Feynman, R. P., Leighton, R. B., & Sands, M. (1972). The Feynman lectures on physics. Reading, MA: Addison-Wesley.
Gabora, L., & Aerts, D. (2002). Contextualizing concepts using a mathematical generalization of the quantum formalism. Journal of Experimental & Theoretical Artificial Intelligence, 14, 327–358.
Goodwin, B. (1994). How the leopard changed its spots: The evolution of complexity. New
York: Charles Scribner’s Sons.
Iberall, A. S. (1983). What is ‘language’ that can facilitate the flow of information. Journal of
Theoretical Biology, 102, 347–359.
Iberall, A. S., & Soodak, H. (1987). A physics for complex systems. In F. E. Yates (Ed.), Selforganizing systems: The emergence of order (pp. 499–520). New York: Plenum Press.
Iberall, A. S., Soodak, H., & Hassler, F. (1978). A field and circuit thermodynamics for
integrative physiology II. Power and communicational spectroscopy in biology. American Journal of Physiology, 234, R3-R19.
Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and
biological systems. Cambridge: Cambridge University Press.
Juarrero, A. (1999). Dynamics in action: Intentional behavior as a complex system. Cambridge, MA: MIT Press.
Kant, I. (2000). The critique of judgement. (J. H. Bernard, Trans.). Amherst, N.Y.: Prometheus
Books. (Original work published 1790).
Kercel, S. W. (2003). Endogenous cause, bizarre effects. Evolution and Cognition, 8, 130–
144.
Kleene, S. C. (1950). Introduction to Metamathematics. Princeton, N.J.: D. van Nostrand
Company.
Kugler, P. N., & Turvey, M. T. (1987). Information, natural law and the self-assembly of rhythmic movement. Hillsdale, N.J.: Erlbaum Associates.
Kugler, P. N., & Turvey, M. T. (1988). Self-organization, flow fields, and information. Human
Movement Science, 7, 97–129.
Landauer, T. K. (2002). On the computational basis of learning and cognition: Arguments
from LSA. In B. H. Ross (Ed.), The psychology of learning and motivation (pp. 43–84).
New York: Academic Press.
Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic
analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104, 211–240.
Langacker, R. W. (1987). Foundations of cognitive grammar: Theoretical perspectives, Volume
1. Stanford, CA: Stanford University Press.
Lindley, D. (1996). Where does the weirdness go? Why quantum mechanics is strange, but not
as strange as you think. New York: Basic Books.
Lund, K., & Burgess, C. (1996). Producing high-dimensional semantic spaces from lexical
co-occurrence. Behavioral Research Methods, Instruments, and Computers, 28, 203–
208.
Masataka, N. (1994). Effects of experience with live insects on the development of fear of
snakes in squirrel monkeys, Saimiri sciureus. Animal Behaviour, 46, 741–746.
McNellis, M. G., & Blumstein, S. E. (2001). Self-organizing dynamics of lexical access in
normals and aphasics. Journal of Cognitive Neuroscience, 13, 151–170.
Rosen, R. (1987). Some epistemological issues in physics and biology. In B. J. Hiley & F. David Peat (Eds.), Quantum implications: Essays in honor of David Bohm. (pp. 314–327).
London: Routledge & Kegan Paul.
Rosen, R. (1991). Life itself: A comprehensive inquiry into the nature, origin, and fabrication
of life. New York: Columbia University Press.
Rosen, R. (2000). Essays on life itself. New York: Columbia University Press.
Schütze, H. (1998). Automatic word sense discrimination. Computational Linguistics, 24,
97–124.
Sethna, J. P., Dahmen, K. A., & Myers, C. R. (2001). Crackling noise. Nature, 410, 242–250.
Shinbrot, T., & Muzzio, F. J. (2001). Noise to order. Nature, 410, 251–258.
Thornton, T. L., & Gilden, D. L. (2005). Provenance of correlations in psychological data.
Psychonomic Bulletin & Review, 12, 409–441.
Ulanowicz, R. E. (1997). Ecology, the ascendent perspective. NY: Columbia University Press.
van Geert, P. (1995). Dimensions of change: A semantic and mathematical analysis of learning and development. Human Development, 38, 322–331.
von Neumann, J. (1966). Theory of Self-Reproducing Automata. Urbana, IL: University of Illinois Press. Edited and completed by Arthur W. Burks.
Van Orden, G. C., & Holden, J. G. (2002). Intentional contents and self-control. Ecological
Psychology, 14, 87–109.
Van Orden, G. C., Holden, J. G., & Turvey, M. T. (2003). Self-organization of cognitive performance. Journal of Experimental Psychology: General, 132, 331–350.
Van Orden, G., Holden, J. G., & Turvey, M. T. (2005). Human cognition and 1/f scaling.
Journal of Experimental Psychology: General, 134, 117–123.
Van Orden, G. C., Moreno, M. A., & Holden, J. G. (2003). A proper metaphysics for cognitive performance. Nonlinear Dynamics, Psychology, and Life Sciences, 7, 49–60.
Wagenmakers, E.-J., Farrell, S. & Ratcliff, R. (2005). Human cognition and a pile of sand: A
discussion on serial correlations and self-organized criticality. Journal of Experimental
Psychology: General, 134, 108–116.
Wallman, J. (1979). A minimal visual restriction experiment: Preventing chicks from seeing their feet affects later responses to mealworms. Developmental Psychobiology, 12,
391–397.
Widdows, D. (2003). Geometry and meaning. Chicago, IL: Chicago University Press.
Widdows, D., & Peters, S. (2003). Word vectors and quantum logic: Experiments with negation and disjunction. In R. T. Oehrle & J. Rogers (Eds.), 8th Proceedings of Mathematics
of Language (pp. 141–154). Bloomington, IN: Indiana University.
Winfree, A. T. (1987). When time breaks down: The three-dimensional dynamics of electrochemical waves and cardiac arrhythmias. Princeton, NJ: Princeton University Press.