The requirement that an epistemology be learnable is both basic and essential, since in specifying conditions for claims to count as reliable knowledge, a theory of knowledge implicitly embodies a theory of the powers of the mind, of which an empirically plausible theory of learning is a necessary and important part (Churchland, P.).
Hence, a clear distinction was drawn between observational and theoretical knowledge statements. Following this development, a method was devised whereby theoretical inferences that were not themselves directly justified by observation could be considered as admissible knowledge statements.
The solution was to have a framework in which inferential knowledge statements would be secured by induction from empirical statements. This approach to the meaning of theoretical terms was also thought to be appropriate for dealing with the unobservable posits of science, such as quarks, curved space-time, centres of gravity, and so on, which are at scales that are either far below, or are otherwise invisible to, what can be observed during the course of our everyday interactions with the world.
This innovation meant that instead of trying to justify knowledge claims by deducing them directly from empirical foundations, with the help of the new mathematical logic deductive relations could work the other way around (Russell). In other words, observation statements would be deduced from the statements of a theory under specified experimental conditions, and if the observations subsequently made matched up with the predictions, then the theory in question could be taken as confirmed.
This approach was known as the hypothetico-deductive mode of justification, or the deductive-nomological theory of explanation (Hempel). Finally, a theory of meaning - essential for distinguishing genuine scientific statements from pseudoscientific ones - was derived from this conception, and was known as the verification theory of meaning. As such, value claims were deemed to be empirically unverifiable, and therefore no more than subjectively motivated theoretical claims, unknowable either directly or derivatively (Ayer). Nevertheless, logical empiricism was fraught with complex philosophical and technical problems.
Not least of these is the problem of induction, which was first recognised by Hume, and later, in the context of logical empiricism, by Popper. In terms of phenomenalist foundationalism, there is nothing in the way of evidence that would count as support for the principle of induction itself. To sustain the inference that some principle of induction holds requires the assumption of an additional premise, such as that nature is uniform, or that the future is similar to the past in certain ways. Thus, to justify belief in some principle of induction beyond what is provided by past and present observations requires the circular and invalid assumption of such a principle, since it cannot justifiably follow from the sort of epistemology provided by foundationalism.
Popper attacked as simplistic and unscientific the belief that hypotheses could be confirmed or verified, for it is always possible to find confirming instances of any theory, if confirmations are all that are sought. For Popper, theories that are properly scientific are conceivably refutable; that is, they are not the result of confirming observations, but rather are tentative conjectures or proposals that arise from an existing and uncertain frame of reference, or framework of expectations and interests, and which, to the extent that they survive critical empirical tests, take us forward to a better understanding, or theory, of reality.
On this view, the growth of knowledge proceeds not by accumulating instances of confirmation, but by justifying knowledge in a more indirect way, through a process of conjecture and refutation, where falsified theories are replaced by new and hopefully better conjectures that meet further tests with greater success, and so on. Popper called this method, by which a solution to a problem is approached, the method of trial and error, and in avoiding the problem of induction, his theory of knowledge shows itself to be synonymous with a general theory of learning, which has implications for how an epistemology can itself come to be known or justified.
Drawing on work by Duhem, Quine observed that hypotheses are never tested in isolation. In other words, every hypothesis is accompanied by a number of auxiliary hypotheses, or assumptions, and when tested any one of these could be false. Thus, there is no such thing as a crucial or absolutely conclusive once-and-for-all falsifying experimental result, as a test for any hypothesis is relative to the background assumptions involved. Hence, the empirical consequences that follow from the testing of a hypothesis are consequences of the whole theoretical network that supports the hypothesis in question. Another complexity was the realisation that observation statements on their own tell us very little about the empirical world, as they too are always embedded in a much wider network of statements, many of which have no direct connection with our senses.
Thus, it is whole theories that are the basic units of meaning, and this is referred to as the network theory of meaning (Churchland, P.). Consequently, embedded as they are in theoretical wholes, all observations are theory-laden. This implies that what we observe is not privileged as a source of knowledge, in the sense that it is incorrigible and immune from revision. Hence, observations cannot be the absolute foundation of science, and of reliable knowledge. We cannot, therefore, appeal to empirical adequacy as the sole criterion of epistemological adequacy, as claimed by logical empiricism.
Thus, there is no reliable or certain a priori source of knowledge, or first philosophy, which functions as some Archimedean point outside of science from which scientific theories can be pronounced as acceptable. Rather, our knowledge of the world is made up of a richly interconnected whole, or seamless web, of theoretical statements. One consequence of this view is that science is self-conscious common sense, and that when we come to alter our theories in the light of experience, we use our best existing scientific knowledge to assist us with the process of revision or replacement.
Thus, we use our best existing science to bootstrap our way to better theories that are more comprehensive, powerful, elegant, simple, and so on (Churchland, P.). If the idea of having an indubitable foundation for knowledge is untenable, and if varieties of relativism are equally so - if for no other reason than that under relativism the issue of knowledge justification either lapses entirely, or is so weakened that little epistemic value remains, making it problematic why some theories are much better than others at solving problems, making predictions, or fulfilling expectations - then consideration of some form of coherence theory of knowledge appears unavoidable, and indeed possible (Williams).
These virtues entail considerations of simplicity, consistency, conservatism, comprehensiveness, fecundity, explanatory unity, refutability, and learnability, which collectively constitute features of coherence justification (see Quine and Ullian). The value of these particular virtues may be outlined briefly.
Conservatism is important because the less rejection there is of knowledge that we have sound reason to accept, the more plausible the hypothesis in question, all things being equal. Comprehensiveness or generality acquires its virtue from explanatory breadth, that is, by explaining more rather than fewer phenomena, and in this respect is closely related to fecundity, which measures the range of phenomena that a theory can account for. Comprehensiveness is also related to explanatory unity, for theories that bring an underlying conceptual link or commonality to the understanding and solution of a problem do better at generalising this knowledge from past experience to new cases in the future.
Simplicity or economy functions by requiring the least explanatory apparatus to do the job of accounting for the widest range of phenomena possible, and in this regard, it has a close relationship with the virtue of comprehensiveness. Refutability is a virtue because without it a theory cannot be said to predict or explain anything. Its value is measured by the cost of retaining a theory in the face of falsifying evidence.
The virtue of learnability requires that theories cohere with our best scientific accounts of human cognition and how we are able to acquire knowledge in the first place, and that these accounts are not inconsistent with other reliable bodies of knowledge that go to make up our global scientific world view. In this regard, the virtue of consistency can be viewed as being the key to coherence (see Quine and Ullian). The superempirical virtues are therefore a measure of the global excellence of a theory, and are relevant to an estimate of its comparative advantages and disadvantages over other contenders.
Furthermore, the strategies and criteria that the brain uses for recognising and organising information, that is, for sifting out noise from meaningful information, rest on values such as simplicity, coherence, and explanatory power. On this view, theories cannot be measured against each other in any absolute sense — it is only possible to compare the relative merits, or respective global virtues, of competing accounts, so that a judgement can be made that one theory is better than, or more coherent than, another.
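This kind of comparative, all-things-equal judgement can be illustrated with a toy weighted-scoring sketch. The virtue names, weights, and scores below are invented for illustration; nothing here is drawn from the authors cited.

```python
# Illustrative toy: comparative theory preference as a weighted score
# over coherence virtues. Virtue names, weights, and scores are hypothetical.

def prefer(t1, t2, weights):
    """Return the name of the theory with the higher weighted virtue score."""
    def score(theory):
        return sum(weights[v] * theory["virtues"][v] for v in weights)
    return t1["name"] if score(t1) >= score(t2) else t2["name"]

# Equal weights model the "all other things being equal" clause.
weights = {"testability": 1.0, "tidiness": 1.0, "conservatism": 1.0}

T1 = {"name": "T1", "virtues": {"testability": 0.9,   # more readily testable
                                "tidiness": 0.6,      # fewer loose ends
                                "conservatism": 0.7}} # squares with prior belief
T2 = {"name": "T2", "virtues": {"testability": 0.5,
                                "tidiness": 0.8,
                                "conservatism": 0.6}}

print(prefer(T1, T2, weights))  # T1 (scores 2.2 vs 1.9)
```

In practice, as the text notes, no single number settles the matter; the sketch only makes vivid that the comparison is relative and multi-dimensional.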
In practice, this is a difficult and complicated matter; however, the following set of rules for theory preference can be used as a mechanism for facilitating the selection of the best from a number of competing explanatory theories. If T1 and T2 are competing theories in need of comparative evaluation, and all other things are equal, we should prefer T1 to T2 if:

1. T1 is more readily testable than T2;
2. T1 leaves fewer messy unanswered questions behind than T2;
3. T1 squares better with what we already have reason to believe than does T2 (Lycan).

Since an epistemology is itself a set of knowledge claims, our understanding of it, and of science itself, are therefore corrigible, and questions as to how it is that we can come to acquire knowledge and to revise our convictions are — to the extent that human beings are counted as part of the physical universe — at bottom empirical questions about the natural world.
Without a first philosophy or secure foundation for knowledge, an epistemology must therefore embody the most powerful and sophisticated theories of learning and knowledge acquisition that our best sciences provide, for justifying and explaining, in a self-referential way, how scientific knowledge is possible. Hence, in specifying the conditions for knowledge justification, an epistemology implicitly embodies a theory of mind (Evers and Lakomski).
On this view, epistemology becomes naturalised, and falls into place within the wider fabric of our scientific knowledge, as a chapter of psychology. Consequently, there is a reciprocal containment of epistemology in natural science, and of natural science in epistemology (Quine). In specifying conditions for knowledge claims to count as justified, an empirically plausible theory of perception, learning, memory, representation, and cognition is essential.
In the case of the foundational epistemologies on which empiricist conceptions of knowledge and science have been based, the processes of learning and perception were presumed to occur via the receipt of sensory impressions, and cognition was assumed to be a matter of the logical manipulation of these impressions (Evers). Thus, rationality could be represented as a set of formal rules for the addition, deletion, and manipulation of belief statements. In such a conceptualisation, the ultimate virtue of a theory rested in truth (Churchland, P. S. and Churchland, P. M.).
Recent advances in fields such as computational neuroscience, cognitive neurobiology, and connectionist artificial intelligence have generated novel understandings of the fundamental principles of brain structure and function, and along with these developments, revisions to our conventional theories of knowledge. The nature of the discoveries in these fields is such that work in the philosophy of science is now no longer able to proceed without their input, for new insights into the principles of brain representation and computation have consequences for the whole enterprise of epistemology itself.
This reformulation in our understanding should not be too surprising, for it has occurred before in the history of philosophy.
Indeed, the growth and evolution of knowledge itself has driven this transformation (Hacking). For instance, in the seventeenth century, ideas were seen to be the objects that linked the Cartesian ego (the internal world of subjective experience) with res extensa (the outside world), and as can be seen with the development of theories of knowledge, these have since been replaced with the sentence as the thing that represents reality in a body of knowledge. In other words, explicit knowledge is seen to be codifiable, or expressible in some symbolic representational form, such as the spoken or written word.
The sentential view of knowledge has been so influential in twentieth century epistemology that most researchers in the field of artificial intelligence (AI) have modelled their computational programs on the assumption that the administration of intelligent behaviour consists of the manipulation of a sequence of symbols according to a set of rules. Hence, on this view, human intelligence, adaptation, and learning consist of appropriate changes or updates to our store of symbolic representations, or beliefs, as a function of experience. However, classical AI systems built on this assumption have run into persistent difficulties. These include failure to emulate realistically the cognitive and behavioural skills of humans, and other non-linguistic animals, in effortlessly recognising and responding to patterns embedded in complex and noisy stimulus fields; and the brittleness and inflexibility that AI systems manifest in coping satisfactorily with imperfect, partial, or ambiguous information.
Furthermore, classical AI systems have been unable to accommodate the subtlety and complexity of context-dependent knowledge, which in effect has limited them to very restricted and narrow domains of application (Bereiter). These difficulties are not simply a reflection of the great complexity and scale of the task. Rather, they stem from conceptual and methodological considerations. The co-evolution of the research disciplines that now inform the brain sciences has been such that cognitive science can now be said to possess a presumptive understanding of how the brain works.
This includes an understanding of how the brain represents and processes information about the general features of the world, of how fleeting information about the here and now, and time and space is represented and processed, of how complex but coherent motor behaviour is generated, and of how the brain can modulate its own cognitive activities as a fluid and changing function of current interests and salient background information.
However, of most significance, the new cognitive and brain sciences now furnish a coherent account of what it is for the brain to have and deploy a conceptual framework in the ongoing business of perceptual recognition and the guidance of practical behaviour (Churchland, P.). Connectionist models of the mind-brain are of philosophical and scientific interest because they make no use of the familiar sentential framework of cognition, and no use of the familiar framework of deductive and inductive inference. Instead, the fleeting features of the world are represented by neuronal patterns of activation or excitation, which tend to fall into one or other of the categories that the brain has acquired from experience.
Hence, the theories, or knowledge, that the brain acquires about the world in this way are entirely sub-linguistic or sub-symbolic (Smolensky). This naturalistic view of knowledge has a number of important features, which shed light on issues of concern to the theory and practice of knowledge management. Firstly, the brain's representations of the general features of the world take the form of prototypical categories, acquired from experience and embodied in its configurations of synaptic connections. Secondly, these representations, or prototypical categories, can be activated by inputs that incorporate only a small part of the presumptive information that they embody. Such vector completion provides the brain with a capacity for perceptual closure, where it fills in and completes information that is missing in sensory input from the world.
The brain is also able to perform this task swiftly, as information storage and processing are not separated, as is the case with digital computers. Massively interconnected parallel distributed processing (PDP) systems are content-addressable, and are therefore able to gain very rapid access to the total store of information embodied in a representation, even if the input pattern is distorted or only a partial fragment (Churchland, P.). While having adaptive advantages and providing a basis for anticipation, prediction, and speculation about events in the world, this ampliative capacity of the brain also has a drawback, for in carrying substantial interpretive and predictive content, representations are fallible, and are subject to empirical criticism and correction.
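The idea of content-addressable, vector-completing memory can be conveyed with a minimal Hopfield-style sketch. This is a deliberately tiny toy, not a model of real neural tissue, and the stored pattern is an arbitrary example.

```python
# Minimal Hopfield-style sketch of content-addressable memory: a stored
# bipolar (+1/-1) pattern is recovered from a corrupted fragment.

def train(patterns, n):
    # Hebbian weights: strengthen connections between co-active units.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, x, sweeps=5):
    # Repeatedly update each unit from its weighted inputs until settled.
    x = list(x)
    for _ in range(sweeps):
        for i in range(len(x)):
            s = sum(w[i][j] * x[j] for j in range(len(x)))
            x[i] = 1 if s >= 0 else -1
    return x

stored = [1, 1, -1, -1, 1, -1]      # the "prototype" held in memory
w = train([stored], len(stored))
noisy = [1, -1, -1, -1, 1, -1]      # distorted input: one unit flipped
print(recall(w, noisy) == stored)   # True: the network completes the vector
```

Note that retrieval here addresses memory by content rather than by location: any sufficiently similar fragment settles into the stored pattern.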
In this regard, prototype activation is akin to an inductive argument, in that there is more information contained in the conclusion than in all the antecedent premises combined. Hence, unlike deductive arguments, in which the truth of the premises makes a false conclusion impossible, the possibility for error is ever present in the epistemological nexus between reality and inductive representation (see Giere).
However, this feature of prototype representations is consistent with what is known about learning and the growth of knowledge generally (Popper). Thirdly, learning occurs when the brain acquires - through repeated exposure to varied instances of relevant environmental stimuli, and a steady calibration of its myriad synaptic connections through the feedback of error - a representation about something in the world, which when activated by a relevantly similar input produces an appropriately finely-tuned response.
The test for a brain educated in this way is whether it can respond correctly to some new set of relevant inputs, which are similar in their general statistical properties, or features (Crick). Thus, intelligence may be construed as being more than merely a matter of responding appropriately to a changing environment. Rather, an intelligent system, be it an individual, social, or organizational entity, is one that is capable of exploiting information and energy in a way that increases the information it embodies, and possibly the internal physical ordering and organization that it has, in relation to its environment.
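The error-driven calibration and subsequent test on unseen inputs described above can be sketched with a single artificial neuron trained by the classic perceptron rule. The data, the rule to be learned, and the learning rate are all invented for illustration.

```python
# Toy sketch of error-driven "synaptic" calibration: a single artificial
# neuron learns an invented rule (output 1 when the first input exceeds
# the second) from examples, then is tested on an input it has never seen.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(w, b, x)  # feedback of error
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                    # steady calibration of weights
    return w, b

samples = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.2, 0.7], 0),
           ([0.1, 0.9], 0), ([0.6, 0.2], 1), ([0.3, 0.8], 0)]
w, b = train(samples)

# The test: a new input with similar statistical properties to the training set.
print(predict(w, b, [0.7, 0.4]))  # 1
```

The trained weights respond correctly to the unseen input because it shares the general statistical structure of the training examples, which is the sense of "test" used in the text.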
Hence, on this view, learning turns out to be an essential feature of intelligence (see Churchland, P.). It therefore follows from this that intelligent organizations must also be organizations that learn. From an epistemological point of view, cognition therefore consists of the activation of recurrent pattern processing vectors, which enable the brain to recognise some situation, which may be otherwise partial, unfamiliar, ambiguous, puzzling, novel, or problematic in some way, as an instance of something that is well represented by an existing prototype, and its associated categories.
Activation entails completing input vectors that are incomplete or partial, and in the process imposing some structural order on the content of incoming information. In triggering more information than is present in the input alone, prototype activation enables the brain to construct an anticipatory and speculative hypothesis, to make some sort of adaptive sense of the case at hand in its particular environmental context, and to predict aspects of the situation that are not yet perceived, so that it can respond accordingly (see Churchland, P.).
The brain's capacity for massively parallel distributed processing permits acquired prototype activation vectors that have some complex of relational or structural features in common to cluster together, or unite, in the same region of some high-dimensional and abstract representational space to form a prototypical hot spot, which makes the brain extremely sensitive to similarities along all relevant stimulus dimensions.
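Categorisation by proximity to such learned hot spots can be sketched as nearest-prototype matching in a small representational space. The prototype vectors, their labels, and the input are invented examples.

```python
# Toy sketch of categorisation by proximity to learned prototype "hot spots"
# in a low-dimensional representational space.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

prototypes = {
    "dog":  [0.9, 0.8, 0.1],   # hypothetical cluster centres
    "fish": [0.1, 0.2, 0.9],
}

def categorise(x):
    # An input activates whichever prototype region it falls nearest to.
    return min(prototypes, key=lambda name: distance(x, prototypes[name]))

print(categorise([0.8, 0.7, 0.2]))  # dog
```

Sensitivity to similarity falls out of the geometry: inputs close along the relevant dimensions land in the same prototype region.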
Hence, the virtues of simplicity and conceptual unity play an important and related role in any adequate epistemology, for they facilitate superior generalisation by generating the simplest possible hypotheses about what structures might lie hidden in, or behind, various input vectors (Churchland, P.). From this perspective, the unit of knowledge, and of understanding, is something that is not represented in the brain in the form of an explicit set of codifiable symbols, such as a set of sentences, statements, or propositions about the world.
On the prototype model outlined here, knowing or comprehending something consists of having a grasp of certain paradigmatic kinds of situations and processes, and of possible variations thereof. Hence, acquiring knowledge and understanding in some domain entails becoming familiar with various contextual states and causal processes, which together constitute the features identified by the relevant learned prototype. On this view, the evaluation of knowledge is therefore not a matter of logical consistency with observation sentences, or inductive inference or confirmation therefrom, as demanded by logical empiricism.
Rather, the virtue of a theory in prototypical form rests in the many uses to which it is put.
Thus, since a theory in this sense is a collection of perceptual, explanatory, manipulative, and other associated abilities embodied in the synaptic configurations of the brain, its evaluation becomes a pragmatic matter, rather than a purely logical or formal one (see Churchland, P.). Consequently, how any given theory is evaluated will depend on the context of its application, the aims and interests of the cognitive agents concerned, and the kinds of solutions that are thought to be valuable, useful, or plausible to the case at hand, which together boil down to an overall goodness-of-fit in satisfying a complex set of soft constraints (see Rumelhart).
Since there is a range of dimensions along which individuals are bound to differ in any given instance, evaluation will necessarily entail a complex process of assessment and negotiation, in which the superempirical virtues will unavoidably play a crucial guiding role in settling on the best global account of the situation in question.
One of the insights of the new cognitive science is that powerful non-symbolic and distributed representations, in the form of appropriately trained sensory-motor neural maps in the brain, underlie much human expertise, knowledge, and judgement. On this view, symbolic or codified forms of knowledge, such as language, become a rather superficial and conventional representation of a way of understanding in some problematic context. In this respect, linguistic or symbolic formulations of knowledge reduce the richness of experience into more compact and sparse forms of representation.
Hence, the linguistic representations of valid law-like generalisations in a scientific theory can be seen to function as compression algorithms, which economically condense vast amounts of information into a single symbolic formula, or a collection of such formulae (Evers and Lakomski). Thus, symbols are parsimonious semantic representations of one or more kinematically and dynamically richer general prototypes that occur in the brain.
As such, they are well suited as mediums of exchange in complex institutional contexts, which depend on external and public representations for sharing, extending, and enhancing theoretical and practical capacities (Churchland, P.). However, as representations of, and as guides to, practice, the value of compression algorithms diminishes where varied and complex contextual factors predominate (see Evers and Lakomski).
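The compression-algorithm analogy can be made concrete with a toy sketch: a table of observations generated by a simple regularity is regenerated exactly by the one-line law that produced it. The free-fall figures are idealised and purely illustrative.

```python
# Toy sketch of a law-like generalisation as a compression algorithm: one
# hundred "observations" are reproduced by the single short formula.

def law(t):
    return 4.9 * t * t   # d = (1/2) g t^2, with g = 9.8 m/s^2

# One hundred stored data points (time -> distance fallen, in metres).
observations = {t: 4.9 * t * t for t in range(1, 101)}

# The one-line symbolic formula condenses, and regenerates, the whole table.
print(all(law(t) == d for t, d in observations.items()))  # True
```

The formula stands in for the table only where the regularity holds; as the text notes, such compressed representations lose value where messy contextual factors dominate.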
Furthermore, in its spoken and written forms language constitutes a form of extrasomatic memory, through which the collective and accumulated learning of a culture can be effectively passed on from one generation to another (Churchland, P.). Language also reduces the complexity of conceptual structure by pulling together many concepts under one symbol, making it possible to establish increasingly complex concepts, and to use them to think at levels of abstraction that would otherwise be impossible (Damasio, A.).
Thus, linguistic representations may be said to constitute human knowledge in an objective, independent, and collective sense, with which the knowing subject interfaces (Hacking). Nevertheless, human understanding resides primarily and originally within the brain, and therefore an adequate account of this reality is a prerequisite for sustaining a coherent account of knowledge management concepts and practices. Since the body and its sensory machinery are an indispensable frame of reference for mind and cognition, the mind is not just embrained, but is also profoundly embodied (Damasio, A.).
Furthermore, since cognitive representations of any kind are known to induce corresponding physiological responses in the organism that has them (the well-known galvanic skin response (GSR) is an example), thereby creating a sense of the biological self-concept for the creature, feelings associated with subjectivity and emotion constitute an integral component of the machinery of cognition and reason. These feelings qualify our perceptions, modify our comprehensions of the world, and are therefore just as cognitive as any other perceptual or cognitive image or experience.
Embodied cognition is therefore crucial both to practical reasoning and to the normal deployment of explicit and declarative knowledge in making real-life choices and decisions. Hence, having appropriate feelings may be essential for the skillful application of prototypical concepts and theories in complex social and practical situations (see Churchland, P.). Thus, embodied cognition functions to assign different values to the decision options that individuals - embedded in the reality of their physical and social contexts - actually face.
Reason has traditionally been conceived as operating properly only when free of, and in opposition to, emotion. The perspective on mind, cognition, and knowledge afforded by the new cognitive science suggests rather that the opposite relationship would be more appropriate (Damasio, A.). Because embodied prototype activation vectors are responsive to specific stimulus profiles consisting of real-valued elements and features of high dimensionality, which are computed by transforming activation vectors through a series of massively parallel soft constraints to a correspondingly finely-tuned output, cognition and behaviour are intrinsically situated, or context-dependent (see Brown, Collins, and Duguid).
This property of cognition, along with embodied emotions and feelings, confers distinct advantages on organisms that have evolved in contexts where existence is often precarious and the demands for adaptation and survival beckon relentlessly. Success in responding to these demands has required intelligent adaptive organisms, such as humans, to draw in parallel not only on acquired experiential knowledge of a situation, but also on the wider context of acquired social and cultural knowledge (see LeDoux). However, social and cultural knowledge is not just of an explicit declarative kind, as in language and other sociocultural symbols, but is also distributed and manifested in the artefacts, technologies, and arrangements of the surrounding physical and institutional environment, which includes the embodied and embedded brains of other human beings (see, for example, Hutchins). Thus, human knowledge is physically and socially distributed in nature.
Just as humans are situated cogitators and actors, they are also, therefore, situated learners. Hence, the history of our learning as a species is embodied in the development of our cultures, in the form of institutions, technologies, practices, and so on. Thus, as an accumulated set of solutions to existential problems, a culture functions through organised learning and socialisation to shape and structure, in various subtle and context-dependent ways, the internal representational framework, and related patterns of response, which identify particular individuals and social groups as members of that culture, and which characterise the ways in which they live their social lives.
In this light, culture in the widest sense, as it is lived and constructed at a particular historical place and time, can be conceived of as a set of characteristic behavioural and material dispositions, or physically encoded patterns of knowledge, which are embodied in the central nervous systems of the members of a given social group (Evers). The parallel and distributed nature of representation and computation in the nervous system, and its extension and distribution throughout the body and out into the features of the surrounding world, implies a view of cognition and culture as intimately interwoven and related.