Artificial Intelligence in John Searle's Paper: Minds, Brains, and Programs


Searle's paper "Minds, Brains, and Programs" was originally published in Behavioral and Brain Sciences in 1980. It has become one of modern philosophy's (and, more broadly, cognitive science's) most disputed and discussed pieces because of the nature of the argument it presents. In the paper, John Searle seeks to dispute the claim that artificial intelligence, in the form of computers and programs, does, or at the most basic level could one day, think for itself; essentially, it is a refutation of the idea that computers or programs can actually "understand" in the same way that a human can. The argument is formulated around two distinct claims: (1) intentionality in humans is a product of causal features of the brain, i.e., minds are a product of brains; and (2) instantiating a computer program is never by itself a sufficient condition of intentionality, i.e., just because a computer or program is given the resources to understand does not mean that it can ever actually understand.

Before examining these claims further, it is important to illustrate exactly what type of artificial intelligence Searle is addressing. There are two different types of AI: "weak AI" and "strong AI." The layman's understanding of weak AI, or "cautious AI" as some call it, is that the computer or program is simply used as a tool, something that can assist the human mind in a more powerful way. This essentially reduces weak AI to a resource for simulating mental abilities, useful in fields such as psychology or medicine where processes like hypothesis testing are important, since a computer can simulate such processes better than a human can. It is important to...

[...]

...robot that performs its various motor functions. Still, Searle argues, the user understands nothing beyond the scope of symbol manipulation. Running through the program does not give rise to any mental state of a meaningful kind. Searle also argues that in this case there is a tacit concession in the argument for strong AI: the Robot Reply suggests that cognition and understanding in computers are not solely a matter of manipulating symbols, contrary to what strong AI actually supposes. The tacit concession is a result of adding a set of causal relations to the outside world (http://www.iep.utm.edu/chineser/#SH2b).

Works Cited
Searle, John. "Minds, Brains, and Programs." Behavioral and Brain Sciences, 1980, p. 417.
http://dictionary.reference.com/browse/turing+test?s=ts
http://psych.utoronto.ca/users/reingold/courses/ai/turing.html
http://www.iep.utm.edu/chineser/
http://www.iep.utm.edu/chineser/#SH2b
