Certain assumptions are made about the nature of the books and the shelf. In our example, the books have titles composed of known characters, allowing for alphabetization; the shelf has an ordering (beginning to end, or left to right); the books are objects that can fit onto the shelf and be moved about; and so on.
You, as the sorter, need know nothing about the full nature of a book in order to execute the algorithm — you need only have knowledge of shelf positions, titles, and how titles are ordered relative to one another. This abstraction is useful because the objects involved in the algorithm can easily be represented by symbols that describe only these relevant properties.
These two forms of abstraction are at the core of what enables the execution of procedures on a computer. At the level of its basic operations, a computer is both extremely fast and exceedingly stupid, meaning that the type of task it can perform in which the what is the same as the how is very simple. For a computer to perform the selection sort algorithm, for example, it would have to be described in terms of much simpler primitive steps than the version offered here.
Any complex procedure that a computer performs must be reduced to the primitive operations that a computer can execute, which may require many levels at which the procedure is broken down into simpler and still simpler steps. Imagine that you have a computer with three useful abilities: it has a large number of memory slots in which you can store numbers; you can tell it to move existing numbers from one slot to another; and it can compare the numbers in any two slots, telling you which is greater.
Suppose you want to use this computer to sort your shelf of books. To do so, you must be able to represent the problem in terms that the computer can understand — but the computer only knows what numbers and memory slots are, not titles or shelves. The solution is to recognize that there is a correspondence between the objects that the computer understands and the relevant properties of the objects involved in the algorithm: for example, numbers and titles both have a definite order.
You can use the concepts that the computer understands to symbolize the concepts of your problem: assign each letter to a number so that they will sort in the same way (1 for A, 26 for Z), and write a title as a list of letters represented by numbers; the shelf is in turn represented by a list of titles. If you do this correctly, the computer can execute your algorithm by performing a series of arithmetical operations.
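As a rough illustration of this encoding, consider the following Python sketch (the choice of language and the function names are ours, not part of the example above): titles become lists of numbers, and the sort proceeds using nothing but comparisons and moves of those numbers.

```python
# Illustrative sketch: titles as lists of numbers (A=1 ... Z=26), sorted by a
# "computer" that only knows how to compare and move numbers.

def encode(title):
    """Represent a title as a list of numbers, one per letter."""
    return [ord(ch) - ord('A') + 1 for ch in title.upper() if ch.isalpha()]

def selection_sort(shelf):
    """Selection sort using only comparisons and moves of encoded titles."""
    for i in range(len(shelf)):
        smallest = i
        for j in range(i + 1, len(shelf)):
            if shelf[j] < shelf[smallest]:  # number lists compare element by element
                smallest = j
        shelf[i], shelf[smallest] = shelf[smallest], shelf[i]
    return shelf

shelf = [encode(t) for t in ["Moby Dick", "Emma", "Hamlet"]]
selection_sort(shelf)  # decoding the result gives: Emma, Hamlet, Moby Dick
```

Note that the sorting routine itself never mentions books or letters; it manipulates only the numbers that stand in for them.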
The physical computer can thus solve problems in the limited sense that we imbue what it does with a meaning that represents our problem. It is worth dwelling for a moment on the dualistic nature of this symbolism.
Symbolic systems have two sides: the abstract concepts of the symbols themselves, and an instantiation of those symbols in a physical object. This dualism means that symbolic systems and their physical instantiations are separable in two important and mirrored ways. First, a physical object is independent of the symbols it represents: Any object that represents one set of symbols can also represent countless other symbols.
A physical object and a symbolic system are only meaningfully related to each other through a particular encoding scheme. Thus it is only partially correct to say that a computer performs arithmetic calculations. As a physical object, the computer does no such thing — no more than a ball performs physics calculations when you drop it. It is only when we consider the computer through the symbolic system of arithmetic, and the way we have encoded it in the computer, that we can say it performs arithmetic.
Second, a symbolic system is independent of its representation, so it can be encoded in many different ways. Again, this means not just that it is independent of any particular representation, but of any particular method of representation — much as an audio recording can exist in any number of formats (LP, CD, MP3, etc.). This is a crucial property of algorithms and programs — another way of stating that an algorithm specifies what should be done, but not necessarily how to do it.
This separation of what and how allows for a division of knowledge and labor that is essential to modern computing. Black boxes pervade every aspect of computer design because they employ three distinct abstractions, each offering tremendous advantages for programmers and users.
The final abstraction of modular programming is perhaps its greatest advantage: the how can be changed without affecting the what. This allows the programmer to conceive of new ways to increase the efficiency of the program without changing its input-output behavior.
More importantly, it allows for the same program to be executed on a wide variety of different machines. Most modern computer processors offer the same set of instructions that have been used by processors for decades, but execute them in such a dramatically different way that they are performed millions of times faster than they were in the past.
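A small sketch of this separation, again in Python and with hypothetical function names of our own choosing: two interchangeable implementations share the same input-output contract, so a caller that depends only on the what never notices a change in the how.

```python
# Illustrative sketch: two "hows" behind one "what" (return the books in order).

def sort_shelf_selection(books):
    """One implementation: selection sort, simple but slow for large shelves."""
    books = list(books)
    for i in range(len(books)):
        smallest = min(range(i, len(books)), key=lambda j: books[j])
        books[i], books[smallest] = books[smallest], books[i]
    return books

def sort_shelf_builtin(books):
    """Another implementation: delegate entirely to the language's built-in sort."""
    return sorted(books)

# A caller depending only on the contract cannot tell the two apart.
shelf = ["Walden", "Emma", "Iliad"]
assert sort_shelf_selection(shelf) == sort_shelf_builtin(shelf)
```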
Suppose that your employer has specified what you should do, but not how — in other words, suppose he is concerned only with transforming the start state of the shelf to a desired end state.
You might sort the shelf a number of different ways — selection sort is just one option, and not always a very good one, since it is exceedingly slow to perform for a large number of books.
You might instead decide to sort the books a different way: first pick a book at random, and then move all the books that alphabetically precede it to its left, and all the books that alphabetically follow to its right; then sort each of the two smaller sections of books in the same way. Or, as suggested, you might pay a friend to sort the books — then potentially you would not even know how the sorting was performed. Or you could hire several friends, and assign to each of them one of the simpler parts of the task; you would then have been responsible for taking a complex task and breaking it into more simple tasks, but you would not have been responsible for how the simpler tasks themselves were performed.
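The alternative procedure just described is essentially the quicksort algorithm; a brief, hypothetical Python sketch of it might read as follows, with each recursive call standing in for a friend to whom a smaller section of the shelf has been handed off.

```python
# Illustrative sketch: pick a book at random, split the shelf around it,
# then sort each smaller section the same way (essentially quicksort).

import random

def partition_sort(books):
    if len(books) <= 1:
        return books                         # a section of one book is already sorted
    pivot = random.choice(books)             # "first pick a book at random"
    left  = [b for b in books if b < pivot]  # books that alphabetically precede it
    same  = [b for b in books if b == pivot]
    right = [b for b in books if b > pivot]  # books that alphabetically follow it
    return partition_sort(left) + same + partition_sort(right)

partition_sort(["Odyssey", "Beowulf", "Dubliners", "Candide"])
# ['Beowulf', 'Candide', 'Dubliners', 'Odyssey']
```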
Black box programming creates hierarchies of tasks in this way. Each level of the hierarchy typically corresponds to a differing degree of complexity in the instructions it uses. Computers, then, have engineered layers of abstraction, each deriving its capabilities from joining together simpler instructions at a lower layer of abstraction.
But each layer uses its own distinct concepts, and each layer is causally closed — meaning that it is possible to understand the behavior of one layer without recourse to the behavior of a higher or lower layer. For instance, think about your home or office computer.
It has many abstraction layers, typically including (from highest to lowest): the user interface, a high-level programming language, a machine language running on the processor, the processor microarchitecture, Boolean logic gates, and transistors.
Most computers will have many more layers than this, sitting between the ones listed. The use of layers of abstraction in the computer unifies several essential aspects of programming — symbolic representation, the divide-and-conquer approach of algorithms, and black box encapsulation. Each layer of a computer is designed to be separate and closed, but dependent upon some lower layer to execute its basic operations.
A higher level must be translated into a lower level in order to be executed, just as selection sort must be translated into lower-level instructions, which must be translated into instructions at a still lower level.
The hierarchy of a computer is not turtles all the way down: there is a lowest layer that is not translated into something lower, but instead is implemented physically. In modern computers this layer is composed of transistors, minuscule electronic switches with properties corresponding to basic Boolean logic.
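To give a flavor of what that lowest layer looks like, here is a hypothetical Python sketch in which the comparison used by the sorting layers above is built out of nothing but simulated Boolean gates, the operations that transistors physically implement.

```python
# Illustrative sketch: simulated gates, and a two-bit "greater than" built from them.

def NOT(a):    return 1 - a
def AND(a, b): return a & b
def OR(a, b):  return a | b

def greater_1bit(a, b):
    """a > b for single bits: true only when a is 1 and b is 0."""
    return AND(a, NOT(b))

def greater_2bit(a1, a0, b1, b0):
    """a > b for two-bit numbers, composed entirely of the gates above."""
    high_wins = greater_1bit(a1, b1)
    high_ties = NOT(OR(AND(a1, NOT(b1)), AND(NOT(a1), b1)))  # a1 equals b1
    return OR(high_wins, AND(high_ties, greater_1bit(a0, b0)))

assert greater_2bit(1, 0, 0, 1) == 1  # 2 > 1: the kind of comparison the sort relies on
```

In an actual processor these gates are transistor circuits rather than functions, but the logical relationships are the same.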
As layers are translated into other layers, symbolic systems can thus be represented using other symbols, or using physical representations. The perceived hierarchy derives partially from the fact that one layer is represented physically, thus making its relationship to the physical computer the easiest to understand. We might describe a running program, for instance, as a pattern of changing voltages in its transistors or as a sequence of machine-language instructions. Each is an equally correct way of interpreting what the computer does, as each imposes a distinct set of symbolic representations and properties onto the same physical computer, corresponding to two different layers of abstraction.
The executing computer cannot be said to be just ones and zeroes, or just a series of machine-level instructions, or just an arithmetic calculator, or just opening a file, because it is in fact a physical object that embodies the unity of all of these symbolic interpretations. Any description of the computer that is not solely physical must admit the equivalent significance of each layer of description. The concept of the computer thus seems to be based on a deep contradiction between dualism and unity.
A program is independent of the hardware that executes it; it could run just as well on many other pieces of hardware that work in very different ways. But a program is dependent on some physical representation in order to execute — and in any given computer, the seemingly independent layers do not just exist simultaneously, but are in fact identical, in that they are each equivalent ways of describing the same physical system.
More importantly, a description at a lower level may be practically impossible to translate back into an original higher-level description. Returning again to our sorting example, suppose now that a friend hires you to do some task that his boss asked him to perform, handing it to you only as a list of simple steps, without telling you what the task is for. Even if you are able to figure out that, say, you are doing some kind of sort, it could be impossible to know whether you are sorting books rather than addresses or names. In the computer, then, a low-level description of a program does provide a causally closed description of its behavior, but it obscures the higher-level concepts originally used to create the program.
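A hypothetical sketch of how this loss of context looks in practice: the routine below sorts lists of numbers, and nothing about it reveals whether those numbers encode book titles, postal codes, or something else entirely; that meaning exists only in the encoding chosen at a higher level.

```python
# Illustrative sketch: at this level there are only numbers to sort.

def sort_numbers(rows):
    """Sort lists of integers; the only description available at this level."""
    return sorted(rows)

# Two very different high-level tasks arrive here looking the same:
titles = [[13, 15, 2, 25], [5, 13, 13, 1]]   # "MOBY", "EMMA" encoded as A=1 ... Z=26
codes  = [[9, 4, 1, 0, 4], [0, 2, 1, 3, 8]]  # the digits of two postal codes
sort_numbers(titles)
sort_numbers(codes)
```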
One may very likely, then, be unable to deduce the intended purpose and design of a program, or its internal structure, simply from its lower-level behavior. Since the inception of the AI project, the use of computer analogies to try to describe, understand, and replicate mental processes has led to their widespread abuse. Typically, an exponent of AI will not just use a computer metaphor to describe the mind, but will also assert that such a description is a sufficient understanding of the mind — indeed, that mental processes can be understood entirely in computational terms.
One of the most pervasive abuses has been the purely functional description of mental processes. In the black box view of programming, the internal processes that give rise to a behavior are irrelevant; only a full knowledge of the input-output behavior is necessary to completely understand a module.
Thus Turing said that a computer that passes the test would be regarded as thinking, not that it actually is thinking, or that passing the test constitutes thinking. The implicit idea of the Turing Test is that the mind is a program, and that a program can be described purely in terms of its input-output behavior. But this precept is based on a crucial misunderstanding of why computers work the way they do.
To be sure, some programs can be defined by what output they return for a particular input. For example, our sorting program would always return a sorted shelf when given an unsorted shelf. However, many other computer programs cannot be described without referring to how they work. Given a program you did not create, attempting to completely explain its behavior without referring to its internal structure is exceedingly difficult and likely impossible unless the designer has provided you with its specification.
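Our sorting program is an example of the first kind: its behavior can be captured by a purely input-output check. The hypothetical Python sketch below states the contract (the output must be an ordered rearrangement of the input) without saying anything about how any particular implementation achieves it.

```python
# Illustrative sketch: a black-box, input-output description of sorting.

from collections import Counter

def satisfies_sort_contract(unsorted_shelf, sorted_shelf):
    """True if the output is an ordered rearrangement of the input."""
    same_books = Counter(unsorted_shelf) == Counter(sorted_shelf)
    in_order = all(a <= b for a, b in zip(sorted_shelf, sorted_shelf[1:]))
    return same_books and in_order

# Selection sort, quicksort, or a hired friend: any of them passes this check,
# which is precisely what makes it a black-box description.
assert satisfies_sort_contract(["Iliad", "Emma"], ["Emma", "Iliad"])
```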
These suggestions reveal a troublingly low standard for our interactions with other beings, as the robots were created not so much to be social as to elicit socialized responses from humans. But more than believing that their mimicry makes them sufficient human companions, the makers of socialized robots often state that their creations actually possess human traits.
Suppose that the mind is in fact a computer program. If we have some computer program whose behavior can be completely described as if it were a black box, such a description does not mean that the box is empty, so to speak. The program must still contain some internal structures and properties. So even if we possessed a correct account of human mental processes in purely input-output terms (which we do not), such an external description by definition could not describe first-person experience.
The Turing Test is not a definition of thinking, but an admission of ignorance — an admission that it is impossible to ever empirically verify the consciousness of any being but yourself. It is only ever possible to gain some level of confidence that another being is thinking or intelligent. So we are stuck measuring correlates of thinking and intelligence, and the Turing Test provides a standard for measuring one type of correlate.
Much artificial intelligence research has been based on the assumption that the mind has layers comparable to those of the computer. Under this assumption, the physical world, including the mind, is not merely understandable through sciences at increasing levels of complexity — physics, chemistry, biology, neurology, and psychology — but is actually organized into these levels.
Moreover, much work in AI has assumed that the layers of the mind and brain are separable from each other in the same manner that the computer is organized into many layers of abstraction, so that each layer can be understood on its own terms without recourse to the principles of lower levels. If this notion is true, then the processes that give rise to the mind must consist of some basic rules and procedures implemented in the brain.
The mind, then, is a program, and the brain is but a computer upon which the mind is running. In this understanding, the brain must contain some basic functional unit whose operations enable the implementation of the procedures of the mind.
For those AI researchers interested in actually replicating the human mind, the two guiding questions have thus been: (1) What organizational layer of the mind embodies its program? and (2) What functional unit of the brain implements it? But when closely examined, the history of their efforts is revealed to be a sort of regression, as the layer targeted for replication has moved lower and lower.
The earliest AI efforts aimed for the highest level, attempting to replicate the rules underlying reason itself. It should be noted that computers are actually very limited. All they can do is transform inputs into outputs. While any computing system can perform any computation (assuming sufficient time and memory), computation alone does not enable the system to act. To do anything, a computer must interact with non-computer devices in its environment—typically, screens, keyboards, mice, printers, speakers, etc.
A robot is not just a computer; it is a computer attached to various sensors and actuators so that it can do things. Universally, computers are attached to other devices in order to do anything more than compute. The conscious mind is not computation. However, this does not mean that computation is not involved with consciousness. Rather, the conscious mind is better understood as something that interacts with the computer than as the computer itself.
In particular, our conscious minds appear to be observing the outputs of our physical brains, which act as computers. What we experience as qualia are thus those outputs, manifested as inputs to our minds. If our minds take input from our brain-computers, do they also provide output? Certainly, we feel as though our minds control our brains and actions. But perhaps this is an illusion. It can be argued that our minds are deluded into believing that they control the brain but are actually simply along for the ride.
However, we know by direct experience that we have conscious minds experiencing qualia. We also know that our brains engage in speech and writing about consciousness and qualia. It seems a highly implausible coincidence that our brains would talk about consciousness without receiving some sort of input from our consciousness. As such, our brains must be under some form of control by the conscious mind. We experience a limited degree of understanding and control over our own minds.