Rating: Summary: The mind as a hybrid of neural net and symbol processor Review: The Algebraic Mind is a technical analysis of what kind of computational device it would take to act like a human mind. What are the building blocks of the mind, and how can they be implemented in a brain? Interweaving the lessons of the two traditions of cognitive science (symbol processing and connectionist networks), Gary Marcus concludes that connectionist networks are the right approach, but that current designs are not adequate. In particular, Marcus shows the limitations of back-propagation algorithms and of multilayer perceptron networks that have no initial structure and must learn everything from experience. This, he points out in the preface, has led others in the field to mistakenly assume he is anti-connectionist in general. Quite the opposite: it is here that the originality of his proposal lies. Rather than abandoning connectionism, Marcus proposes a compromise, a growth path to a new kind of connectionist network, one that can also act like a symbol processor. For example, back propagation and the similar learning algorithms used in current neural networks (multilayer perceptron models using multiple nodes to represent a variable) simply do not allow these networks to generalize abstract relations freely from experience the way biological brains can in certain circumstances. Marcus argues that such free generalization is essential to human thought, yet a serious problem for current networks. A second limitation of current networks is in robustly representing complex relations between bits of knowledge. A third key limitation Marcus identifies is that they are generally unable to keep track of individuals separately from kinds. Marcus explores how these limitations of current connectionist networks play out in a variety of real problems, such as linguistic inflection, language learning, object permanence, and object tracking.
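The free-generalization problem the review describes can be made concrete with a toy sketch (my own illustration, not an example from the book): a one-layer network with one-hot inputs is trained by the delta rule to compute identity on items 0-3. Because the input node for item 4 is never active during training, the weights leaving it are never updated, so the learned mapping does not extend to the unseen item, even though "identity" is trivial to state as a rule over a variable.

```python
# Toy illustration of "training independence": weights from an input
# node that is never activated during training never change, so the
# network cannot freely generalize the identity relation to that item.

def train_identity(n_items=5, n_train=4, epochs=200, lr=0.5):
    # w[i][j]: weight from input node i to output node j, all start at 0
    w = [[0.0] * n_items for _ in range(n_items)]
    for _ in range(epochs):
        for item in range(n_train):          # items 0..3 only
            x = [1.0 if i == item else 0.0 for i in range(n_items)]
            y = [sum(x[i] * w[i][j] for i in range(n_items))
                 for j in range(n_items)]    # linear output
            t = x                            # identity target
            for i in range(n_items):
                for j in range(n_items):
                    w[i][j] += lr * x[i] * (t[j] - y[j])  # delta rule
    return w

w = train_identity()
print(round(w[0][0], 2))  # trained item: weight has converged near 1.0
print(w[4])               # untrained item 4: weights still all 0.0
```

The point is not that this tiny network is a fair stand-in for real connectionist models, only that the weight-update rule touches nothing outside the training set's active nodes, which is the mechanical core of Marcus's argument.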
Some might ask, "are these problems really so relevant to our understanding of ourselves?" Although these questions may seem technical, they are of vital interest if we are ever to actually build mechanical computational devices that emulate the human mind as well as the "Positronic Brain" on the Starship Enterprise. As the producers of Star Trek would have it (by way of the late science fiction author Isaac Asimov), the mind is most realistically modelled as some sort of "neural network," which distinguishes it from a "traditional digital computer." But what's the difference? At first, the distinction seems obvious. "Neural networks" in their current form are the result of a revolution in cognitive science known as PDP, or parallel distributed processing, architectures. This architecture allows computation to occur in a highly distributed way among many parallel streams at the same time, permitting far more simultaneous activity than an architecture that forces logic to be performed in one (or a few) central places. If the operations of thought are highly parallelized in a mind, then this seems to provide a more efficient way to emulate it than a few programs running on a few CPUs. This explains why we might imagine "neural networks" to be faster and more powerful at emulating the kinds of things a mind does than the way current desktop computers operate. Is this why, even in our science fiction, we think of "neural networks" as so much more likely to produce minds? Is it simply because they are faster? Is it because we are bored with serial computers and imagine that we need something more exotic to model the mind? What are the real issues that distinguish what a "neural network" can do from what a "traditional digital computer" can do? Can they do fundamentally different things? What would a neural network have to be able to do in order to act like a mind?
In-depth technical analyses of these important and fascinating questions (especially the last) are at the core of Gary Marcus' "The Algebraic Mind." Although it is technical cognitive science, it will repay the effort for anyone seriously interested in models of the human mind. This book reveals the boundaries of our current knowledge of the mind by exploring the limits of the best approaches available and offering a path forward.
Rating: Summary: Symbol systems and neural substrates Review: This book tackles one of the most important (and poorly understood) questions in cognitive science: What is the form of human mental representation? Notice that this is nothing short of asking, what's in a mind such that it should think? ..., this question is not a simple one, it has by no means been answered, and there is raging debate in cognitive science as to what its answer is. ... Traditional connectionist networks that learn by back-propagation or related algorithms do not implement symbol systems in the classical sense. That is, they do not perform computations as the execution of explicit abstract rules over an alphabet of symbolic primitives and recursively specified combinations of these symbols, and they do not variablize values. For most connectionist models, this is entirely intentional. Traditional connectionists seek explicitly to build networks that are not symbol systems because they believe that minds just don't work that way, and the evidence they cite is the success of their intentionally sub-symbolic models. In fact, this opinion is the prevailing one in the field of cognitive modeling. What Marcus (and others) are arguing is that what is required is not the elimination of symbol systems as models of cognition, but rather models that implement them on neurally realistic substrates (like models composed of simple processing units that operate in parallel). His argument is cogent, convincing and decidedly well informed. It is for this reason that this book is such an accomplishment.
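What the reviewer means by "variablize values" can be sketched in a few lines (again, my own toy illustration, not the book's): a classical symbolic rule is stated over a variable, so it applies to any filler of that variable, including a token never seen before, with no retraining at all.

```python
# A classical rule is stated over a variable X, e.g. "X -> X X".
# Because the rule mentions only the variable, it applies to any
# filler, even a wholly novel token like "dax".

def reduplicate(x):
    # the rule operates on whatever is bound to X, regardless of content
    return (x, x)

print(reduplicate("dog"))  # ('dog', 'dog')
print(reduplicate("dax"))  # ('dax', 'dax') -- a never-seen token, same rule
```

This trivial generalization to novel instances is exactly what the sub-symbolic networks discussed in the previous review have to earn through training, which is why Marcus presses for connectionist substrates that can implement such rules.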