Neural Networks: A Comprehensive Foundation (2nd Edition)

List Price: $117.00
Your Price: $117.00

Rating: 5 stars
Summary: Great book
Review: I really like this book. It covers a wide range of material and does an excellent job of laying the foundations of the subject. The mathematical descriptions are a little hard to follow because they are terse, but working through them is well worth the effort. The book is a gem. I lost my copy on a bus and bought another because I thought it was that good. It is a recommended text in a course here at Cornell, and easily one of the most valuable books on my bookshelf.

Rating: 5 stars
Summary: Haykin's Neural Networks
Review: Read the other reviews for more details and other viewpoints. Here I assume that you will hire a reputable consultant or tutor, either to translate the book into more or less ordinary English or to teach you the mathematics behind it. Neural networks are important for everybody to understand, because they represent one of the important directions that computers and robotics are taking: learning things. As you move into this book, you'll discover the important categories that such learning machines fall into: learning with a teacher (that is, with examples for the machine to learn from) or without one, also called supervised versus unsupervised learning. There is also learning with or without feedback, including feedforward networks with short-term memory, associative memories, and recurrent networks, which use input-output mappings. Even high school and college students who wonder why they have to learn statistics and probability may be astonished to discover that some of the most effective learning machines are built on them. These fall into categories such as maximum entropy (literally maximizing the entropy), maximum likelihood (the everyday sense of "maximizing likelihood" is only a rough approximation; the mathematical one is much more precise), minimizing the energy (Hopfield networks), minimizing mean square error (literally minimizing the squares of statistical errors, though there is more to it), and so on. Kalman filter-predictors, which I worked on at the Defense Department in the 1980s, fall mostly into this last category. I am currently more interested in combining maximum entropy methods with the others, since my field of logic-based probability (LBP) is closely related to maximum entropy. See some of my other reviews for discussion of several of these topics, including statistics and probability. Haykin and Prentice Hall have done a very good job with this difficult and encyclopedic material.
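
To make the last of those categories concrete, here is a tiny Python sketch of the least-mean-square idea (my own illustration for this review, not code from the book; the data and names are made up): a linear learner that nudges its weights against the instantaneous squared error, the same minimum-mean-square-error principle behind the Kalman machinery I mentioned.

    import numpy as np

    def lms_fit(X, d, lr=0.01, epochs=50):
        # Least-mean-square (Widrow-Hoff) rule: step the weights along the
        # instantaneous gradient of the squared error for each sample.
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for x, target in zip(X, d):
                e = target - w @ x      # instantaneous error
                w += lr * e * x         # w <- w + lr * e * x
        return w

    # Toy usage: recover a linear relationship from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    true_w = np.array([0.5, -1.0, 2.0])
    d = X @ true_w + 0.01 * rng.normal(size=200)
    print(lms_fit(X, d))  # should be close to true_w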

Rating: 5 stars
Summary: Well suited for teachers and undergraduates...
Review: There isn't much more to say about the book: if you have a strong grounding in mathematical analysis and you love neural networks, then you have found your book. It was hard at the beginning, so I had to brush up on my mathematical analysis before the "puzzle" slowly took shape. All the algorithms are introduced with clear and rigorous mathematical theory. I think it is well suited for teachers and undergraduates.

Rating: 2 stars
Summary: Not for programmers
Review: This book could be good for electrical engineers or physicists interested in the field, but I really would not recommend it to researchers with a background in computer science. The notation and overall presentation make the material naturally harder for us to read and follow.

Rating: 4 stars
Summary: A reliable Neural Network reference book
Review: This book is the recommended text for a neural networks course. Many of my classmates initially regretted buying it because they found it very unapproachable. I shared that view until one of my lecturers painstakingly explained the concepts behind back-propagation and regularization. This is one of the few books where you can find the steps involved in implementing Optimal Brain Surgeon (Table 4.6). It may discourage beginners.

Rating: 5 stars
Summary: Good detail with rigorous mathematics
Review: This book, excellent for self-study and for use as a textbook, covers a subject that has had enormous impact in science and technology. One can say with confidence that neural networks will increase in importance in the decades ahead, especially in the field of artificial intelligence. The book is a comprehensive overview, and does take some time to read and digest, but it is worth the effort, as there are many applications of neural networks and the author is detailed in his discussion.

In the first part of the book, the author introduces neural networks and the modeling of brain functions. A good overview of the modeling of neural networks and knowledge representation is given, along with a discussion of how they are used in artificial intelligence. Ideas from computational learning theory are introduced, as well as the important concept of the Vapnik-Chervonenkis (VC) dimension. The VC dimension is characterized in this book as the maximum number of training examples that a machine can classify without error under every possible labeling of those examples. The author shows it to be a useful parameter, one that allows one to avoid the difficult problem of finding an exact formula for the growth function of a hypothesis space.
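
To pin the definition down (my own restatement in standard notation, not the book's wording): a hypothesis class $\mathcal{H}$ shatters points $x_1,\dots,x_m$ if every one of the $2^m$ binary labelings of those points is realized by some $h \in \mathcal{H}$, and the VC dimension is the size of the largest shatterable set,

    \[
      \mathrm{VC}(\mathcal{H}) \;=\; \max\bigl\{\, m : \exists\, x_1,\dots,x_m \text{ shattered by } \mathcal{H} \,\bigr\}.
    \]

For instance, linear threshold units on $\mathbb{R}^d$ have VC dimension $d+1$: some set of $d+1$ points can be shattered, but no set of $d+2$ points can.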

In the next part of the book, the author discusses learning machines that have a teacher. The single-layer perceptron is introduced and shown to have an error-correction learning algorithm that converges. There is a fine discussion of optimization techniques and Bayes classifiers in this part. The least-mean-square algorithm is generalized to the back-propagation algorithm for training multi-layer perceptrons, along with a discussion of how to optimize its performance using heuristics. The author gives a detailed discussion of the limitations of back-propagation learning. In addition, radial-basis function networks are introduced: supervised learning is viewed as an ill-posed hypersurface reconstruction problem, which is then solved using regularization methods. Support vector machines are introduced as neural networks that arise from statistical learning theory via the VC dimension. A summary is given of the differences between the various approaches to neural network learning machines. Committee machines, based on a divide-and-conquer strategy, are also treated: the learning task is divided among a number of experts, with the expectation that their collective effort will arrive at the solution more efficiently.
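
To give a feel for what back-propagation actually does, here is a minimal sketch I wrote in Python with NumPy (my own illustration, not code from the book): a one-hidden-layer perceptron trained by batch gradient descent on the mean squared error, applied to XOR, the classic task a single-layer perceptron cannot solve.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_mlp(X, y, hidden=8, lr=2.0, epochs=5000, seed=0):
        # One-hidden-layer perceptron trained by plain back-propagation:
        # batch gradient descent on the mean squared error, sigmoid units.
        rng = np.random.default_rng(seed)
        W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
        b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=(hidden, 1))
        b2 = np.zeros(1)
        for _ in range(epochs):
            h = sigmoid(X @ W1 + b1)              # forward pass: hidden layer
            out = sigmoid(h @ W2 + b2)            # forward pass: output layer
            d_out = (out - y) * out * (1 - out)   # error signal at the output
            d_h = (d_out @ W2.T) * h * (1 - h)    # error propagated backward
            W2 -= lr * h.T @ d_out / len(X)
            b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * X.T @ d_h / len(X)
            b1 -= lr * d_h.mean(axis=0)
        return W1, b1, W2, b2

    # Toy usage: learn XOR.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1, b1, W2, b2 = train_mlp(X, y)
    # Outputs should approach 0, 1, 1, 0 (plain gradient descent can
    # occasionally stall in a poor local minimum; re-seed if it does).
    print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))

The two "d_" lines are the whole point: the output error is converted into a local gradient and pushed backward through the weights, the same chain-rule bookkeeping the book derives in full generality.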

The next part of the book introduces unsupervised learning machines. The ability of a machine to discover useful information, such as patterns or features, in the input data is taken as an acid test of real intelligence. Hebbian learning via principal components analysis is discussed, along with competitive learning via self-organizing maps, and the author uses computer simulations to illustrate the behavior of systems of neurons. Learning vector quantization is brought in as a supervised technique for fine-tuning the quality of the resulting classifiers. Most interestingly, information-theoretic models are discussed, with mutual-information techniques used effectively as unsupervised learning algorithms; some elementary but interesting examples of single neurons under the influence of noise are treated in detail. The topic of Boltzmann machines is also covered, and physicists will find the treatment particularly fascinating, as it takes ideas from statistical mechanics and applies them to combinatorial optimization problems. Other, more general statistical machines, such as Helmholtz machines, and mean-field-theoretic approaches are discussed as well. Reinforcement learning, using dynamic programming techniques, is treated in detail.
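
As an example of what "Hebbian learning via principal components analysis" means in practice, here is a tiny Python sketch of Oja's rule (again my own illustration, not the book's code): a single linear neuron whose weight vector drifts toward the first principal component of zero-mean data.

    import numpy as np

    def oja_first_pc(X, lr=0.01, epochs=50, seed=0):
        # Oja's rule: a Hebbian update with a decay term that keeps the
        # weight vector bounded; for zero-mean data it converges to the
        # first principal component.
        rng = np.random.default_rng(seed)
        w = rng.normal(size=X.shape[1])
        w /= np.linalg.norm(w)
        for _ in range(epochs):
            for x in X:
                y = w @ x                    # the neuron's linear output
                w += lr * y * (x - y * w)    # Hebbian term minus decay
        return w

    # Toy usage: zero-mean data stretched along the diagonal (1, 1).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2)) * [2.0, 0.3]           # elongate x-axis
    R = np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2)
    X = X @ R                                            # rotate 45 degrees
    X -= X.mean(axis=0)
    print(oja_first_pc(X))  # close to +/- (0.707, 0.707)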

The book ends with a treatment of nonlinear dynamical techniques for studying the behavior of neural networks. The discussion makes use of many examples and computer experiments, and there are good exercises at the end of the chapters for further analysis. Dynamical systems employing short-term memory and feedforward structures are discussed, along with a treatment of stability in nonlinear dynamical systems. Feedback mechanisms are used to obtain input-output mappings. The definition of chaos used is very weak, as it relies only on the positivity of a Lyapunov exponent, but this is suitable for the purposes of the book.
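
For readers who have not seen it, the criterion I am referring to is the following (a standard definition, in my notation): for a trajectory $x(t)$ and a small initial perturbation $\delta x(0)$, the largest Lyapunov exponent is

    \[
      \lambda \;=\; \lim_{t\to\infty} \frac{1}{t}\, \ln \frac{\lVert \delta x(t) \rVert}{\lVert \delta x(0) \rVert},
    \]

and the dynamics are declared chaotic when $\lambda > 0$, that is, when nearby trajectories separate exponentially fast.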

The applications of neural networks are vast, and space prevents a comprehensive list here. I have found them an excellent tool for studying load-balancing algorithms in distributed computing and networks, the modeling of mobile communications, options pricing, and computational biology. There are also dozens of companies that specialize solely in neural network algorithms. No doubt, as new learning algorithms are discovered and computers become faster, neural networks will play a major role in creating independent, autonomous, intelligent machines. This book will give the reader a solid understanding of how neural networks fit into the computational learning paradigm.

Rating: 5 stars
Summary: Excellent book!
Review: This is one of the most comprehensive texts on one of the most important topics in current computer science. Though it is a graduate-level text and exorbitantly expensive, this book covers what you need to know. It does assume a fairly sophisticated mathematical background, but those willing to bone up on their math can understand it quite well. If you are looking for a book that explains neural networks in detail, look no further.

Rating: 5 stars
Summary: Neural networks, a bit theoretical
Review: While I did not read this reference in its entirety, I found it a useful complement to other references on artificial neural networks in that it provides formal analyses of particular networks. The first chapter is an excellent, albeit somewhat technical, introduction to artificial neural networks, and a reader with a background in physics, mathematics, or engineering may find this book, in conjunction with a workbook, an excellent starting point for learning about the subject.

