Rating: Summary: The missing ingredient for true artificial intelligence Review: A fascinating book with many implications for the fields of artificial intelligence and human-computer interaction. Picard provides a rich background on modern research in emotion and puts forth compelling arguments for the need to incorporate affective abilities in computers as, perhaps, the only way to allow them to respond intelligently to their environment and make rational decisions. An entertaining and mind-opening read.
Rating: Summary: Why I wrote a book about giving computers emotional skills Review: I never thought I would write a book on emotion. Being a woman who is an engineer, a computer scientist, and a professor at MIT has provided extra incentive to avoid anything that might stereotype me as an "emotional female." Yet this book is about giving emotional abilities to certain kinds of computers. This may sound outlandish, and you may wonder if I have not lost a wariness of emotions and their association with poor judgment and irrational behavior. I have not. Computers certainly do not need poor judgment and irrational behavior. One of the things I attempt to show in this book, however, is that in completely avoiding emotion, computer designers may actually lead computers toward these undesirable goals. The role of emotions in "being emotional" is a small part of their story. The rest is largely untold, and I think has profound consequences. I wrote this book to compile the findings (from neuroscience, cognitive science, psychology, and more) that led me to believe that emotion was a key ingredient of human intelligence and of intelligent interaction. I also present my own ideas for what it means for a computer to have skills of emotional intelligence. But I do not think we should run out and give machines emotions (and I define in the book what I mean by "giving them emotions," including aspects of this that will differ significantly from human emotions). Some uses of this technology are potentially very disturbing, and I include a chapter discussing ethical concerns. In Part II of the book, where I talk about how to build affective computers, I tried to cover both my group's work and the work of others as much as possible, to illustrate what is already doable vs. what is still science fiction. Well, let me not take too much of your time here; I hope you will learn from the book, and feel free to share your comments. Rosalind Picard
Rating: Summary: Religious Artificial Intelligence?! Say what?!! Review: In pop-culture there is the usual dichotomy between someone who "thinks with their head" and one who "thinks with their heart". In fact, the more enlightened thinkers realize that this dichotomy is at least half false: people who are emotionally-impaired are not more rational than the rest of us, but are rather quite crippled and incapable of facing everyday life. (Though if you are not already convinced of this, this book will do little to persuade you.)
At the time when I first read this book, nearly a year ago now upon the recommendation of a friend, I was already convinced of the usefulness of emotions in AI, and was hoping to find some real concrete and useful results here regarding AI-emotion, which I could then apply to the design and construction of an AI which would presumably be of use to someone in the real world. Much to my dismay, there are NO such applications listed; Picard suggests that AIs should be given emotions, but doesn't bother to give any real applications in which these emotions would be useful, or what kind of emotions they should be given, or even what an "emotion" is for an AI!
The book discusses almost exclusively the problem of AIs not having emotions themselves, but understanding the emotions of humans. Sounds great; how about some applications? Picard then proceeds to suggest the most absurd applications imaginable. Here are a few of them:
-Emotive Markup Language: Modify the hardware of a keyboard such that the computer can tell how much pressure was applied on each keystroke. Then have the machine interpret these pressure levels as "happy typing," "angry typing," etc., and mark each portion of text appropriately, with, say, big red bold letters for "angrily typed" words, and so on.
-The understanding user interface: The user interface receives occasional feedback from the user (blood pressure levels, a questionnaire, whatever), from which it is to judge the user's mood, such as anger or frustration, and then try to help the user out somehow if the user is becoming frustrated. Little does Picard realize that most users find a clairvoyant-wannabe computer more annoying than helpful.
-Intelligent Answering Machines: Our answering machine receives a phone call and, presumably by talking to the individual on the other end of the line, gathers some information as to the call's content. Meanwhile the answering machine is monitoring the emotional state of its master, and if it infers that its master is in a mood that can be interrupted, and that the call is of interest to its master, then the answering machine will tell its master that there is a call waiting; otherwise it will just take a message.
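For what it's worth, the first of these schemes amounts to little more than thresholding a pressure signal and tagging text with the result. Here is a minimal sketch of that idea; the thresholds, labels, and function names are all invented for illustration, not taken from the book:

```python
# Hypothetical sketch of the "Emotive Markup" idea: map each keystroke's
# normalized pressure reading (0.0-1.0) to a crude mood label, then wrap
# the typed characters in tags naming the inferred mood.
# All thresholds and labels below are made up for illustration.

def mood_label(pressure: float) -> str:
    """Classify a normalized key pressure into a mood tag."""
    if pressure > 0.8:
        return "angry"
    if pressure < 0.3:
        return "calm"
    return "neutral"

def emotive_markup(text: str, pressures: list[float]) -> str:
    """Tag each character with the mood inferred from its keystroke."""
    return "".join(
        f"<{mood_label(p)}>{ch}</{mood_label(p)}>"
        for ch, p in zip(text, pressures)
    )

print(emotive_markup("hi!", [0.2, 0.5, 0.9]))
# <calm>h</calm><neutral>i</neutral><angry>!</angry>
```

The point of the sketch is how shallow the inference is: nothing in the mapping distinguishes an angry typist from one who simply types hard.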
If those are the most important problems facing an AI-researcher today, then the problem of AI must already be quite solved! In fact, in the past year I have been further enlightened, and have realized that AIs in fact don't need emotions: just because humans need them is no argument at all that AIs need them! It is foolhardy to simply give AIs emotions without understanding WHY emotions evolved: we would just be copying superficial similarity; feathers aren't the key to flight! It turns out that emotions are evolution's own peculiar way of implementing probabilistic reasoning and goal-systems: every emotion can be translated into purely decision-theoretic terminology. For example, "curiosity" is a heuristic which can be replaced with a system which sets up experiments so as to maximize its expected information gain on each experiment.
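The "curiosity" claim above can indeed be phrased decision-theoretically: an agent picks the experiment whose predicted outcome distribution carries the most expected information (highest entropy). A minimal sketch of that translation, with the experiments and their outcome distributions invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of an outcome distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Each hypothetical experiment is summarized by the agent's predicted
# probability distribution over its possible outcomes.
experiments = {
    "coin_flip":     [0.5, 0.5],         # maximally uncertain: 1 bit
    "loaded_die":    [0.9, 0.02, 0.08],  # mostly predictable
    "known_outcome": [1.0],              # nothing left to learn
}

def most_curious(exps):
    """'Curiosity' as a heuristic: choose the experiment with the
    greatest expected information gain."""
    return max(exps, key=lambda name: entropy(exps[name]))

print(most_curious(experiments))  # coin_flip
```

A purely decision-theoretic agent written this way "prefers" the coin flip without any felt sense of curiosity, which is exactly the reviewer's point.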
Of course, Picard could not simply say as much: she hints several times throughout her work that she believes in god, and that she intends her AIs to appreciate god as well. For example, we have the following quote:
"A system that truly operates in a complex and unpredictable environment will need more than laws; it will essentially need values and principles, a moral compass for guidance, and perhaps even religion." (page 134)
Funny, I seem to be doing quite fine without religion!
Overall and like most works of pure-philosophy, this book is intellectually quite sparse: Picard says more or less everything she has to say in the first 50 pages, but then somehow manages to drag her book out for another 200 pages by mentioning various things only tangentially related to the topic under discussion and rephrasing what she has already said. This short review alone contains a good deal more content than do a dozen pages from this book.
Rating: Summary: Wearable computers can respond intelligently to your mood Review: Most of this book is a primer for non-clinicians on what is meant by 'human emotions', and how a computer in physical contact with someone could identify that person's mood and respond appropriately to it. Picard makes her case that 'emotional intelligence' would be a useful attribute for software. A human who loses the ability to feel emotions becomes, not admirably logical like Mr. Spock, but unable to make quick, simple, arbitrary decisions and prone to repeat mistakes. Just like most software today. Picard relates the use of affective computing primarily to the 'wearable computers' that researchers at MIT have been playing with for over 10 years to do mostly trivial functions like take photographs and generate muzak. There wasn't much here for those of us who have to interact through keyboards/mice and monitors, and surprisingly no attempt to connect affective computing with related techniques such as fuzzy logic. There is an excellent source reference list at the back.
Rating: Summary: Important ideas Review: Rosalind Picard's book shouldn't have broken new ground, but it did. The ignoring of the role of emotion in computing is both appalling and typical. Picard begins to rectify this "oversight" ("Whoops, I forgot humans have feelings!") in a fascinating and useful book.
Rating: Summary: Interesting book - Very interesting area. Review: This is an interesting book, and I strongly agree with Picard's assertion that computers ought to be able to "recognize" and respond to human emotions. She does an excellent job of making and supporting this point. The other part of her thesis, that computers themselves should have "emotions" is much less clear. She never seemed to adequately make the case that a computer with its own emotions would be of any significant value for anything, and frankly I can't think of any useful applications for such an ability. Some sort of emotional component may be needed to fully support and achieve AI (and she makes this point) but in terms of sort of the standard user interface types of applications it's hard to imagine how such a capability could be useful. Anyway, good book on a very interesting topic.