Rating: Summary: cool book Review: This brief book is very interesting... full of interesting observations about human-computer interaction and information seeking via the web.
Rating: Summary: a big disappointment Review: Huberman's book is a remarkably novel way of looking at the Web. With simple and well-developed examples, Huberman provides a clear description of the hidden structures existing in the Web. It's a book that is useful for both curious readers and researchers. Readers interested in understanding how society uses information, including ways of searching, organizing, and interacting with large information systems, must read this book.
Rating: Summary: A fresh perspective to understand the web Review: In this book, the author presents research showing that surprisingly strong global regularities emerge on the web from the local browsing behavior of individual agents. He explains, in simple terms and with well-chosen familiar examples, the key ideas needed to understand how these regularities come about. The ideas and regularities described in every chapter are backed by refereed papers that the author and his associates have published over the years (in Science, Nature, ...), which I would recommend the technically inclined reader look into. As he takes the reader through the chapters, the author introduces in simple terms the methodology of study and analysis borrowed from the physical sciences (used to study the dynamics of large numbers of interacting particles), which in my case was very helpful, as I am trained in computer science, where we do not get exposed to those techniques. The regularities are explained by way of interesting models (e.g., social dilemmas, six degrees of separation, Brownian motion, etc.) that make for refreshing reading. The author goes further than just presenting and explaining the results: he gives very practical applications where knowing these regularities can help in the design of better algorithms, web sites, and systems. Among the results presented are: a law that can predict how far users will go in clicking through the pages of a given site; the existence of 'internet storms', where the net becomes very slow even though there is no obvious event that caused it (much as traffic on a highway can sometimes slow to a halt even though there does not seem to be any accident); a law that predicts the distribution of the sizes (in pages) of web sites; and several other regularities. One of the clever applications described is an algorithm that figures out when to wait for a web page or request it again, so that on average the user downloads it faster.
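The restart idea this reviewer mentions is easy to picture in code. Below is a minimal Python sketch of a fixed-timeout restart strategy, not taken from the book: the function name fetch_with_restart, the timeout value, and the attempt limit are all hypothetical choices used only to illustrate the general idea that abandoning a stalled request and reissuing it can reduce the average download time when latencies are heavy-tailed.

```python
import time
import urllib.request


def fetch_with_restart(url, timeout=3.0, max_attempts=4):
    """Illustrative timeout-and-restart fetch (a sketch, not the book's algorithm).

    If a request stalls past `timeout` seconds, abandon it and issue a fresh
    request, up to `max_attempts` tries in total.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                data = response.read()
            # Return the payload, how long the successful try took, and the try count.
            return data, time.monotonic() - start, attempt
        except Exception as err:  # timeout or transient network error: restart
            last_error = err
    raise last_error


# Example usage (hypothetical URL):
# data, elapsed, attempts = fetch_with_restart("https://example.com/")
```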
Rating: Summary: too many clicks to nowhere Review: The title promises much. One had hoped that with so few pages a concise outline would be the product. Alas, no. One gets 95 pages of vagueness, allusions to supposedly meaningful research that is never explained, and trite examinations of substantial observations borrowed from other authors. His reference to the power law does not result in anything applicable to understanding the Web. His reference to the 'tragedy of the commons', a la Peter Senge, suggests he understands neither the metaphor nor its relationship to the Web or the information that exists there. Unfortunately, this takes up one of the five pages containing anything of potential substance. The discussion of nodes begins vaguely and ends with no law: another page down. The power law suggests an upper level of tolerance, but its lack of conclusion loses another page. The social dilemma leaves the reader with the abiding question: so? With the final page ostensibly dealing with a critical number of clicks, the reader is left to infer that reading this book is too many clicks (pages turned). At the end we are left with no code, no guidelines, no greater understanding of the growth of the Web, and only the appreciation that, while the reader is no wiser, at least the book was short. There is great pretension here, but no delivery.