Rating:
Summary: Wonderful Treatise on Cache Coherence
Review: This book on cache coherence and shared-memory multiprocessor systems should be on every OS developer's desk; in fact, it should be well worn and marked up with lots of highlighting, underlining, and phrases of "Oh, so THAT's how it works!" written in the margins.

The authors do a wonderful job describing the principles of cache coherence and the difference between message-passing (or distributed) systems and shared-memory systems. The rest of the book, of course, is spent on the latter, and the authors delve into such topics as memory latency (and how to reduce or hide it), NUMA and COMA architectures (and different interconnect networks), memory prefetch, memory bandwidth, various cache consistency models, and many examples of applications and the cache invalidation patterns they exhibit. And that's all just in the first three chapters of the book!

The book describes the architectures of several of the scalable shared-memory systems that existed in the mid-'90s, and then it goes on to describe a system called DASH that was implemented by the authors and folks at Stanford. At first I thought I was going to be put off by the focus on DASH, but it actually had the opposite effect. The chapters on DASH did a great job of going through all the details and clearly showing me how all this works "in practice."

I'm a software guy, and this book was recommended to me by a hardware guy, and I think it's a must for anyone doing software development for large, complex multiprocessor systems. The writing is very clear and straightforward, though it's not something I can read while the television is on (in other words, I've got to concentrate while reading this book). Not only would this book be useful as a college computer architecture textbook, but it's proving to be highly useful in the workplace!