Rating:  Summary: Web Testing Handbook Review: This book is a must-have for the IT professional. Whether you develop, audit, test, or administer systems, you will find the information in this book informative and useful. The book is logically laid out, with useful case studies, relevant checklists, and proven testing techniques. I have attended seminars by the author and have found the book to be an indispensable companion to them. I've gone through the book twice and continue to find useful techniques. Definitely recommended!
Rating:  Summary: Testing plan alone worth the cost Review: This book is about web testing in general, not just performance testing, and is a must-have for the professional testing engineer. Chapters 7 and 8, on performance and scalability, give a very good introduction to the subject and include a great sample performance testing plan. Michael Czeiszperger Web Performance, Inc. Stress Testing Software http://www.webperformanceinc.com
Rating:  Summary: Not worth the [$$] price tag Review: When looking into books on any tech-related topic, I look for two qualities to assess a book's value. The first is the depth of its subject matter: I look for books that teach me new technologies, techniques, or processes. The second is the book's lasting value as a reference for future work. When spending money, I'd like to be sure that the lasting value is at least potentially there. This book has neither quality, and here's why:

- Depth of Subject Matter - It's difficult to determine who this book is written to educate. The foreword identifies the audience as existing software testers looking for education in the finer points of web software testing. That's a legitimate goal, but the book falls far short of it, or of any other unstated goal. The delivery of material is quick and dirty; no topic extends beyond a single-digit number of pages. That makes sense in the early chapters, where things like hardware compatibility are discussed, but other areas deserve far better coverage. Browser compatibility, performance testing, and scalability testing, for example, are barely explained. That's a disservice to the reader, since these are paramount topics for the intended audience. Another downfall of this approach is its failure to discuss the organizational differences between an IT team deploying software frequently and one deploying incremental releases on a yearly timeframe. To be fair, the authors touch on this topic, but their treatment is nothing close to comprehensive.

- Reference Value - The reference value of this book is almost zero. I run a test team for a web-based business of considerable size, and I found some genuinely misleading advice in the work. A lot of the explanations of what's smart and what's avoidable fall completely off the mark. Even worse, and this alone is reason enough to start looking for a different book right away, is the poor quality of the references throughout. While the authors spend considerable time explaining the difference between a normal software development cycle and one that operates on 'web time', they cite sources from two and three years ago that are irrelevant given the widespread, fundamental changes to the online software development domain. They establish 'web time' as an accelerated, hectic calendar where nothing is the same after two months of churn, but then cite 1999 market research studies to back up their points. Though surely not intentional, it's very neglectful. I turned to the front of the book at one point to re-verify the copyright date. ...

So, for me and for my needs, this book is essentially worthless, and I'm sad to have spent [$$] to learn that. Every topic is covered only as a summary, even those that deserve, and in some cases absolutely require, a much deeper explanation. As for the intended audience, it remains a head-scratcher because of how the material is delivered. The book isn't deep in any one area, so it's hard to tell whether it's meant for the QA manager (can't work: not enough attention to process), the new tester (can't work: not enough detail on the actual testing), the converting tester (might work, but the high-level descriptions coupled with the indiscriminate delivery of the subjects would confuse anyone without prior insight), or the experienced web tester (can't work: too much of the material is elementary to anyone already testing in the web space). I don't recommend it, and I wouldn't recommend it in a future edition even if they update the references.