Calculated Risks: How to Know When Numbers Deceive You

List Price: $20.95
Your Price: $20.95


Rating: 5 stars
Summary: How to interpret test results better than your Doc!
Review: This is a very clearly written book. It demonstrates many numerical errors that the press, the public, and experts make in interpreting the accuracy of medical screening tests (mammography, HIV tests, etc.) and in figuring out the probability that an accused person is guilty.

At the foundation of these confusions lies the interpretation of Bayes' rule. Take one example, from page 45, regarding breast cancer. Breast cancer affects 0.8% of women over 40. Mammography correctly identifies 90% of women who have breast cancer (a positive test when cancer is present) and 93% of women who don't (a negative test when it is absent). If you ask a doctor how accurate this test is when you get a positive result, the majority will tell you it is 90% accurate or more. That is wrong. The author recommends using natural frequencies (instead of conditional probabilities) to interpret Bayes' rule accurately. Thus, 8 out of every 1,000 women have breast cancer. Of these 8 women, 7 will have a positive mammogram (true positives). Of the remaining 992 women who don't have breast cancer, 70 will have a positive mammogram (false positives). So, the chance of cancer given a positive test is 7/(7+70), roughly 9%, or about 1 in 10. Wow, that is pretty different from the 90% that most doctors believe!
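The natural-frequency arithmetic described above is easy to check in a few lines. This is my own sketch, not code from the book; the function name is illustrative, and it uses the review's figures of 0.8% prevalence, 90% sensitivity, and 93% specificity:

```python
# Natural-frequency version of Bayes' rule for the mammography example:
# count people in a population of 1,000 rather than multiplying probabilities.

def positive_predictive_value(prevalence, sensitivity, specificity, population=1000):
    """Probability of disease given a positive test, via natural frequencies."""
    sick = prevalence * population                              # 8 women per 1,000
    true_positives = sensitivity * sick                         # ~7 of them test positive
    false_positives = (1 - specificity) * (population - sick)   # ~70 healthy women test positive
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(0.008, 0.90, 0.93)
print(f"P(cancer | positive mammogram) = {ppv:.1%}")   # about 9%, not 90%
```

With exact (unrounded) counts the result is about 9.4%; the review's rounded 7/(7+70) gives the same order of magnitude, which is the whole point of the natural-frequency format.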

What to do? In the case of mammography, if a second test also comes back positive, the probability jumps to 57% (not that much better than flipping a coin). It is only when a third test also turns out positive that you can be reasonably certain (about 93%) that you have breast cancer. So, what doctors should say is that a single positive test means far less than it seems, and that it is only after the third consecutive positive test that you can be over 90% certain you have breast cancer. Yet most doctors convey this level of certainty after the very first test!
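The climb from roughly 9% to 57% to the low 90s can be reproduced by applying Bayes' rule repeatedly. A minimal sketch, assuming test errors are independent across repeated tests (an assumption the review makes implicitly; with unrounded figures the third value lands near 94%, while the book's rounded natural frequencies give 93%):

```python
# Repeatedly update P(cancer) after each positive mammogram, assuming
# independent errors between tests. Figures are the review's:
# prevalence 0.8%, sensitivity 90%, specificity 93%.
prevalence, sensitivity, specificity = 0.008, 0.90, 0.93

posteriors = []
p = prevalence
for _ in range(3):
    # Bayes' rule: P(cancer | positive) given the current prior p
    p = sensitivity * p / (sensitivity * p + (1 - specificity) * (1 - p))
    posteriors.append(p)

for i, post in enumerate(posteriors, start=1):
    print(f"after positive test {i}: P(cancer) = {post:.0%}")
```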

What applies to breast cancer screening also applies to prostate cancer, HIV tests, and other medical tests. In each case, the medical profession acts as if the first positive test tells you with certainty whether you have the disease. As a rule of thumb, you should get at least a second test, and preferably a third, to increase the accuracy of the result.

The author comes up with many other counterintuitive observations. They are all tied to the fact that events are far more uncertain than the certainty conveyed to the public suggests. For instance, DNA testing does not prove as much as people assume: ten people can share the same DNA pattern.

Another counterintuitive concept is associated with risk reduction. Let's say a cancer kills 5 in 1,000 people (0.5%). The press will invariably run promising headlines that a given treatment reduces mortality by 20%. But what does this really mean? It means mortality drops by 1 death per 1,000 (from 5 down to 4). The author states that the relative risk has decreased by 20%, but the absolute risk has decreased by only 1 in 1,000. He feels strongly that both risks should be conveyed to the public.
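The relative-versus-absolute distinction amounts to two subtractions. The sketch below uses the review's numbers (5 deaths per 1,000 and a 20% relative reduction) and also computes the number needed to treat, a standard epidemiological measure added here for context, not taken from the review:

```python
# 5 deaths per 1,000 people, and a treatment with a "20% mortality
# reduction" headline. Variable names are illustrative only.

deaths_without = 5 / 1000          # baseline mortality: 0.5%
relative_reduction = 0.20          # the headline number
deaths_with = deaths_without * (1 - relative_reduction)    # 4 per 1,000

absolute_reduction = deaths_without - deaths_with          # 1 per 1,000
number_needed_to_treat = 1 / absolute_reduction            # people treated per death avoided

print(f"relative risk reduction: {relative_reduction:.0%}")   # 20%
print(f"absolute risk reduction: {absolute_reduction:.1%}")   # 0.1%
print(f"number needed to treat:  {number_needed_to_treat:.0f}")  # 1000
```

The same 20% headline can describe an absolute change of 1 in 1,000 or 1 in 10, which is why the reviewer (and the author) insist both figures be reported.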

The author shows how health agencies and researchers express the benefits of treatments by citing reductions in relative risk. This leads the public to grossly overstate the benefits of such treatments. The author further indicates how various health authorities use either relative risk or absolute risk to either maximize or minimize the public's perception of a health risk. But they rarely convey both, which is the only honest way to present the data.

If you are interested in this subject, I strongly recommend "The Psychology of Judgment and Decision Making" by Scott Plous. It is a fascinating book analyzing how we are less Cartesian than we think: a slew of human biases flaw our judgment, and many of them involve other applications of Bayes' rule.

Rating: 5 stars
Summary: A Work of Great importance
Review: This is an important work. It shows how to reason effectively about probabilities and risks, and how to communicate them in a way people can understand. Many authors document the extent of statistical innumeracy among doctors and the general public; this book shows that this innumeracy is largely the result of ineffective forms of communicating probabilities and risks. The book has important implications for the teaching of statistics and should be read by all who want to improve their teaching of the subject. It is also valuable for anybody who wants to improve their reasoning.

Rating: 3 stars
Summary: Miscalculations or Misinterpretations?
Review: This is perhaps the best book at simply explaining the statistics of risk and uncertainty I have run across. I have even used what the author calls the illusion of certainty in analyzing the highest and best use of real estate. This book shows how medical experts and criminologists can be misled, not so much by innumeracy, as by what might better be called an illusion of expertise. Experts in any field may find this book useful in view of the U.S. Supreme Court's Daubert Rule that expert courtroom testimony must follow the scientific method.
A couple of caveats are in order, however, and they are, shall we say, doozies. Gigerenzer states that there is ample evidence that smoking causes lung cancer. But he fails to consider why people from Asian and Pacific-Island cultures have some of the highest smoking rates in the world, but some of the lowest cancer rates. And why do longitudinal studies show that people from these same cultures have much higher rates of cancer once they migrate to modern countries? Is it diet, smoking, a combination of the two, or something else that "causes" cancer? Likewise, Gigerenzer states that there is strong evidence that secondhand smoke is harmful to health. But he fails to mention the cardinal rule of toxicology: the dose, or concentration of a substance, makes the poison, not the substance itself. It is only in modern, energy-efficient, air-tight buildings that smoke can be sufficiently concentrated so as to become an irritant, let alone a perceived health hazard. Thus, it may not be secondhand smoke, but the environment of tight buildings that is the source of the problem.
Thus, Gigerenzer fails to point out that all statistics and numbers must be actively interpreted, and that their meaning is relative to the interpreter. This involves a social filtering process not discussed in the book. Also, governments may legitimize some health and crime statistics that may be bogus. As an aficionado of Gigerenzer's books, I hope he will write a sequel on the interpretation, misinterpretation, and social and political construction of statistics.



© 2004, ReviewFocus or its affiliates