Rating: Summary: Review by a user of the book and colleague of an author Review: First, I must admit a bias: I frequently work with one of the authors (Gelman), and I think highly of his work and statistical judgment. This book's biggest strength is its introduction of most of the important ideas in Bayesian statistics through well-chosen examples. These examples are not contrived: many of them came up in research by the authors over the past several years. Most examples follow a logical progression that was probably used in the original research: a simple model is fit to data; then areas of model mis-fit are sought, and a revised model is used to address them. This brings up another strength of the book: the discussion and treatment of measures of model fit (and sensitivity of inferences) are lucid and enlightening. Some readers may wish the computational methods were spelled out more fully: this book will help you choose an appropriate statistical model and show you ways to look for serious violations of it, but it will take a bit of work to convert the ideas into computational algorithms. This is not to say that the computational methods aren't discussed, merely that many of the details are left to the reader. The reader expecting pseudo-code programs will be disappointed. All in all, I recommend this book for anyone who applies statistical models to data, whether those models are Bayesian or not. I especially recommend it for researchers who are curious about Bayesian methods but do not see the point of them: Chapter 5, and particularly section 5.5 (an example chosen from educational testing), beautifully addresses this issue.
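For readers wondering what the section 5.5 educational-testing example is driving at, here is a rough sketch of the partial-pooling idea behind it: each study reports an estimated effect with a standard error, and, conditional on a between-study standard deviation, each estimate is shrunk toward a precision-weighted overall mean. The effects, standard errors, and tau below are made up for illustration (they are not the book's data), and this back-of-the-envelope version plugs in a point estimate of the overall mean rather than integrating over it.

```python
# Illustrative partial pooling / shrinkage sketch; all numbers are hypothetical.
import numpy as np

y = np.array([28.0, 8.0, -3.0, 7.0])        # hypothetical estimated effects
sigma = np.array([15.0, 10.0, 16.0, 11.0])  # hypothetical standard errors
tau = 10.0                                  # assumed between-study sd

# Precision-weighted estimate of the overall mean, given tau.
w = 1.0 / (sigma**2 + tau**2)
mu_hat = np.sum(w * y) / np.sum(w)

# Each study's estimate is pulled toward mu_hat; noisier studies are pulled more.
shrink = sigma**2 / (sigma**2 + tau**2)
theta_post = shrink * mu_hat + (1.0 - shrink) * y

print("pooled mean:", round(mu_hat, 2))
print("shrunken effects:", np.round(theta_post, 2))
```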
Rating: Summary: A good introductory book, but... Review: I read the other reviews and agree with them to some extent. This is a good introduction to applied Bayesian analysis, with lots of good examples, illustrations and exercises. If you are the kind of person who learns by way of examples, then this might be the textbook for you. If you are looking for the bigger picture, then you will be lost here. There is very little in the way of theory. Why is this the right method? What is gained theoretically over a frequentist method? What are the theoretical properties of the proposed approach? To a large extent these kinds of questions remain a mystery. In terms of flexibility, an applied Bayesian approach has some decided advantages. In terms of theory, however, it is almost as if the authors want you to believe that once you adopt the Bayesian approach, averaging by way of a prior will always be the right thing to do. You could argue that questions like these belong in a more advanced textbook; I tend to ask more of a book.
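To make concrete what "averaging by way of a prior" amounts to in the simplest case, here is a minimal sketch of conjugate normal updating: with normal data of known variance and a normal prior, the posterior mean is a precision-weighted average of the prior mean and the sample mean. The numbers are illustrative and not drawn from the book.

```python
# Minimal sketch of prior/data averaging in the conjugate normal case.
# All numbers are illustrative, not taken from the book.
import numpy as np

data = np.array([4.2, 5.1, 3.8, 4.9, 5.4])  # hypothetical observations
sigma = 1.0                                  # assumed known data sd
mu0, tau0 = 0.0, 2.0                         # hypothetical prior mean and sd

n = len(data)
ybar = data.mean()

# Posterior precision adds the prior precision and the data precision;
# the posterior mean is the corresponding precision-weighted average.
post_prec = 1.0 / tau0**2 + n / sigma**2
post_mean = (mu0 / tau0**2 + n * ybar / sigma**2) / post_prec

print("posterior mean:", round(post_mean, 3))
print("posterior sd:", round(1.0 / np.sqrt(post_prec), 3))
```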
Rating: Summary: Likely the best survey book on applied Bayesian theory Review:
Overview: This book was the textbook used at the University of Wisconsin-Madison for the graduate course in Bayesian Decision and Control I during the fall of 2001 and 2002. It strikes a good balance between theory and practical example, making it ideal for a first course in Bayesian theory at an intermediate to advanced graduate level. Its emphasis is on Bayesian modeling and, to some degree, computation.
Prerequisites: While no Bayesian theory is assumed, it is assumed that the reader has a background in mathematical statistics, probability and continuous multivariate distributions at a beginning or intermediate graduate level. The mathematics used in the book is basic probability and statistics, elementary calculus and linear algebra.
Intended audience: This book is primarily for graduate students, statisticians and applied researchers who wish to learn Bayesian methods as opposed to the more classical frequentist methods.
Material covered: It covers the fundamentals starting from first principles, single-parameter models, multi-parameter models, large sample inference, hierarchical models, model checking and sensitivity analysis, study design, regression models, generalized linear models, mixture models and models for missing data. In addition, it covers posterior simulation and integration using rejection sampling and importance sampling. There is one chapter on Markov chain simulation (MCMC) covering the generalized Metropolis algorithm and the Gibbs sampler. Over 38 models are covered, along with 33 detailed examples from a wide range of fields (especially biostatistics). Each of the 18 chapters has a bibliographic note at the end. There are two appendices: (A) a very helpful list of standard probability distributions and (B) an outline of proofs of asymptotic theorems. Sixteen of the 18 chapters end with a set of exercises that range from easy to quite difficult. Most of the students in my fall 2001 class used the statistical language R to do the exercises. The book's emphasis is on applied Bayesian analysis. There are no heavy, advanced proofs in the book. While the proofs of the basic algorithms are covered, there are no algorithms written in pseudo-code.
Additional books of related interest:
1) Statistical Decision Theory and Bayesian Analysis, James Berger, second edition. Emphasis on decision theory and more difficult to follow than Gelman's book. Covers empirical and hierarchical Bayes analysis. More philosophically challenging than Gelman's book.
2) Monte Carlo Statistical Methods, Robert and Casella. Very mathematically oriented book. Does a good job of covering MCMC.
3) Monte Carlo Methods in Bayesian Computation, Ming-Hui Chen, Qi-Man Shao, Joseph George Ibrahim. An enormous number of algorithms related to MCMC not covered elsewhere. If you need MCMC and need an algorithm to implement it, this is the book to read.
4) Monte Carlo Strategies in Scientific Computing, Jun S. Liu. Covers a wide range of scientific disciplines and how Monte Carlo methods can be used to solve real-world problems. Includes hot topics such as bioinformatics. Very concise. Well written, but requires effort to understand as so many different topics are covered. This book is my most often borrowed book on Monte Carlo methods. Jun S. Liu is a big gun at Harvard.
5) Probabilistic Networks and Expert Systems, Cowell, Dawid, Lauritzen, Spiegelhalter. Covers the theory and methodology of building Bayesian networks (probabilistic networks).
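For readers curious what the Markov chain simulation material mentioned above looks like in code, here is a minimal random-walk Metropolis sketch. The target is a standard normal log-density chosen purely to keep the example self-contained; it is not an example from the book, and in practice you would substitute the log posterior of your own model.

```python
# Minimal random-walk Metropolis sampler; the target density is a
# standard normal, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

def log_post(theta):
    return -0.5 * theta**2  # stand-in log posterior (standard normal)

theta = 0.0
draws = []
for _ in range(5000):
    proposal = theta + rng.normal(scale=1.0)  # symmetric random-walk proposal
    # Accept with probability min(1, ratio of posterior densities).
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal
    draws.append(theta)

print("posterior mean, sd:", np.mean(draws), np.std(draws))  # near 0 and 1
```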
Rating: Summary: good treatment of modern Bayesian methods Review: This is a well written text that is fast becoming a classic reference. It contains a wealth of good applications. It is one of the newer books that presents the growing use of Bayesian methods in practice since the advent of the Markov chain Monte Carlo approach, and it includes a whole chapter on the Markov chain approach to computation. Other strengths of the book include the chapter on missing data and the chapter that provides expert advice. Another text in the CRC series, Markov Chain Monte Carlo in Practice by Gilks, Richardson and Spiegelhalter, provides more detail on these methods along with many applications, including some Bayesian ones.