Tools for Statistical Inference: Methods for the Exploration of Posterior Distributions and Likelihood Functions (Springer Series in Statistics)


List Price: $89.95
Your Price: $76.58
Reviews


Rating: 2 stars
Summary: A waste of time and money
Review: Don't be fooled by the book's appealing table of contents. Many derivations are unclear because the author skips steps without explanation. Very little attention is devoted to each topic, making the coverage wide but neither deep nor clear.

As a substitute, I recommend Bayesian Data Analysis by Gelman et al., which treats the same topics but is clearer, better explained, and more modern. I got this book because it was the textbook for a class, but ended up using other books (like Gelman's).

Rating: 5 stars
Summary: Good book on EM algorithm and Data Augmentation
Review: The book covers the EM algorithm and data augmentation very well, and gives some information on MCMC. A very good reference book.

Rating: 5 stars
Summary: nice tools for likelihood and Bayesian inference
Review: This is a very well-written text and a particularly good reference on algorithms for the professional statistician. A nice feature is that it is concise yet thorough. Many important problems in statistics are covered and presented using both deterministic and Monte Carlo techniques. Topics covered include the item response model, missing data, and Bayesian methods. Most algorithms used to find maxima of posterior distributions can also be employed for maximizing likelihood, so those who prefer classical inference can get as much out of this book as the Bayesians.
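
To make that last point concrete, here is a small sketch of my own (not from the book): under a flat prior the log-posterior and the log-likelihood differ only by a constant, so the same optimizer returns both the MAP estimate and the MLE. The data and the normal-mean model below are made up purely for illustration.

```python
# Hypothetical illustration: maximizing a log-posterior under a flat prior
# recovers the maximum likelihood estimate, so one optimizer serves both.
import numpy as np
from scipy.optimize import minimize_scalar

data = np.array([1.2, 0.7, 2.3, 1.9, 1.4])  # made-up observations, model N(mu, 1)

def neg_log_likelihood(mu):
    return 0.5 * np.sum((data - mu) ** 2)  # up to an additive constant

def neg_log_posterior(mu):
    log_prior = 0.0  # flat prior contributes only a constant
    return neg_log_likelihood(mu) - log_prior

mle = minimize_scalar(neg_log_likelihood).x
map_est = minimize_scalar(neg_log_posterior).x
print(mle, map_est, data.mean())  # all three agree (the sample mean)
```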

The orientation is toward the Bayesian approach, however, with good coverage of prior and posterior distributions, conjugate priors, and Bayesian hierarchical models. The last chapter covers Markov chain Monte Carlo methods, which are mostly used for Bayesian inference.

This is a great reference source but can also be used in a graduate-level course on mathematical statistics, probably as a supplemental text. There are many useful exercises in this edition. The book is fairly advanced and presupposes an introduction to mathematical statistics at the level of the text by Bickel and Doksum. It also assumes that the reader has had some introduction to Bayesian methods, but only at the level of, say, Box and Tiao's text. It does not assume any knowledge of stochastic processes, including Markov chains.

Convergence properties of the Markov chain Monte Carlo (MCMC) algorithms are crucial to their success. Elements of discrete Markov chains are introduced in chapter 6 to make the algorithms understandable, but proofs of convergence are avoided because they would involve a more detailed account of Markov chain theory.
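
For readers unfamiliar with what these algorithms look like in practice, here is a minimal sketch (my own, not taken from the book) of a random-walk Metropolis sampler, the simplest kind of MCMC method whose convergence behavior is at issue; the target density and step size are made up for illustration.

```python
# Hypothetical sketch of a random-walk Metropolis sampler. Under suitable
# conditions the Markov chain's distribution converges to the target density.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x ** 2  # standard normal target, up to a constant

def metropolis(n_samples, step=1.0, x0=0.0):
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, target(proposal) / target(x))
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples[i] = x
    return samples

draws = metropolis(5000)
print(draws.mean(), draws.std())  # should be near 0 and 1
```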

Tanner provides a good list of the references that were available in 1996. Research in MCMC methods continues to be intense, so many good references have appeared since the book was published. Robert and Casella (1999) provide a more detailed and more current treatment, but even that book is a couple of years out of date.

The EM and data augmentation algorithms are used for problems classified as missing-data problems. The data may be missing, as in a survey where respondents do not answer particular questions, or censored, as in a medical study or clinical trial. Tanner illustrates the censored-data problem using the Stanford Heart Transplant data. Mixture models are also handled by these algorithms, since the identity of the component to which an observation belongs can be viewed as missing data.
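
To make the missing-data view of mixtures concrete, here is a minimal sketch of my own (not from the book) of EM for a two-component normal mixture with known unit variances: the E-step fills in the "missing" component labels probabilistically, and the M-step re-estimates the mixing weight and the means from those responsibilities. The data and starting values are made up.

```python
# Hypothetical sketch: EM for a two-component normal mixture with unit variances.
# The component labels act as missing data: the E-step computes responsibilities,
# the M-step updates the parameters from them.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 150), rng.normal(3, 1, 100)])  # made-up data

pi, mu = 0.5, np.array([-1.0, 1.0])  # starting values

for _ in range(50):
    # E-step: responsibility of component 1 for each observation
    p0 = (1 - pi) * norm.pdf(x, mu[0], 1.0)
    p1 = pi * norm.pdf(x, mu[1], 1.0)
    r = p1 / (p0 + p1)
    # M-step: update mixing weight and component means
    pi = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])

print(pi, mu)  # roughly 0.4, with means near -2 and 3
```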

Tanner demonstrates a wide variety of techniques for handling many important problems, and he illustrates them on real data. It is nice to have all of this compactly written in just 200 pages!


