Systems That Learn - 2nd Edition: An Introduction to Learning Theory (Learning, Development, and Conceptual Change)

List Price: $50.00
Your Price: $50.00
Rating: 4 stars
Summary: Insightful introduction to computational learning theory

Review: Concentrating on the mathematical and formal underpinnings of learning theory, this book gives a very interesting and general overview of the subject. Basing their discussion on the learning of "texts" and the learning of "functions", the authors address the main issues in the formal modeling of empirical inquiry. The models, or "paradigms", they construct are built on five concepts that they consider central to empirical inquiry: (1) a theoretically possible reality; (2) intelligible hypotheses; (3) the data available about a given reality; (4) a model of a scientist; (5) the successful behavior of a scientist investigating a possible reality. The scientist is thought of as playing a game with Nature, with the class of possible realities known to both at the outset. Nature selects a member of this class, initially unknown to the scientist, and then provides a series of clues (data), from which the scientist forms hypotheses. The scientist wins the game if these hypotheses become stable and accurate. The more constrained Nature's choice of actual world, the easier the game is to win.
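The scientist-vs-Nature game described above can be made concrete with a toy sketch (my own illustration, not an example from the book). The class of possible realities is the hypothetical family L_n = {0, 1, ..., n}; a "text" for L_n is any enumeration of its members; and the scientist simply conjectures the largest element seen so far. Its conjectures stabilize on the correct index once the maximum element has appeared, which is exactly the "stable and accurate" winning condition.

```python
# Toy sketch of Gold-style identification in the limit.
# Realities (hypothetical): L_n = {0, 1, ..., n}. A text is an
# enumeration of the members of the chosen language, possibly
# with repetitions.

def scientist(seen):
    """Conjecture index n, i.e. the language {0, ..., n}."""
    return max(seen)

def run_game(text):
    """Feed the text datum by datum; return the conjecture history."""
    conjectures = []
    seen = []
    for datum in text:
        seen.append(datum)
        conjectures.append(scientist(seen))
    return conjectures

# Nature picks L_5; the text enumerates its members.
guesses = run_game([2, 0, 5, 5, 1, 3, 4, 2])
print(guesses)           # conjectures converge once 5 appears
print(guesses[-1] == 5)  # stable and accurate: the scientist wins
```

The scientist wins against every L_n because each finite language eventually reveals its maximum; a richer class of realities would demand a cleverer strategy.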
After a brief philosophical discussion of the paradigms and a review of the theory of computation, the authors begin their study by concentrating on the identification of languages and the identification of functions. Both are treated as "theoretically possible realities", and in the case of languages, it is "texts" that are to be identified by scientists. Languages that can be so identified are called "identifiable", and the authors prove a theorem characterizing how scientists identify languages in terms of finite strings of text. Success in function identification is cast as a generalization of success on texts. Here the class of possible realities is the collection of total recursive functions, and hypotheses are programs that compute these functions. The authors show, however, that a scientist who identifies the entire class of recursive functions cannot be computable. Very interesting in this discussion is the treatment of "parametrized scientists", i.e. scientists who can incorporate background knowledge from other scientists.
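For function identification over an effectively enumerable class, the classical tactic is identification by enumeration: conjecture the first program in the enumeration consistent with the graph data seen so far. The sketch below is my own toy version over a hypothetical class f_c(x) = c·x for c in 0..9 (the book's setting, the full class of total recursive functions, is precisely where this tactic provably cannot be carried out by a computable scientist).

```python
# Toy sketch of identification by enumeration for function learning.
# Hypothetical class of realities: f_c(x) = c * x, c = 0..9,
# enumerated as a list of candidate "programs".

CANDIDATES = [lambda x, c=c: c * x for c in range(10)]

def enumeration_scientist(graph_points):
    """Conjecture the index of the first candidate consistent with the data."""
    for index, f in enumerate(CANDIDATES):
        if all(f(x) == y for x, y in graph_points):
            return index
    return None  # no candidate fits (cannot happen within this class)

data = [(0, 0), (1, 3), (2, 6)]  # graph of the target f_3
conjectures = [enumeration_scientist(data[:i + 1]) for i in range(len(data))]
print(conjectures)  # stabilizes once (1, 3) rules out smaller coefficients
```

Because the class is finite and every wrong candidate is eventually contradicted by some graph point, the conjectures must converge to the correct index.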
These considerations view the scientist as a certain fixed entity. The authors also consider cases where alternative notions of scientist occur while the other paradigms are held fixed. The abilities of computable scientists who deploy different inductive "strategies" are studied, with the goal of finding out whether a scientist following a particular strategy can effectively identify languages or functions. Recognizing that the conjectures proposed successively by a particular scientist may bear no relation to one another, the authors then discuss strategies that result from imposing relations between the conjectures. One of these, called "conservative", insists that a conjecture that generates all the data observed to date should never be abandoned. Also discussed are "generalization strategies", which require scientists to improve upon their successive conjectures. One example is the "strong-monotonic" strategy, which forbids the revision of a hypothesis if it made a mistake in identification. Another is the "weak-monotonic" strategy, which allows the rejection of parts of a hypothesis when data arrive that the hypothesis cannot account for. Still another is the "monotonic" strategy, which allows the correction of mistaken hypotheses but does not allow hypotheses that contradict correct classifications. The authors show that monotonicity does not imply weak-monotonicity, and vice versa. Also discussed are "specialization" strategies, which are "dual" to the three generalization strategies and involve the pruning of hypotheses in order to obtain convergence.
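The conservative constraint is easy to illustrate on the toy class L_n = {0, ..., n} from earlier (again my own sketch, not the book's): the scientist revises its conjecture only when a datum falls outside the currently conjectured language, i.e. only when the conjecture is actually refuted.

```python
# Toy sketch of a "conservative" strategy on the hypothetical class
# L_n = {0, ..., n}: never abandon a conjecture that still generates
# all the data observed to date.

def conservative_scientist(text):
    """Return the conjecture history over a text."""
    conjecture = None
    history = []
    for datum in text:
        if conjecture is None or datum > conjecture:
            conjecture = datum     # current conjecture refuted; revise
        history.append(conjecture) # otherwise keep the old conjecture
    return history

print(conservative_scientist([1, 0, 4, 2, 4]))  # [1, 1, 4, 4, 4]
```

Note that on this particular class the strategy's conjectured languages only ever grow, so it happens to behave like a generalization strategy as well; the interest of the book's results is in classes where such constraints genuinely limit what can be identified.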
The authors also address the case where the conception of a scientist is held fixed but the criteria for scientific success are varied. This study, in the opinion of this reviewer, more accurately reflects the real behavior of scientists, who typically use very liberal notions of accuracy. For example, anomalies in data may be tolerated pending alteration of the hypotheses in the future. Such anomalies in fact serve to drive further research, with the goal of finding hypotheses or theories that resolve them. It is typically the case, if not always, that hypotheses are treated as approximate explanations, so one would expect the authors' discussion to revolve around the inference of approximations. The authors do, however, give an interesting twist to this discussion: they attempt to find criteria for success that permit an infinite number of anomalies in the final explanations, which they argue better characterizes explanation. A series of identification criteria is outlined, each involving measure-theoretic notions of "asymptotic agreement": a scientist presented with a function must arrive at an explanation that agrees asymptotically with the function up to a prespecified amount.
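The simplest of these relaxed success criteria, allowing a bounded number of anomalies, can be caricatured as follows (my own finite-window illustration; the book's criteria quantify over all natural numbers, and the asymptotic versions use density rather than a raw count):

```python
# Toy sketch of an anomaly-tolerant success criterion: a conjecture g
# "explains" the target f up to `anomalies` errors if they disagree on
# at most that many inputs. Checked here on a finite window only.

def agrees_up_to(f, g, anomalies, window=100):
    disagreements = sum(1 for x in range(window) if f(x) != g(x))
    return disagreements <= anomalies

target = lambda x: x * x
conjecture = lambda x: x * x if x != 7 else 0  # a single anomaly at x = 7

print(agrees_up_to(target, conjecture, anomalies=0))  # fails strict criterion
print(agrees_up_to(target, conjecture, anomalies=1))  # passes the relaxed one
```

Widening the success criterion in this way strictly enlarges the classes of functions that some scientist can identify, which is what makes the hierarchy of criteria interesting.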
Also more realistic, given its emphasis on what happens in actual scientific investigation, is the authors' discussion of alternative conceptions of available data. Noting that data may be missing or erroneous and are presented to a scientist in a definite order, the authors study how to find intelligible hypotheses in the presence of error. Their results delineate the extent to which inaccurate data can impede the learning process, with three kinds of "inaccurate" data considered: "incomplete", "noisy", and "imperfect".
Other topics discussed include the modeling of empirical inquiry when many scientists collaborate, and probabilistic learning. For team identification of functions, several interesting results are proven, though the authors admit that their results do not apply to the (more realistic) scenario in which the hypotheses of the individual scientists influence one another. Also discussed are "oracle" scientists, who draw on information of a noncomputable nature, or "information oracles", in order to perform identifications. Judged by how much information they can give a scientist, oracles range from "omniscient" to "trivial", and it is thus of interest to determine how much information an oracle must supply for scientists to identify functions. The authors present various results on this topic, showing how much is gained by allowing oracle scientists to make additional queries.
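The team-success criterion itself is simple to state: a team identifies the target if at least one member's conjectures stabilize on a correct hypothesis. A toy sketch on the L_n class (again my own, with hypothetical member strategies) makes the point that a team can succeed even when some members individually fail:

```python
# Toy sketch of team identification on the hypothetical class
# L_n = {0, ..., n}: the team succeeds if at least one member's
# final conjecture is correct.

def max_scientist(seen):
    return max(seen)

def stubborn_scientist(seen):
    return seen[0]  # never revises its first conjecture

def team_identifies(team, text, truth):
    final = None
    for i in range(1, len(text) + 1):
        final = [member(text[:i]) for member in team]
    return any(guess == truth for guess in final)

text = [2, 5, 1, 5]
print(team_identifies([max_scientist, stubborn_scientist], text, 5))  # True
print(team_identifies([stubborn_scientist], text, 5))                 # False
```

The book's results concern how much genuine power teams add over single scientists; this sketch only fixes the success criterion being quantified over.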