Machine Learning
Reinforcement Learning
Author: Richard S. Sutton / Andrew G. Barto. The MIT Press, 1998-03
Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
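The temporal-difference methods of Part II can be sampled in a few lines. The chain MDP, learning rates, and ε-greedy settings below are my own toy illustration, not taken from the book — a minimal tabular Q-learning sketch:

```python
import random

random.seed(0)

# Toy chain MDP: states 0..4, start at 2; action -1 moves left, +1 right.
# State 0 is terminal with reward 0, state 4 terminal with reward 1.
N_STATES = 5
ALPHA, GAMMA, EPS = 0.1, 1.0, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

def step(s, a):
    s2 = s + a
    if s2 == 0:
        return s2, 0.0, True
    if s2 == N_STATES - 1:
        return s2, 1.0, True
    return s2, 0.0, False

for _ in range(2000):
    s, done = 2, False
    while not done:
        if random.random() < EPS:                      # explore
            a = random.choice((-1, 1))
        else:                                          # exploit
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Interior states should come to prefer moving right, toward the reward.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in (1, 2, 3)}
```

The update rule is the standard Q-learning bootstrap: move Q(s, a) toward the observed reward plus the discounted value of the best next action.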
Vision
Author: David Marr. The MIT Press, 2010-07
David Marr's posthumously published Vision (1982) influenced a generation of brain and cognitive scientists, inspiring many to enter the field. In Vision, Marr describes a general framework for understanding visual perception and touches on broader questions about how the brain and its functions can be studied and understood. Researchers from a range of brain and cognitive sciences have long valued Marr's creativity, intellectual power, and ability to integrate insights and data from neuroscience, psychology, and computation. This MIT Press edition makes Marr's influential work available to a new generation of students and scientists. In Marr's framework, the process of vision constructs a set of representations, starting from a description of the input image and culminating with a description of three-dimensional objects in the surrounding environment. A central theme, and one that has had far-reaching influence in both neuroscience and cognitive science, is the notion of different levels of analysis--in Marr's framework, the computational level, the algorithmic level, and the hardware implementation level. Now, thirty years later, the main problems that occupied Marr remain fundamental open problems in the study of perception. Vision provides inspiration for the continuing efforts to integrate knowledge from cognition and computation to understand vision and the brain.
Semi-Supervised Learning
Author: Olivier Chapelle. The MIT Press, 2006-09
In the field of machine learning, semi-supervised learning (SSL) occupies the middle ground between supervised learning (in which all training examples are labeled) and unsupervised learning (in which no labeled data are given). Interest in SSL has increased in recent years, particularly because of application domains in which unlabeled data are plentiful, such as images, text, and bioinformatics. This first comprehensive overview of SSL presents state-of-the-art algorithms, a taxonomy of the field, selected applications, benchmark experiments, and perspectives on ongoing and future research.

Semi-Supervised Learning first presents the key assumptions and ideas underlying the field: smoothness, cluster or low-density separation, manifold structure, and transduction. The core of the book is the presentation of SSL methods, organized according to algorithmic strategies. After an examination of generative models, the book describes algorithms that implement the low-density separation assumption, graph-based methods, and algorithms that perform two-step learning. The book then discusses SSL applications and offers guidelines for SSL practitioners by analyzing the results of extensive benchmark experiments. Finally, the book looks at interesting directions for SSL research. The book closes with a discussion of the relationship between semi-supervised learning and transduction.

Olivier Chapelle and Alexander Zien are Research Scientists and Bernhard Schölkopf is Professor and Director at the Max Planck Institute for Biological Cybernetics in Tübingen. Schölkopf is coauthor of Learning with Kernels (MIT Press, 2002) and is a coeditor of Advances in Kernel Methods: Support Vector Learning (1998), Advances in Large-Margin Classifiers (2000), and Kernel Methods in Computational Biology (2004), all published by The MIT Press.
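The cluster assumption mentioned above can be illustrated with a greedy self-training loop — not one of the book's benchmark algorithms, just my own minimal sketch on toy 1-D data with a nearest-neighbor rule:

```python
# Two well-separated 1-D clusters with one labeled seed each; the cluster
# assumption says points in the same dense region share a label.
labeled = {0.0: "A", 10.0: "B"}
unlabeled = [0.5, 1.0, 1.5, 8.5, 9.0, 9.5]

def nearest_label(u):
    # Distance to, and label of, the closest currently-labeled point.
    return min((abs(u - v), lab) for v, lab in labeled.items())

# Greedy self-training: always label the most confident (closest) point,
# then treat it as labeled for subsequent rounds.
while unlabeled:
    u = min(unlabeled, key=lambda p: nearest_label(p)[0])
    labeled[u] = nearest_label(u)[1]
    unlabeled.remove(u)
```

Each newly labeled point extends its cluster's reach, so the labels propagate outward through dense regions rather than jumping across the low-density gap.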
Unsupervised Learning
A Bradford Book, 1999-06
Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.
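One concrete instance of teacher-free extraction of statistical structure is Oja's rule, a classic unsupervised algorithm of the kind this volume collects; the toy data and constants below are my own illustration:

```python
import random, math

random.seed(1)

# Oja's rule: a single linear neuron whose Hebbian update, with a
# normalizing decay term, converges to the first principal component
# of its input distribution.
ETA = 0.01
w = [1.0, 0.0]

for _ in range(5000):
    # Inputs concentrated along the (1, 1) direction plus small noise.
    t = random.gauss(0.0, 1.0)
    x = [t + random.gauss(0.0, 0.1), t + random.gauss(0.0, 0.1)]
    y = w[0] * x[0] + w[1] * x[1]                  # neuron output
    w = [w[i] + ETA * y * (x[i] - y * w[i]) for i in range(2)]

norm = math.hypot(w[0], w[1])
direction = [w[0] / norm, w[1] / norm]
```

The weight vector settles near the unit vector along (1, 1) — the dominant direction of variance — without any target signal ever being supplied.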
Data Science for Business
Author: Foster Provost / Tom Fawcett. O'Reilly Media, 2013-08
Review

"A must-read resource for anyone who is serious about embracing the opportunity of big data."
-- Craig Vaughan, Global Vice President at SAP

"This book goes beyond data analytics 101. It's the essential guide for those of us (all of us?) whose businesses are built on the ubiquity of data opportunities and the new mandate for data-driven decision-making."
-- Tom Phillips, CEO of Media6Degrees and Former Head of Google Search and Analytics

"Data is the foundation of new waves of productivity growth, innovation, and richer customer insight. Only recently viewed broadly as a source of competitive advantage, dealing well with data is rapidly becoming table stakes to stay in the game. The authors' deep applied experience makes this a must-read -- a window into your competitor's strategy."
-- Alan Murray, Serial Entrepreneur; Partner at Coriolis Ventures

"This timely book says out loud what has finally become apparent: in the modern world, Data is Business, and you can no longer think business without thinking data. Read this book and you will understand the Science behind thinking data."
-- Ron Bekkerman, Chief Data Officer at Carmel Ventures

"A great book for business managers who lead or interact with data scientists, who wish to better understand the principles and algorithms available without the technical details of single-disciplinary books."
-- Ronny Kohavi, Partner Architect at Microsoft Online Services Division
About the Author
Foster Provost is Professor and NEC Faculty Fellow at the NYU Stern School of Business where he teaches in the MBA, Business Analytics, and Data Science programs. His award-winning research is read and cited broadly. Prof. Provost has co-founded several successful companies focusing on data science for marketing.
Tom Fawcett holds a Ph.D. in machine learning and has worked in industry R&D for more than two decades for companies such as GTE Laboratories, NYNEX/Verizon Labs, and HP Labs. His published work has become standard reading in data science.
Statistical Decision Theory and Bayesian Analysis
Author: James O. Berger. Springer, 1993-03
In this new edition the author has added substantial material on Bayesian analysis, including lengthy new sections on such important topics as empirical and hierarchical Bayes analysis, Bayesian calculation, Bayesian communication, and group decision making. With these changes, the book can be used as a self-contained introduction to Bayesian analysis. In addition, much of the decision-theoretic portion of the text was updated, including new sections covering such modern topics as minimax multivariate (Stein) estimation.
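The Bayesian updating at the heart of the book can be shown with a one-line conjugate example; the numbers below are my own illustration, not from Berger's text:

```python
# Conjugate Beta-Binomial updating: with a Beta(a, b) prior on a success
# probability and k successes observed in n trials, the posterior is
# Beta(a + k, b + n - k).
a, b = 1.0, 1.0          # uniform prior
k, n = 7, 10             # observed: 7 successes in 10 trials
a_post, b_post = a + k, b + n - k

mle = k / n                                   # classical point estimate, 0.7
posterior_mean = a_post / (a_post + b_post)   # 8/12, shrunk toward the prior mean 0.5
```

The posterior mean sits between the maximum-likelihood estimate and the prior mean — the same shrinkage phenomenon that motivates the decision-theoretic (Stein) estimation material in the text.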
Machine Learning
Author: Thomas Mitchell. McGraw-Hill Education, 1997-07
This book covers the field of machine learning, which is the study of algorithms that allow computer programs to automatically improve through experience. The book is intended to support upper level undergraduate and introductory level graduate courses in machine learning.
All of Statistics
Author: Larry Wasserman. Springer, 2004-10
Winner of the 2005 DeGroot Prize! This book is for people who want to learn probability and statistics quickly. It brings together many of the main ideas in modern statistics in one place. The book is suitable for students and researchers in statistics, computer science, data mining and machine learning. This book covers a much wider range of topics than a typical introductory text on mathematical statistics. It includes modern topics like nonparametric curve estimation, bootstrapping and classification, topics that are usually relegated to follow-up courses. The reader is assumed to know calculus and a little linear algebra. No previous knowledge of probability and statistics is required. The text can be used at the advanced undergraduate and graduate level.
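One of the "modern topics" named above, the nonparametric bootstrap, fits in a few lines; the data and percentile-interval choice here are my own sketch, not an example from the book:

```python
import random

random.seed(2)

# Nonparametric bootstrap: resample the data with replacement and use the
# spread of the resampled means to gauge the uncertainty of the sample mean.
data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]
n = len(data)
B = 2000

boot_means = sorted(
    sum(random.choice(data) for _ in range(n)) / n for _ in range(B)
)

# Percentile interval: the middle 95% of the bootstrap distribution.
lo, hi = boot_means[int(0.025 * B)], boot_means[int(0.975 * B)]
```

No distributional assumption is made about the data; the resampling distribution itself stands in for the unknown sampling distribution of the mean.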
Time Series Analysis by State Space Methods
Author: James Durbin / Siem Jan Koopman. Clarendon Press, 2001-06
This excellent text provides a comprehensive treatment of the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as made up of distinct components such as trend, seasonal, regression elements and disturbance terms, each of which is modelled separately. The techniques that emerge from this approach are very flexible and are capable of handling a much wider range of problems than the main analytical system currently in use for time series analysis, the Box-Jenkins ARIMA system. The book provides an excellent source for the development of practical courses on time series analysis.
Computational Statistics
Author: Geof H. Givens / Jennifer A. Hoeting. Wiley, 2012-11
Retaining the general organization and style of its predecessor, this new edition continues to serve as a comprehensive guide to modern and classical methods of statistical computing and computational statistics. Approaching the topic in three major parts--optimization, integration, and smoothing--the book includes an overview section in each chapter introduction and step-by-step implementation summaries to accompany the explanations of key methods; expanded coverage of Monte Carlo sampling and MCMC; a chapter on Alternative Viewpoints; a related Web site; new exercises; and more.
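The expanded MCMC coverage mentioned above centers on samplers like Metropolis; as a hedged sketch (target, proposal scale, and burn-in length are my own choices, not the book's), a random-walk Metropolis chain targeting a standard normal looks like this:

```python
import random, math

random.seed(3)

# Random-walk Metropolis targeting a standard normal density
# (only the log-density up to a constant is needed).
def log_target(x):
    return -0.5 * x * x

x, samples = 0.0, []
for _ in range(20000):
    prop = x + random.gauss(0.0, 1.0)            # symmetric proposal
    d = log_target(prop) - log_target(x)
    if d >= 0 or random.random() < math.exp(d):  # accept with prob min(1, e^d)
        x = prop
    samples.append(x)

samples = samples[5000:]                          # discard burn-in
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The retained draws have roughly the target's moments (mean 0, variance 1), despite being serially correlated — the basic trade-off MCMC diagnostics are designed to quantify.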
The Subjectivity of Scientists and the Bayesian Approach
Author: S. James Press / Judith M. Tanur. Wiley-Interscience, 2001-04
Comparing and contrasting the reality of subjectivity in the work of history's great scientists and the modern Bayesian approach to statistical analysis.

Scientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data -- and these subjective influences have often aided in humanity's greatest scientific achievements. The authors argue that subjectivity has not only played a significant role in the advancement of science, but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the classical twentieth-century methods that have traditionally been taught. To accomplish this goal, the authors examine the lives and work of history's great scientists and show that even the most successful have sometimes misrepresented findings or been influenced by their own preconceived notions of religion, metaphysics, and the occult, or the personal beliefs of their mentors. Contrary to popular belief, our greatest scientific thinkers approached their data with a combination of subjectivity and empiricism, and thus informally achieved what is more formally accomplished by the modern Bayesian approach to data analysis. Yet we are still taught that science is purely objective. This innovative book dispels that myth using historical accounts and biographical sketches of more than a dozen great scientists, including Aristotle, Galileo Galilei, Johannes Kepler, William Harvey, Sir Isaac Newton, Antoine Lavoisier, Alexander von Humboldt, Michael Faraday, Charles Darwin, Louis Pasteur, Gregor Mendel, Sigmund Freud, Marie Curie, Robert Millikan, Albert Einstein, Sir Cyril Burt, and Margaret Mead.
Also included is a detailed treatment of the modern Bayesian approach to data analysis. Up-to-date references to the Bayesian theoretical and applied literature, as well as reference lists of the primary sources of the principal works of all the scientists discussed, round out this comprehensive treatment of the subject. Readers will benefit from this cogent and enlightening view of the history of subjectivity in science and the authors' alternative vision of how the Bayesian approach should be used to further the cause of science and learning well into the twenty-first century.
Fundamentals of Kalman Filtering
Author: Paul Zarchan / Howard Musoff. AIAA (American Institute of Aeronautics and Astronautics), 2009-09
This is a practical guide to building Kalman filters that shows how the filtering equations can be applied to real-life problems. Numerous examples are presented in detail, showing the many ways in which Kalman filters can be designed. Computer code written in FORTRAN, MATLAB®, and True BASIC accompanies all of the examples so that the interested reader can verify concepts and explore issues beyond the scope of the text. In certain instances, the authors intentionally introduce mistakes to the initial filter designs to show the reader what happens when the filter is not working properly. The text carefully sets up a problem before the Kalman filter is actually formulated, to give the reader an intuitive feel for the problem being addressed. Because real problems are seldom presented as differential equations, and usually do not have unique solutions, the authors illustrate several different filtering approaches. Readers will gain experience in software and performance tradeoffs for determining the best filtering approach. The material that has been added to this edition is in response to questions and feedback from readers. The third edition has three new chapters on unusual topics related to Kalman filtering and other filtering techniques based on the method of least squares. Chapter 17 presents a type of filter known as the fixed or finite memory filter, which only remembers a finite number of measurements from the past. Chapter 18 shows how the chain rule from calculus can be used for filter initialization or to avoid filtering altogether. A realistic three-dimensional GPS example is used to illustrate the chain-rule method for filter initialization. Finally, Chapter 19 shows how a bank of linear sine-wave Kalman filters, each one tuned to a different sine-wave frequency, can be used to estimate the actual frequency of noisy sinusoidal measurements and obtain estimates of the states of the sine wave when the measurement noise is low.
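The book's examples ship in FORTRAN, MATLAB, and True BASIC; as a language-neutral taste of the filtering equations it builds up, here is my own minimal scalar Kalman filter estimating a constant from noisy measurements (the truth value and noise levels are illustrative assumptions):

```python
import random

random.seed(4)

# Scalar Kalman filter estimating a constant (truth = 5.0) from noisy
# measurements; Q is the process-noise variance, R the measurement-noise
# variance.
TRUTH, Q, R = 5.0, 0.0, 1.0
xhat, p = 0.0, 100.0      # initial estimate and its (large) variance

for _ in range(200):
    z = TRUTH + random.gauss(0.0, R ** 0.5)    # noisy measurement
    p = p + Q                                  # time update (a priori variance)
    k = p / (p + R)                            # Kalman gain
    xhat = xhat + k * (z - xhat)               # measurement update
    p = (1.0 - k) * p                          # a posteriori variance
```

With zero process noise the gain shrinks toward 1/n, so the filter reproduces the running average of the measurements and its stated variance p falls accordingly.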
Handbook of Latent Semantic Analysis
Author: Thomas K. Landauer / Danielle S. McNamara. Lawrence Erlbaum, 2007-02
The Handbook of Latent Semantic Analysis is the authoritative reference for the theory behind Latent Semantic Analysis (LSA), a burgeoning mathematical method used to analyze how words make meaning, with the goal of programming machines to understand human commands via natural language rather than strict programming protocols. The first book of its kind to deliver such a comprehensive analysis, this volume explores every area of the method and combines theoretical implications as well as practical matters of LSA. Readers are introduced to a powerful new way of understanding language phenomena, as well as innovative ways to perform tasks that depend on language or other complex systems. The Handbook clarifies misunderstandings and pre-formed objections to LSA, and provides examples of exciting new educational technologies made possible by LSA and similar techniques. It raises issues in philosophy, artificial intelligence, and linguistics, while describing how LSA has underwritten a range of educational technologies and information systems. Alternate approaches to language understanding are addressed and compared to LSA. This work is essential reading for anyone -- newcomers to this area and experts alike -- interested in how human language works or interested in computational analysis and uses of text. Educational technologists, cognitive scientists, philosophers, and information technologists in particular will consider this volume especially useful.
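LSA's core computation is a truncated SVD of a term-document matrix. As a hedged sketch (the tiny matrix and the power-iteration shortcut are my own, not from the Handbook), the leading factor can be recovered like this:

```python
import math

# Tiny term-document count matrix (rows = terms, columns = documents).
# Documents 0-1 are about cars, documents 2-3 about food.
A = [
    [3, 2, 0, 0],   # "car"
    [2, 3, 0, 0],   # "engine"
    [0, 0, 2, 1],   # "bread"
    [0, 0, 1, 2],   # "butter"
]

def matvec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

At = [list(col) for col in zip(*A)]   # transpose

# Power iteration on A^T A converges to the leading right singular
# vector of A -- the dominant "topic" over documents.
v = [1.0, 0.8, 0.6, 0.4]              # arbitrary positive start vector
for _ in range(100):
    v = matvec(At, matvec(A, v))
    norm = math.sqrt(sum(x * x for x in v))
    v = [x / norm for x in v]
```

The converged vector loads equally on the two car documents and (essentially) not at all on the food documents, so document similarity in this reduced space reflects topic rather than raw word overlap.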
Pattern Classification
Author: Richard O. Duda / Peter E. Hart. Wiley-Interscience, 2000-11
The first edition, published in 1973, has become a classic reference in the field. Now with the second edition, readers will find information on key new topics such as neural networks and statistical pattern recognition, the theory of machine learning, and the theory of invariances. Also included are worked examples, comparisons between different methods, extensive graphics, expanded exercises and computer project topics. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Introduction to the Theory of Neural Computation, Volume I
Author: John A. Hertz. Westview Press, 1991-06
This book comprehensively discusses the neural network models from a statistical mechanics perspective. It starts with one of the most influential developments in the theory of neural networks: Hopfield's analysis of networks with symmetric connections using the spin system approach and using the notion of an energy function from physics. Introduction to the Theory of Neural Computation uses these powerful tools to analyze neural networks as associative memory stores and solvers of optimization problems. A detailed analysis of multi-layer networks and recurrent networks follows. The book ends with chapters on unsupervised learning and a formal treatment of the relationship between statistical mechanics and neural networks. Little information is provided about applications and implementations, and the treatment of the material reflects the background of the authors as physicists. However, the book is essential for a solid understanding of the computational potential of neural networks. Introduction to the Theory of Neural Computation assumes that the reader is familiar with undergraduate level mathematics, but does not have any background in physics. All of the necessary tools are introduced in the book.
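Hopfield's associative memory, the starting point of the book, can be sketched in a few lines; the eight-unit pattern and update schedule below are my own toy choices:

```python
# Hopfield associative memory: store one bipolar pattern with the Hebbian
# rule, then recover it from a corrupted probe by threshold updates.
N = 8
pattern = [1, -1, 1, -1, 1, -1, 1, -1]

# Symmetric Hebbian weights with no self-connections, as in Hopfield's
# analysis; symmetry is what guarantees an energy function exists.
W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(N)]
     for i in range(N)]

# Probe: the stored pattern with two bits flipped.
state = pattern[:]
state[0], state[3] = -state[0], -state[3]

# Asynchronous updates; each flip can only lower the network's energy,
# so the dynamics descend into the stored pattern's basin of attraction.
for _ in range(5):
    for i in range(N):
        h = sum(W[i][j] * state[j] for j in range(N))
        state[i] = 1 if h >= 0 else -1
```

The corrupted bits are corrected on the first sweep, and the pattern is then a fixed point of the dynamics.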
Learning with Kernels
Author: Bernhard Schölkopf / Alexander J. Smola. The MIT Press, 2001
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs -- kernels -- for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics.

Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms and to understand and apply the powerful algorithms that have been developed over the last few years.
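The modularity described above -- fix a base algorithm, swap the kernel -- can be shown with a kernel perceptron; this is my own minimal illustration (the book itself centers on SVMs and related methods), using an RBF kernel on a 1-D problem that no linear rule can separate:

```python
import math

# RBF kernel: the swappable module. Replacing this function (e.g. with a
# polynomial kernel) changes the hypothesis class without touching the
# learning algorithm below.
def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * (x - z) ** 2)

# Class +1 sits in the middle of class -1: not linearly separable in 1-D.
X = [-3.0, -2.0, -0.5, 0.0, 0.5, 2.0, 3.0]
y = [-1, -1, 1, 1, 1, -1, -1]
alpha = [0] * len(X)      # per-example mistake counts (dual weights)

def predict(x):
    s = sum(alpha[i] * y[i] * rbf(X[i], x) for i in range(len(X)))
    return 1 if s >= 0 else -1

# Kernel perceptron: mistake-driven updates in the dual representation.
for _ in range(20):
    for i, (xi, yi) in enumerate(zip(X, y)):
        if predict(xi) != yi:
            alpha[i] += 1

train_acc = sum(predict(xi) == yi for xi, yi in zip(X, y)) / len(X)
```

The decision function never touches feature vectors explicitly; all the geometry flows through kernel evaluations, which is the "kernel trick" the book develops in full generality.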