Artificial Intelligence
Learning From Data
10.0 (7 ratings) Author: Yaser S. Abu-Mostafa / Malik Magdon-Ismail / Hsuan-Tien Lin AMLBook 2012 - 3
Machine learning allows computational systems to adaptively improve their performance with experience accumulated from the observed data. Its techniques are widely applied in engineering, science, finance, and commerce. This book is designed for a short course on machine learning. It is a short course, not a hurried course. From over a decade of teaching this material, we have distilled what we believe to be the core topics that every student of the subject should know. We chose the title 'Learning from Data' because it faithfully describes what the subject is about, and made it a point to cover the topics in a story-like fashion. Our hope is that the reader can learn all the fundamentals of the subject by reading the book cover to cover.

Learning from data has distinct theoretical and practical tracks. In this book, we balance the theoretical and the practical, the mathematical and the heuristic. Our criterion for inclusion is relevance. Theory that establishes the conceptual framework for learning is included, and so are heuristics that impact the performance of real learning systems.

Learning from data is a very dynamic field. Some of the hot techniques and theories at times become just fads, and others gain traction and become part of the field. What we have emphasized in this book are the necessary fundamentals that give any student of learning from data a solid foundation, and enable them to venture out and explore further techniques and theories, or perhaps to contribute their own.

The authors are professors at California Institute of Technology (Caltech), Rensselaer Polytechnic Institute (RPI), and National Taiwan University (NTU), where this book is the main text for their popular courses on machine learning. The authors also consult extensively with financial and commercial companies on machine learning applications, and have led winning teams in machine learning competitions.
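To make "learning from data" concrete, here is a minimal sketch of the perceptron learning algorithm, the kind of simple linear model a first course in this tradition starts from; the synthetic data and function names are illustrative, not taken from the book.

```python
# Minimal perceptron learning algorithm (PLA) on toy linearly separable data.
import random

def pla(points, labels, max_iters=1000):
    """Learn weights w so that sign(w . x) matches each label (+1 or -1)."""
    w = [0.0] * len(points[0])
    for _ in range(max_iters):
        misclassified = [(x, y) for x, y in zip(points, labels)
                         if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) != y]
        if not misclassified:
            return w  # the weights now separate the training data
        x, y = random.choice(misclassified)
        w = [wi + y * xi for wi, xi in zip(w, x)]  # PLA update: w <- w + y*x
    return w

# Toy data with a leading bias coordinate; label = sign(x1 - x2).
data = [(1.0, 2.0, 0.5), (1.0, 0.5, 2.0), (1.0, 3.0, 1.0), (1.0, 1.0, 3.0)]
labels = [1, -1, 1, -1]
print(pla(data, labels))
```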
The Computational Brain
Author: Patricia Churchland / Terrence J. Sejnowski The MIT Press 1992 - 6
How do groups of neurons interact to enable the organism to see, decide, and move appropriately? What are the principles whereby networks of neurons represent and compute? These are the central questions probed by The Computational Brain. Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field. The Computational Brain is the first unified and broadly accessible book to bring together computational concepts and behavioral data within a neurobiological framework.

Computer models constrained by neurobiological data can help reveal how networks of neurons subserve perception and behavior, how their physical interactions can yield global results in perception and behavior, and how their physical properties are used to code information and compute solutions. The Computational Brain focuses mainly on three domains: visual perception, learning and memory, and sensorimotor integration. Examples of recent computer models in these domains are discussed in detail, highlighting strengths and weaknesses, and extracting principles applicable to other domains. Churchland and Sejnowski show how both abstract models and neurobiologically realistic models can have useful roles in computational neuroscience, and they predict the coevolution of models and experiments at many levels of organization, from the neuron to the system.

The Computational Brain addresses a broad audience: neuroscientists, computer scientists, cognitive scientists, and philosophers. It is written for both the expert and the novice. A basic overview of neuroscience and computational theory is provided, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research. Technical terms are clearly explained in the text, and definitions are provided in an extensive glossary. The appendix contains a précis of neurobiological techniques.

Patricia S. Churchland is Professor of Philosophy at the University of California, San Diego, Adjunct Professor at the Salk Institute, and a MacArthur Fellow. Terrence J. Sejnowski is Professor of Biology at the University of California, San Diego, Professor at the Salk Institute, where he is Director of the Computational Neurobiology Laboratory, and an Investigator of the Howard Hughes Medical Institute.
Learning in Graphical Models (Adaptive Computation and Machine Learning)
Author: Michael I. Jordan (ed.) The MIT Press 1998 - 11
Graphical models, a marriage between probability theory and graph theory, provide a natural tool for dealing with two problems that occur throughout applied mathematics and engineering--uncertainty and complexity. In particular, they play an increasingly important role in the design and analysis of machine learning algorithms. Fundamental to the idea of a graphical model is the notion of modularity: a complex system is built by combining simpler parts. Probability theory serves as the glue whereby the parts are combined, ensuring that the system as a whole is consistent and providing ways to interface models to data. Graph theory provides both an intuitively appealing interface by which humans can model highly interacting sets of variables and a data structure that lends itself naturally to the design of efficient general-purpose algorithms.

This book presents an in-depth exploration of issues related to learning within the graphical model formalism. Four chapters are tutorial chapters--Robert Cowell on Inference for Bayesian Networks, David MacKay on Monte Carlo Methods, Michael I. Jordan et al. on Variational Methods, and David Heckerman on Learning with Bayesian Networks. The remaining chapters cover a wide range of topics of current research interest.
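The modularity described above can be sketched concretely: in a directed graphical model, the joint distribution is a product of local conditionals, one per node given its parents, and queries reduce to sums over that product. A minimal illustration using the classic rain/sprinkler/wet-grass network (the network and its numbers are a standard textbook toy, not drawn from this volume):

```python
# Modularity in a directed graphical model: the joint distribution is the
# product of local conditional tables, one per node given its parents.
from itertools import product

p_rain = {True: 0.2, False: 0.8}        # P(rain); numbers illustrative
p_sprinkler = {True: 0.1, False: 0.9}   # P(sprinkler), independent of rain here
p_wet = {                               # P(grass wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """The joint probability is just the product of the local pieces."""
    pw = p_wet[(rain, sprinkler)]
    return p_rain[rain] * p_sprinkler[sprinkler] * (pw if wet else 1 - pw)

# A query by brute-force marginalization: P(rain | grass wet).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(f"P(rain | grass wet) = {num / den:.3f}")
```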
Intention, Plans, and Practical Reason
Author: Michael E. Bratman Center for the Study of Language and Information 1987 - 3
What happens to our conception of mind and rational agency when we take seriously future-directed intentions and plans and their roles as inputs into further practical reasoning? The author's initial efforts in responding to this question resulted in a series of papers that he wrote during the early 1980s. In this book, Bratman develops further some of the main themes of these essays and also explores a variety of related ideas and issues. He develops a planning theory of intention. Intentions are treated as elements of partial plans of action. These plans play basic roles in practical reasoning, roles that support the organization of our activities both over time and socially. Bratman explores the impact of this approach on a wide range of issues, including the relation between intention and intentional action, and the distinction between intended and expected effects of what one intends.
Words, Thoughts, and Theories
Author: Alison Gopnik / Andrew N. Meltzoff The MIT Press 1998 - 7
Words, Thoughts, and Theories articulates and defends the "theory theory" of cognitive and semantic development, the idea that infants and young children, like scientists, learn about the world by forming and revising theories, a view of the origins of knowledge and meaning that has broad implications for cognitive science.

Gopnik and Meltzoff interweave philosophical arguments and empirical data from their own and others' research. Both the philosophy and the psychology, the arguments and the data, address the same fundamental epistemological question: How do we come to understand the world around us?

Recently, the theory theory has led to much interesting research. However, this is the first book to look at the theory in extensive detail and to systematically contrast it with other theories. It is also the first to apply the theory to infancy and early childhood, to use the theory to provide a framework for understanding semantic development, and to demonstrate that language acquisition influences theory change in children.

The authors show that children just beginning to talk are engaged in profound restructurings of several domains of knowledge. These restructurings are similar to theory changes in science, and they influence children's early semantic development, since children's cognitive concerns shape and motivate their use of very early words. But children also pay attention to the language they hear around them, and this too reshapes their cognition, causing them to reorganize their theories.
Parallel Distributed Processing, Vol. 1
Author: David E. Rumelhart / James L. McClelland A Bradford Book 1987 - 7
What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism that is challenging the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind. The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than through sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network. Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought.
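A toy rendition of the mechanism described here, with an illustrative hand-picked weight matrix: every unit updates its activation simultaneously from the weighted excitation and inhibition it receives, and the "knowledge" lives entirely in the weights.

```python
# Toy parallel distributed processing: all units update at once from the
# weighted input they receive; positive weights excite, negative inhibit.
import math

weights = [  # weights[i][j]: connection from unit j to unit i (illustrative)
    [0.0,  0.8, -0.4],
    [0.8,  0.0,  0.5],
    [-0.4, 0.5,  0.0],
]
act = [0.9, 0.1, 0.1]  # initial activations (an input pattern)

for step in range(20):
    net = [sum(w * a for w, a in zip(row, act)) for row in weights]
    act = [1 / (1 + math.exp(-n)) for n in net]  # logistic squashing, in parallel

print([round(a, 3) for a in act])  # the settled pattern of activation
```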
Parallel Distributed Processing, Vol. 2
Author: James L. McClelland / David E. Rumelhart The MIT Press 1987 - 7
The two volumes share a single publisher's description; see Volume 1 above.
Categorization and Naming in Children
Author: Ellen M. Markman A Bradford Book 1991 - 5
In this landmark work on early conceptual and lexical development, Ellen Markman explores the fascinating problem of how young children succeed at the task of inducing concepts. Backed by extensive experimental results, she challenges the fundamental assumptions of traditional theories of language acquisition and proposes that a set of constraints or principles of induction allows children to efficiently integrate knowledge and to induce information about new examples of familiar categories.

Ellen M. Markman is Professor of Psychology at Stanford University.
The Innocent Eye
Author: Nico Orlandi Oxford University Press 2014 - 8
Why does the world look to us as it does? Generally speaking, this question has received two types of answers in the cognitive sciences in the past fifty or so years. According to the first, the world looks to us the way it does because we construct it to look as it does. According to the second, the world looks as it does primarily because of how the world is. In The Innocent Eye, Nico Orlandi defends a position that aligns with this second, world-centered tradition, but that also respects some of the insights of constructivism. Orlandi develops an embedded understanding of visual processing according to which, while visual percepts are representational states, the states and structures that precede the production of percepts are not representations.
If we study the environmental contingencies in which vision occurs, and we properly distinguish functional states and features of the visual apparatus from representational states and features, we obtain an empirically more plausible, world-centered account. Orlandi shows that this account accords well with models of vision in perceptual psychology -- such as Natural Scene Statistics and Bayesian approaches to perception -- and outlines some of the ways in which it differs from recent 'enactive' approaches to vision. The main difference is that, although the embedded account recognizes the importance of movement for perception, it does not appeal to action to uncover the richness of visual stimulation.
The upshot is that constructive models of vision ascribe mental representations too liberally, ultimately misunderstanding the notion. Orlandi offers a proposal for what mental representations are that, following insights from Brentano, James, and a number of contemporary cognitive scientists, appeals to the notions of de-coupleability and absence to distinguish representations from mere tracking states.
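The Bayesian approaches to perception that Orlandi engages with treat a percept as a posterior that combines a prior over scene properties with the likelihood of noisy sensory data. A minimal sketch under Gaussian assumptions (all numbers illustrative, not drawn from the book):

```python
# Minimal Bayesian cue combination: a Gaussian prior over a scene property
# (say, depth) is combined with a noisy Gaussian measurement of it.
prior_mean, prior_var = 2.0, 1.0   # what scenes tend to be like (illustrative)
meas, meas_var = 3.5, 0.25         # the noisy sensory measurement (illustrative)

# For Gaussians the posterior is Gaussian; precisions (1/variance) add.
post_prec = 1 / prior_var + 1 / meas_var
post_mean = (prior_mean / prior_var + meas / meas_var) / post_prec
print(f"percept ~ N({post_mean:.2f}, {1 / post_prec:.2f})")  # pulled toward the data
```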
Theoretical Neuroscience
Author: Peter Dayan / Laurence F. Abbott The MIT Press 2005 - 9
Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory.

The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.
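As a taste of Part I's subject matter, here is a minimal sketch of a homogeneous Poisson spike train with a firing-rate estimate, a standard starting point for describing neural responses; the rate, duration, and time step are illustrative choices, not the book's.

```python
# Homogeneous Poisson spike train via per-bin Bernoulli draws, plus a simple
# firing-rate estimate (spike count divided by duration).
import random

rate, duration, dt = 20.0, 5.0, 0.001   # Hz, seconds, time step (illustrative)
spikes = [t * dt for t in range(int(duration / dt)) if random.random() < rate * dt]

est_rate = len(spikes) / duration
print(f"{len(spikes)} spikes -> estimated rate {est_rate:.1f} Hz (true rate {rate} Hz)")
```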
Spiking Neuron Models
Author: Wulfram Gerstner Cambridge University Press 2002 - 8
Neurons in the brain communicate by short electrical pulses, the so-called action potentials or spikes. How can we understand the process of spike generation? How can we understand information transmission by neurons? What happens if thousands of neurons are coupled together in a seemingly random network? How does the network connectivity determine the activity patterns? And, vice versa, how does the spike activity influence the connectivity pattern? These questions are addressed in this 2002 introduction to spiking neurons aimed at those taking courses in computational neuroscience, theoretical biology, biophysics, or neural networks. The approach will suit students of physics, mathematics, or computer science; it will also be useful for biologists who are interested in mathematical modelling. The text is enhanced by many worked examples and illustrations. There are no mathematical prerequisites beyond what the audience would meet as undergraduates: more advanced techniques are introduced in an elementary, concrete fashion when needed.
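The simplest answer to the spike-generation question is the leaky integrate-and-fire model, a model family central to this book: the membrane potential integrates its input, leaks toward rest, and fires and resets on crossing a threshold. A minimal Euler-integration sketch with illustrative parameters:

```python
# Leaky integrate-and-fire neuron: the membrane potential v integrates a
# constant drive, leaks toward rest, and spikes (then resets) at threshold.
tau, v_rest, v_reset, v_thresh = 0.020, -65.0, -70.0, -50.0  # s, mV, mV, mV
dt, input_mv = 0.0001, 20.0   # time step (s) and constant drive (mV); illustrative

v, spike_times = v_rest, []
for step in range(int(0.5 / dt)):             # simulate 0.5 s
    v += dt / tau * (v_rest - v + input_mv)   # Euler step of dv/dt
    if v >= v_thresh:
        spike_times.append(step * dt)
        v = v_reset                           # fire and reset

print(f"{len(spike_times)} spikes in 0.5 s")
```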
Data Mining, Fourth Edition: Practical Machine Learning Tools and Techniques (Morgan Kaufmann Series in Data Management Systems)
Author: Ian H. Witten / Eibe Frank / Mark A. Hall / Christopher J. Pal Morgan Kaufmann 2016
Data Mining: Practical Machine Learning Tools and Techniques, Fourth Edition, offers a thorough grounding in machine learning concepts, along with practical advice on applying these tools and techniques in real-world data mining situations. This highly anticipated fourth edition of the most acclaimed work on data mining and machine learning teaches readers everything they need to know to get going, from preparing inputs, interpreting outputs, and evaluating results to the algorithmic methods at the heart of successful data mining approaches.
Extensive updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including substantial new chapters on probabilistic methods and on deep learning. Accompanying the book is a new version of the popular WEKA machine learning software from the University of Waikato. Authors Witten, Frank, Hall, and Pal include today's techniques coupled with the methods at the leading edge of contemporary research.
- Provides a thorough grounding in machine learning concepts, as well as practical advice on applying the tools and techniques to data mining projects
- Presents concrete tips and techniques for performance improvement that work by transforming the input or output in machine learning methods
- Includes a downloadable WEKA software toolkit, a comprehensive collection of machine learning algorithms for data mining tasks in an easy-to-use interactive interface
- Includes open-access online courses that introduce practical applications of the material in the book
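The workflow the book teaches (prepare inputs, train a model, interpret and evaluate outputs) has a common shape regardless of toolkit. WEKA itself is a Java application with a graphical interface, so the sketch below uses scikit-learn purely as a language-consistent stand-in for the same train-and-evaluate loop; the dataset and model choice are illustrative.

```python
# The prepare-inputs / train / evaluate loop, sketched with scikit-learn as a
# stand-in for WEKA (whose J48 decision tree plays a role like the one here).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                      # prepare inputs
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier().fit(X_tr, y_tr)       # train
print(f"held-out accuracy: {accuracy_score(y_te, model.predict(X_te)):.2f}")
```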
Principles of Statistical Inference
Author: D. R. Cox Cambridge University Press 2006 - 8
In this definitive book, D. R. Cox gives a comprehensive and balanced appraisal of statistical inference. He develops the key concepts, describing and comparing the main ideas and controversies over foundational issues that have been keenly argued for more than two hundred years. Continuing a sixty-year career of major contributions to statistical thought, Cox is better placed than anyone to give this much-needed account of the field. An appendix gives a more personal assessment of the merits of different ideas. The content ranges from the traditional to the contemporary. While specific applications are not treated, the book is strongly motivated by applications across the sciences and associated technologies. The mathematics is kept as elementary as feasible, though previous knowledge of statistics is assumed. The book will be valued by every user or student of statistics who is serious about understanding the uncertainty inherent in conclusions from statistical analyses.
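One toy calculation can give the flavor of those foundational controversies: the same binomial data summarized by a frequentist confidence interval and by a Bayesian credible interval. This is an illustration of the kind of contrast at issue, not an example from the book; the data and prior are made up.

```python
# The same data, two inferential summaries: a Wald confidence interval and a
# Beta-posterior credible interval for a binomial proportion. Illustrative only.
from math import sqrt
from scipy.stats import beta

k, n = 7, 20                       # 7 successes in 20 trials (made-up data)
p_hat = k / n

# Frequentist: Wald 95% confidence interval.
se = sqrt(p_hat * (1 - p_hat) / n)
print(f"95% CI:       ({p_hat - 1.96 * se:.3f}, {p_hat + 1.96 * se:.3f})")

# Bayesian: uniform Beta(1, 1) prior -> Beta(k + 1, n - k + 1) posterior.
lo, hi = beta.ppf([0.025, 0.975], k + 1, n - k + 1)
print(f"95% credible: ({lo:.3f}, {hi:.3f})")
```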