Artificial Neural Networks
Neural Networks 豆瓣
Author: Raul Rojas Springer 1996 - 1
Neural networks are a computing paradigm that is attracting increasing attention among computer scientists. In this book, theoretical laws and models previously scattered in the literature are brought together into a general theory of artificial neural nets. Always with a view to biology, and starting with the simplest nets, it is shown how the properties of models change when more general computing elements and net topologies are introduced. Each chapter contains examples, numerous illustrations, and a bibliography. The book is aimed at readers who seek an overview of the field or who wish to deepen their knowledge. It is suitable as a basis for university courses in neurocomputing.
Bayesian Learning for Neural Networks 豆瓣
Author: Radford M. Neal Springer 1996 - 8
Artificial "neural networks" are widely used as flexible models for classification and regression applications, but questions remain about how the power of these models can be safely exploited when training data is limited. This book demonstrates how Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional training methods. Insight into the nature of these complex Bayesian models is provided by a theoretical investigation of the priors over functions that underlie them. A practical implementation of Bayesian neural network learning using Markov chain Monte Carlo methods is also described, and software for it is freely available over the Internet. Presupposing only basic knowledge of probability and statistics, this book should be of interest to researchers in statistics, engineering, and artificial intelligence.
Deep Learning 豆瓣 Goodreads
9.7 (7 ratings) Author: Ian Goodfellow / Yoshua Bengio / Aaron Courville The MIT Press 2016 - 11
"Written by three experts in the field, Deep Learning is the only comprehensive book on the subject." -- Elon Musk, co-chair of OpenAI; co-founder and CEO of Tesla and SpaceX
Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning.
The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models.
Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
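The "hierarchy of concepts" idea the blurb describes can be made concrete with a toy model. Below is a minimal sketch (hypothetical example, not code from the book) of a two-layer network learning XOR, a function no single linear layer can represent; the hidden layer learns simple intermediate features that the output layer composes into the target concept:

```python
# Minimal sketch: a 2-layer network trained by gradient descent on XOR,
# illustrating how layers compose simple functions into a harder one.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # hidden layer: simple features
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # output layer: combines them

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # forward: layer 1
    p = sigmoid(h @ W2 + b2)            # forward: layer 2
    g_p = (p - y) * p * (1 - p)         # backprop of squared error
    g_h = g_p @ W2.T
    g_z1 = g_h * h * (1 - h)
    W2 -= lr * (h.T @ g_p); b2 -= lr * g_p.sum(0)
    W1 -= lr * (X.T @ g_z1); b1 -= lr * g_z1.sum(0)

print(p.ravel().round(2))   # should approach [0, 1, 1, 0]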
Principles of Neural Science 豆瓣
Author: Eric R. Kandel McGraw-Hill Medical 2012 - 10
This title, now updated: the definitive neuroscience resource, from Eric R. Kandel, MD (winner of the Nobel Prize in 2000); James H. Schwartz, MD, PhD; Thomas M. Jessell, PhD; Steven A. Siegelbaum, PhD; and A. J. Hudspeth, PhD. 900 full-color illustrations.
Deciphering the link between the human brain and behavior has always been one of the most intriguing, and often challenging, aspects of scientific endeavor. The sequencing of the human genome, and advances in molecular biology, have illuminated the pathogenesis of many neurological diseases and have propelled our knowledge of how the brain controls behavior. To grasp the wider implications of these developments and gain a fundamental understanding of this dynamic, fast-moving field, Principles of Neural Science stands alone as the most authoritative and indispensable resource of its kind. In this classic text, prominent researchers in the field expertly survey the entire spectrum of neural science, giving an up-to-date, unparalleled view of the discipline for anyone who studies brain and mind. Here, in one remarkable volume, is the current state of neural science knowledge, ranging from molecules and cells, to anatomic structures and systems, to the senses and cognitive functions, all supported by more than 900 precise, full-color illustrations. In addition to clarifying complex topics, the book also benefits from a cohesive organization, beginning with an insightful overview of the interrelationships between the brain, nervous system, genes, and behavior. Principles of Neural Science then proceeds with an in-depth examination of the molecular and cellular biology of nerve cells, synaptic transmission, and the neural basis of cognition. The remaining sections illuminate how cells, molecules, and systems give us sight, hearing, touch, movement, thought, learning, memories, and emotions. The new fifth edition is thoroughly updated to reflect the tremendous amount of research, and the very latest clinical perspectives, that have significantly transformed the field within the last decade. Ultimately, Principles of Neural Science affirms that all behavior is an expression of neural activity, and that the future of clinical neurology and psychiatry hinges on the progress of neural science. Far exceeding the scope and scholarship of similar texts, this unmatched guide offers a commanding, scientifically rigorous perspective on the molecular mechanisms of neural function and disease, one that you'll continually rely on to advance your comprehension of brain, mind, and behavior.
Features: the cornerstone reference in the field of neuroscience that explains how the nerves, brain, and mind function; clear emphasis on how behavior can be examined through the electrical activity of both individual neurons and systems of nerve cells; current focus on molecular biology as a tool for probing the pathogenesis of many neurological diseases, including muscular dystrophy, Huntington disease, and certain forms of Alzheimer's disease; more than 900 engaging full-color illustrations, including line drawings, radiographs, micrographs, and medical photographs, that clarify often-complex neuroscience concepts; and an outstanding section on the development and emergence of behavior, including important coverage of brain damage repair, the sexual differentiation of the nervous system, and the aging brain.
Features: more detailed discussions of cognitive and behavioral functions, and an expanded review of cognitive processes; a focus on the increasing importance of computational neural science, which enhances our ability to record the brain's electrical activity and study cognitive processes more directly; chapter-opening Key Concepts that provide a convenient, study-enhancing introduction to the material covered in each chapter; selected readings and full reference citations at the close of each chapter to facilitate further study and research; and helpful appendices highlighting basic circuit theory; the neurological examination of the patient; circulation of the brain; the blood-brain barrier, choroid plexus, and cerebrospinal fluid; neural networks; and theoretical approaches to neuroscience.
TensorFlow for Machine Intelligence: A Hands-On Introduction to Learning Algorithms 豆瓣
Author: Sam Abrahams / Danijar Hafner Bleeding Edge Press 2016 - 11
TensorFlow, a popular library for machine learning, embraces the innovation and community-engagement of open source, but has the support, guidance, and stability of a large corporation. Because of its multitude of strengths, TensorFlow is appropriate for individuals and businesses ranging from startups to companies as large as, well, Google. TensorFlow is currently being used for natural language processing, artificial intelligence, computer vision, and predictive analytics. TensorFlow, open sourced to the public by Google in November 2015, was made to be flexible, efficient, extensible, and portable. Computers of any shape and size can run it, from smartphones all the way up to huge computing clusters. This book is for anyone who knows a little machine learning (or not) and who has heard about TensorFlow, but found the documentation too daunting to approach. It introduces the TensorFlow framework and the underlying machine learning concepts that are important to harness machine intelligence. After reading this book, you should have a deep understanding of the core TensorFlow API.
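The book itself targets the early graph-and-session API of 2016; as a flavour of the framework's core idea, differentiable computation over tensors, here is a minimal sketch using the current TensorFlow 2.x eager API (a hypothetical toy example, not code from the book):

```python
# Minimal sketch: fit y = w*x + b by gradient descent with TensorFlow 2.x.
import tensorflow as tf

xs = tf.constant([0.0, 1.0, 2.0, 3.0])
ys = tf.constant([1.0, 3.0, 5.0, 7.0])   # generated with w=2, b=1

w = tf.Variable(0.0)
b = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for _ in range(500):
    with tf.GradientTape() as tape:           # records ops for autodiff
        loss = tf.reduce_mean((w * xs + b - ys) ** 2)
    grads = tape.gradient(loss, [w, b])       # d(loss)/d(w), d(loss)/d(b)
    opt.apply_gradients(zip(grads, [w, b]))

print(w.numpy(), b.numpy())                   # converges toward 2.0 and 1.0
```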
Learning and Memory 豆瓣
Author: Mark A. Gluck / Eduardo Mercado Worth Publishers 2013 - 1
Rigorously updated and now in a modular format, the second edition of Learning and Memory brings a modern perspective to the study of this key topic. Reflecting the growing importance of neuroscience in the field, it compares brain studies and behavioural approaches in humans and other animal species, and is in full colour throughout.
Unsupervised Learning 豆瓣
Editors: Geoffrey E. Hinton / Terrence J. Sejnowski A Bradford Book 1999 - 6
Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.
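A classic example of this kind of algorithm is Oja's rule: a Hebbian update with weight decay under which a single linear neuron's weight vector converges, with no teacher, to the first principal component of its inputs, an efficient one-dimensional summary of their statistical structure. The sketch below is illustrative, in the spirit of the volume rather than taken from it:

```python
# Minimal sketch of Oja's rule: unsupervised extraction of the first
# principal component of the input distribution by a single neuron.
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data with most variance along one direction (hypothetical).
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.8], [0.0, 0.3]])
X -= X.mean(axis=0)

w = rng.normal(size=2)
eta = 0.001
for x in X:
    y = w @ x                      # neuron output
    w += eta * y * (x - y * w)     # Hebbian term minus decay (Oja's rule)

# w self-normalises and aligns (up to sign) with the leading
# eigenvector of the input covariance matrix.
print(w / np.linalg.norm(w))
```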
Pattern Classification 豆瓣
Author: Richard O. Duda / Peter E. Hart Wiley-Interscience 2000 - 11
The first edition, published in 1973, has become a classic reference in the field. Now with the second edition, readers will find information on key new topics such as neural networks and statistical pattern recognition, the theory of machine learning, and the theory of invariances. Also included are worked examples, comparisons between different methods, extensive graphics, expanded exercises and computer project topics. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Introduction to the Theory of Neural Computation, Volume I 豆瓣
Author: John A. Hertz Westview Press 1991 - 6
This book comprehensively discusses neural network models from a statistical mechanics perspective. It starts with one of the most influential developments in the theory of neural networks: Hopfield's analysis of networks with symmetric connections using the spin-system approach and the notion of an energy function from physics. Introduction to the Theory of Neural Computation uses these powerful tools to analyze neural networks as associative memory stores and as solvers of optimization problems. A detailed analysis of multi-layer networks and recurrent networks follows. The book ends with chapters on unsupervised learning and a formal treatment of the relationship between statistical mechanics and neural networks. Little information is provided about applications and implementations, and the treatment of the material reflects the background of the authors as physicists. However, the book is essential for a solid understanding of the computational potential of neural networks. Introduction to the Theory of Neural Computation assumes that the reader is familiar with undergraduate-level mathematics but does not have any background in physics. All of the necessary tools are introduced in the book.
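The Hopfield model the book opens with is small enough to sketch directly: binary units with symmetric Hebbian weights, updated asynchronously so that the energy never increases and the state settles into a stored pattern. The toy code below is illustrative, not from the book:

```python
# Minimal sketch of a Hopfield network as an associative memory.
import numpy as np

rng = np.random.default_rng(0)
N = 64
patterns = rng.choice([-1, 1], size=(3, N))      # memories to store

# Hebbian storage: symmetric weight matrix, no self-connections.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def energy(s):
    return -0.5 * s @ W @ s      # never increases under the updates below

def recall(s, sweeps=10):
    s = s.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):             # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1    # each flip lowers energy
    return s

# Corrupt a stored pattern, then let the dynamics clean it up.
probe = patterns[0].copy()
probe[rng.choice(N, size=10, replace=False)] *= -1
print(np.array_equal(recall(probe), patterns[0]))  # usually True
```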
Learning with Kernels 豆瓣
Author: Bernhard Schölkopf / Alexander J. Smola The MIT Press 2001
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms, and to understand and apply the powerful algorithms that have been developed over the last few years.
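The modularity the blurb describes, one base algorithm, many interchangeable kernels, is easy to demonstrate. The sketch below uses kernel ridge regression (a simple relative of the SVM) as the base algorithm and swaps between two kernels; it is a toy illustration, not code from the book:

```python
# Minimal sketch: the same dual algorithm works with any kernel plugged in.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly_kernel(A, B, degree=3):
    return (A @ B.T + 1.0) ** degree

def fit_predict(kernel, X, y, X_test, lam=1e-2):
    K = kernel(X, X)                                       # Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients
    return kernel(X_test, X) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0])
X_test = np.array([[0.5]])

# Swapping the kernel changes the hypothesis space, not the algorithm.
print(fit_predict(rbf_kernel, X, y, X_test))    # ~ sin(0.5)
print(fit_predict(poly_kernel, X, y, X_test))   # ~ sin(0.5)
```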
Neural Networks for Pattern Recognition 豆瓣
Author: Christopher M. Bishop Oxford University Press 1996 - 1
This book provides the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts of pattern recognition, the book describes techniques for modelling probability density functions, and discusses the properties and relative merits of the multi-layer perceptron and radial basis function network models. It also motivates the use of various forms of error functions, and reviews the principal algorithms for error function minimization. As well as providing a detailed discussion of learning and generalization in neural networks, the book also covers the important topics of data processing, feature extraction, and prior knowledge. The book concludes with an extensive treatment of Bayesian techniques and their applications to neural networks.
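Of the two model families the book compares, the radial basis function network is the simpler to sketch: with fixed Gaussian centres, minimising a sum-of-squares error in the output weights reduces to a linear least-squares problem, one of the practical merits discussed. The toy code below is illustrative, not from the book:

```python
# Minimal sketch of an RBF network fit by linear least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(100, 1))
t = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=100)

centres = np.linspace(0, 1, 9).reshape(-1, 1)   # fixed basis centres
width = 0.1

def design(X):
    d2 = (X[:, None, 0] - centres[None, :, 0]) ** 2
    Phi = np.exp(-d2 / (2 * width**2))           # Gaussian basis functions
    return np.hstack([Phi, np.ones((len(X), 1))])  # plus a bias column

# Sum-of-squares error is quadratic in the weights, so the minimiser
# is found in one linear solve rather than by iterative descent.
w, *_ = np.linalg.lstsq(design(X), t, rcond=None)

print(design(np.array([[0.25]])) @ w)   # close to sin(pi/2) = 1.0
```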
Supervised Sequence Labelling with Recurrent Neural Networks 豆瓣
Author: Alex Graves Springer 2012 - 2
Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools, robust to input noise and distortion and able to exploit long-range contextual information, that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary.
The goal of this book is to present a complete framework for classifying and transcribing sequential data with recurrent neural networks only. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification (CTC) output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional recurrent neural networks extend the framework in a natural way to data with more than one spatio-temporal dimension, such as images and videos. Thirdly, the use of hierarchical subsampling makes it feasible to apply the framework to very large or high-resolution sequences, such as raw audio or video.
Experimental validation is provided by state-of-the-art results in speech and handwriting recognition.
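The recurrent core of such a system is a network that emits a label distribution at every timestep, with the hidden state carrying the contextual information the blurb mentions. The sketch below shows only that forward pass (the CTC layer, which aligns these per-frame distributions to unsegmented targets, is omitted); all dimensions and data are hypothetical:

```python
# Minimal sketch: a vanilla RNN producing per-frame label distributions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_labels = 13, 32, 5   # e.g. 13 acoustic features per frame

W_xh = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
b_h = np.zeros(n_hidden)
W_hy = rng.normal(scale=0.1, size=(n_hidden, n_labels))
b_y = np.zeros(n_labels)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def label_sequence(xs):
    """Per-frame label distributions for one input sequence."""
    h = np.zeros(n_hidden)
    out = []
    for x in xs:                                  # one frame at a time
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)    # recurrent state update
        out.append(softmax(h @ W_hy + b_y))
    return np.array(out)

frames = rng.normal(size=(20, n_in))              # a 20-frame toy utterance
probs = label_sequence(frames)                    # shape (20, n_labels)
print(probs.argmax(axis=1))                       # best label per frame
```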
Learning Deep Architectures for AI 豆瓣
Author: Yoshua Bengio
Theoretical results suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g., in vision, language, and other AI-level tasks), one may need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers or in complicated propositional formulae re-using many sub-formulae. Searching the parameter space of deep architectures is a difficult task, but learning algorithms such as those for Deep Belief Networks have recently been proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas. This monograph discusses the motivations and principles regarding learning algorithms for deep architectures, in particular those exploiting as building blocks unsupervised learning of single-layer models such as Restricted Boltzmann Machines, used to construct deeper models such as Deep Belief Networks.
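The single-layer building block the monograph centres on, the Restricted Boltzmann Machine, is commonly trained with contrastive divergence; stacking such layers greedily is how Deep Belief Networks are constructed. Below is a minimal CD-1 sketch on hypothetical binary toy data, illustrative rather than taken from the monograph:

```python
# Minimal sketch: training a Restricted Boltzmann Machine with CD-1.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4
W = 0.1 * rng.normal(size=(n_visible, n_hidden))
a = np.zeros(n_visible)     # visible biases
b = np.zeros(n_hidden)      # hidden biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample(p):
    return (rng.uniform(size=p.shape) < p).astype(float)

# Toy binary data: two repeated prototype patterns (hypothetical).
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

lr = 0.1
for _ in range(1000):
    v0 = data[rng.integers(len(data))]
    ph0 = sigmoid(v0 @ W + b)          # positive phase: hidden given data
    h0 = sample(ph0)
    pv1 = sigmoid(h0 @ W.T + a)        # negative phase: one Gibbs step
    v1 = sample(pv1)
    ph1 = sigmoid(v1 @ W + b)
    # CD-1 update: data statistics minus reconstruction statistics.
    W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
    a += lr * (v0 - v1)
    b += lr * (ph0 - ph1)
```

After training, the hidden probabilities sigmoid(v @ W + b) serve as the learned representation of v, and a second RBM can be trained on those representations, which is the greedy layer-wise stacking the monograph discusses.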