Artificial Neural Networks
Parallel Distributed Processing - 2 Vol. Set
Authors: David E. Rumelhart / James L. McClelland, The MIT Press, 1987-07
What makes people smarter than computers? These volumes by a pioneering neurocomputing group suggest that the answer lies in the massively parallel architecture of the human mind. They describe a new theory of cognition called connectionism that is challenging the idea of symbolic computation that has traditionally been at the center of debate in theoretical discussions about the mind. The authors' theory assumes the mind is composed of a great number of elementary units connected in a neural network. Mental processes are interactions between these units, which excite and inhibit each other in parallel rather than through sequential operations. In this context, knowledge can no longer be thought of as stored in localized structures; instead, it consists of the connections between pairs of units that are distributed throughout the network. Volume 1 lays the foundations of this exciting theory of parallel distributed processing, while Volume 2 applies it to a number of specific issues in cognitive science and neuroscience, with chapters describing models of aspects of perception, memory, language, and thought. David E. Rumelhart is Professor of Psychology at the University of California, San Diego. James L. McClelland is Professor of Psychology at Carnegie-Mellon University. A Bradford Book.
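The unit-and-connection picture described above can be made concrete with a small sketch. Everything here (the two-unit network, the weights, the sigmoid update rule, the learning-free setup) is an illustrative assumption, not a model taken from the book:

```python
import math

def step(activations, weights, inputs, rate=0.1):
    """One parallel update of all units: each unit sums weighted
    excitation/inhibition from every other unit plus its external input,
    then moves its activation toward a sigmoid of that net input."""
    nets = []
    for i in range(len(activations)):
        net = inputs[i] + sum(weights[i][j] * activations[j]
                              for j in range(len(activations)) if j != i)
        nets.append(net)
    # All nets are computed before any activation changes, so the
    # update is parallel in spirit, not sequential.
    return [(1 - rate) * a + rate * (1 / (1 + math.exp(-n)))
            for a, n in zip(activations, nets)]

# Two mutually inhibiting units receiving different external input.
# The network's "knowledge" lives entirely in these connection weights.
w = [[0.0, -2.0], [-2.0, 0.0]]
acts = [0.5, 0.5]
for _ in range(200):
    acts = step(acts, w, [1.0, 0.2])
```

After the activations settle, the unit with the stronger external input ends up with the higher activation: the competition is decided by the excitatory/inhibitory connections rather than by any stored symbolic rule.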
The Computational Brain
Authors: Patricia Churchland / Terrence J. Sejnowski, The MIT Press, 1992-06
How do groups of neurons interact to enable the organism to see, decide, and move appropriately? What are the principles whereby networks of neurons represent and compute? These are the central questions probed by The Computational Brain. Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field. The Computational Brain is the first unified and broadly accessible book to bring together computational concepts and behavioral data within a neurobiological framework. Computer models constrained by neurobiological data can help reveal how networks of neurons subserve perception and behavior, how their physical interactions can yield global results in perception and behavior, and how their physical properties are used to code information and compute solutions. The Computational Brain focuses mainly on three domains: visual perception, learning and memory, and sensorimotor integration. Examples of recent computer models in these domains are discussed in detail, highlighting strengths and weaknesses, and extracting principles applicable to other domains. Churchland and Sejnowski show how both abstract models and neurobiologically realistic models can have useful roles in computational neuroscience, and they predict the coevolution of models and experiments at many levels of organization, from the neuron to the system. The Computational Brain addresses a broad audience: neuroscientists, computer scientists, cognitive scientists, and philosophers. It is written for both the expert and the novice. A basic overview of neuroscience and computational theory is provided, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research. Technical terms are clearly explained in the text, and definitions are provided in an extensive glossary.
The appendix contains a précis of neurobiological techniques. Patricia S. Churchland is Professor of Philosophy at the University of California, San Diego, Adjunct Professor at the Salk Institute, and a MacArthur Fellow. Terrence J. Sejnowski is Professor of Biology at the University of California, San Diego, Professor at the Salk Institute, where he is Director of the Computational Neurobiology Laboratory, and an Investigator of the Howard Hughes Medical Institute.
Parallel Distributed Processing, Vol. 1
Authors: David E. Rumelhart / James L. McClelland, A Bradford Book, 1987-07
See the description of the two-volume set above.
Parallel Distributed Processing, Vol. 2
Authors: James L. McClelland / David E. Rumelhart, The MIT Press, 1987-07
See the description of the two-volume set above.
Theoretical Neuroscience
Authors: Peter Dayan / Laurence F. Abbott, The MIT Press, 2005-09
Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory. The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.
Spiking Neuron Models
Authors: Wulfram Gerstner / Werner M. Kistler, Cambridge University Press, 2002-08
Neurons in the brain communicate by short electrical pulses, the so-called action potentials or spikes. How can we understand the process of spike generation? How can we understand information transmission by neurons? What happens if thousands of neurons are coupled together in a seemingly random network? How does the network connectivity determine the activity patterns? And, vice versa, how does the spike activity influence the connectivity pattern? These questions are addressed in this 2002 introduction to spiking neurons aimed at those taking courses in computational neuroscience, theoretical biology, biophysics, or neural networks. The approach will suit students of physics, mathematics, or computer science; it will also be useful for biologists who are interested in mathematical modelling. The text is enhanced by many worked examples and illustrations. There are no mathematical prerequisites beyond what the audience would meet as undergraduates: more advanced techniques are introduced in an elementary, concrete fashion when needed.
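The spike-generation question posed above can be illustrated with the simplest model of this kind, a leaky integrate-and-fire neuron. The parameter values and names below are illustrative choices, not taken from the text:

```python
def simulate_lif(I, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0,
                 v_reset=0.0, t_max=100.0):
    """Leaky integrate-and-fire neuron: the membrane potential v leaks
    toward v_rest, is driven by a constant input current I, and emits a
    spike (then resets) whenever it crosses the threshold v_th.
    Returns the list of spike times."""
    v, t, spikes = v_rest, 0.0, []
    while t < t_max:
        # Euler step of tau * dv/dt = -(v - v_rest) + I
        v += dt / tau * (-(v - v_rest) + I)
        if v >= v_th:
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

With `I = 1.5` the potential repeatedly crosses threshold and the neuron fires regularly; with `I = 0.5` the potential saturates below threshold and no spikes occur. This threshold behavior, rather than a graded output, is what distinguishes spiking models from the rate-based units of earlier connectionist networks.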
Automatic Speech Recognition
Authors: Dong Yu / Li Deng, Springer, 2014-11
This book provides a comprehensive overview of recent advances in the field of automatic speech recognition, with a focus on deep learning models including deep neural networks and many of their variants. It is the first automatic speech recognition book dedicated to the deep learning approach. In addition to a rigorous mathematical treatment of the subject, the book presents the insights and theoretical foundations behind a series of highly successful deep learning models.
Deep Learning: Methods and Applications (Foundations and Trends® in Signal Processing)
Authors: Li Deng / Dong Yu, Now Publishers Inc, 2014-06
This book aims to provide an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks. The application areas are chosen according to three criteria: 1) the expertise or knowledge of the authors; 2) application areas that have already been transformed by the successful use of deep learning technology, such as speech recognition and computer vision; and 3) application areas that have the potential to be impacted significantly by deep learning and that have gained concentrated research efforts, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning.
In Chapter 1, we provide the background of deep learning, as intrinsically connected to the use of multiple layers of nonlinear transformations to derive features from sensory signals such as speech and visual images. In the most recent literature, deep learning is also cast as representation learning, which involves a hierarchy of features or concepts in which higher-level representations are defined from lower-level ones, and the same lower-level representations help to define many higher-level ones. In Chapter 2, a brief historical account of deep learning is presented. In particular, the chronological development of speech recognition is used to illustrate the recent impact of deep learning, which became a dominant technology in the speech recognition industry within only a few years of the start of a collaboration between academic and industrial researchers in applying deep learning to speech recognition. In Chapter 3, a three-way classification scheme for the large body of work in deep learning is developed. We classify a growing number of deep learning techniques into unsupervised, supervised, and hybrid categories, and present qualitative descriptions and a literature survey for each category. From Chapter 4 to Chapter 6, we discuss in detail three popular deep networks and related learning methods, one in each category. Chapter 4 is devoted to deep autoencoders as a prominent example of the unsupervised deep learning techniques. Chapter 5 gives a major example in the hybrid deep network category: the discriminative feed-forward neural network for supervised learning, with many layers initialized using layer-by-layer generative, unsupervised pre-training. In Chapter 6, deep stacking networks and several of their variants are discussed in detail, exemplifying the discriminative or supervised deep learning techniques in the three-way categorization scheme.
In Chapters 7-11, we select a set of typical and successful applications of deep learning in diverse areas of signal and information processing and of applied artificial intelligence. In Chapter 7, we review the applications of deep learning to speech and audio processing, with emphasis on speech recognition organized according to several prominent themes. In Chapter 8, we present recent results of applying deep learning to language modeling and natural language processing. Chapter 9 is devoted to selected applications of deep learning to information retrieval including Web search. In Chapter 10, we cover selected applications of deep learning to image object recognition in computer vision. Selected applications of deep learning to multi-modal processing and multi-task learning are reviewed in Chapter 11. Finally, an epilogue is given in Chapter 12 to summarize what we presented in earlier chapters and to discuss future challenges and directions.
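The characterization of deep learning above as "multiple layers of nonlinear transformations" can be sketched in a few lines. The layer sizes and random weights below are arbitrary, and no training is performed; this only shows how each level's representation is a nonlinear function of the one below it:

```python
import math
import random

random.seed(0)

def layer(x, w, b):
    """One nonlinear transformation: an affine map followed by tanh."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def init(n_in, n_out):
    """Random weights and zero biases for an n_in -> n_out layer."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

# A "deep" network is a composition of such layers: each level's
# representation is derived from the representation below it.
sizes = [8, 6, 4, 2]          # input -> hidden -> hidden -> features
params = [init(a, b) for a, b in zip(sizes, sizes[1:])]

rep = [random.uniform(-1, 1) for _ in range(sizes[0])]  # raw "signal"
for w, b in params:
    rep = layer(rep, w, b)    # higher-level representation of rep below

print(len(rep))  # final 2-dimensional feature vector
```

In a real system the weights would be learned (for instance by the layer-by-layer unsupervised pre-training discussed in Chapter 5's category), but the hierarchy-of-representations structure is exactly this composition of nonlinear layers.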
A Universe of Consciousness
Authors: Gerald Edelman / Giulio Tononi, Basic Books, 2001-02
A Nobel Prize-winning scientist and a leading brain researcher show how the brain creates conscious experience. In A Universe of Consciousness, Gerald Edelman builds on the radical ideas he introduced in his monumental trilogy (Neural Darwinism, Topobiology, and The Remembered Present) to present, for the first time, an empirically supported full-scale theory of consciousness. He and the neurobiologist Giulio Tononi show how they use ingenious technology to detect the most minute brain currents and to identify the specific brain waves that correlate with particular conscious experiences. The results of this pioneering work challenge the conventional wisdom about consciousness.
Neural Networks for Control
Authors: Paul John Werbos / W. Thomas Miller / Richard S. Sutton, A Bradford Book, 1995-03
Neural Networks for Control brings together examples of all the most important paradigms for the application of neural networks to robotics and control. Primarily concerned with engineering problems and approaches to their solution through neurocomputing systems, the book is divided into three sections: general principles, motion control, and application domains (with evaluations of the possible applications by experts in those areas). Special emphasis is placed on designs based on optimization or reinforcement, which will become increasingly important as researchers address more complex engineering challenges or real biological-control problems. A Bradford Book; Neural Network Modeling and Connectionism series.
Deep Learning with Python
Author: François Chollet, Manning Publications, 2017-10
Deep Learning with Python introduces the field of deep learning using the Python language and the powerful Keras library. Written by Keras creator and Google AI researcher François Chollet, this book builds your understanding through intuitive explanations and practical examples. You'll explore challenging concepts and practice with applications in computer vision, natural-language processing, and generative models. By the time you finish, you'll have the knowledge and hands-on skills to apply deep learning in your own projects.