Natural Language Processing
Words, Thoughts, and Theories 豆瓣
Authors: Alison Gopnik / Andrew N. Meltzoff, The MIT Press, 1998-7
Words, Thoughts, and Theories articulates and defends the "theory theory" of cognitive and semantic development: the idea that infants and young children, like scientists, learn about the world by forming and revising theories, a view of the origins of knowledge and meaning with broad implications for cognitive science. Gopnik and Meltzoff interweave philosophical arguments and empirical data from their own and others' research. Both the philosophy and the psychology, the arguments and the data, address the same fundamental epistemological question: how do we come to understand the world around us?
Recently, the theory theory has generated much interesting research, but this is the first book to examine the theory in extensive detail and to contrast it systematically with other theories. It is also the first to apply the theory to infancy and early childhood, to use it as a framework for understanding semantic development, and to demonstrate that language acquisition influences theory change in children.
The authors show that children just beginning to talk are engaged in profound restructurings of several domains of knowledge. These restructurings resemble theory changes in science, and they influence children's early semantic development, since children's cognitive concerns shape and motivate their use of very early words. In addition, children pay attention to the language they hear around them, and this in turn reshapes their cognition and causes them to reorganize their theories.
Deep Learning: Methods and Applications (Foundations and Trends® in Signal Processing) 豆瓣
Authors: Li Deng / Dong Yu, Now Publishers Inc, 2014-6
This book aims to provide an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks. The application areas are chosen according to three criteria: 1) the expertise or knowledge of the authors; 2) areas that have already been transformed by the successful use of deep learning technology, such as speech recognition and computer vision; and 3) areas that have the potential to be impacted significantly by deep learning and that have attracted concentrated research effort, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning.
In Chapter 1, we provide the background of deep learning, which is intrinsically connected to the use of multiple layers of nonlinear transformations to derive features from sensory signals such as speech and visual images. In the most recent literature, deep learning is also framed as representation learning, which involves a hierarchy of features or concepts in which higher-level representations are defined from lower-level ones and the same lower-level representations help to define many higher-level ones. In Chapter 2, a brief historical account of deep learning is presented. In particular, the chronological development of speech recognition is used to illustrate the recent impact of deep learning, which became a dominant technology in the speech recognition industry within only a few years of the start of a collaboration between academic and industrial researchers on applying deep learning to speech recognition. In Chapter 3, a three-way classification scheme for a large body of work in deep learning is developed: we classify a growing number of deep learning techniques into unsupervised, supervised, and hybrid categories, and present qualitative descriptions and a literature survey for each category. From Chapter 4 to Chapter 6, we discuss in detail three popular deep networks and related learning methods, one in each category. Chapter 4 is devoted to deep autoencoders as a prominent example of the unsupervised deep learning techniques. Chapter 5 gives a major example in the hybrid deep network category: the discriminative feed-forward neural network for supervised learning with many layers, initialized using layer-by-layer generative, unsupervised pre-training. In Chapter 6, deep stacking networks and several of their variants are discussed in detail; these exemplify the discriminative, supervised deep learning techniques in the three-way categorization scheme.
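To make the unsupervised technique named in Chapter 4 concrete, here is a minimal sketch (not from the book) of a deep-learning building block at its smallest: a single-hidden-layer autoencoder trained to reconstruct its input, written in plain NumPy with batch gradient descent. The toy data, layer sizes, and learning rate are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points lying near a 1-D line inside 4-D space, so a
# 2-unit bottleneck can capture most of the structure.
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t, -t, 0.5 * t]) + 0.01 * rng.normal(size=(200, 4))

n_in, n_hidden = X.shape[1], 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
b2 = np.zeros(n_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # hidden code: a learned nonlinear feature
    Xhat = H @ W2 + b2         # linear reconstruction of the input
    return H, Xhat

lr = 0.1
for step in range(3000):
    H, Xhat = forward(X)
    err = Xhat - X                         # gradient of 0.5*MSE w.r.t. Xhat
    # Backpropagate through the linear decoder and the tanh encoder.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)       # tanh' = 1 - tanh^2
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, Xhat = forward(X)
mse = float(np.mean((Xhat - X) ** 2))
print(f"reconstruction MSE: {mse:.4f}")
```

Because reconstruction error is the training signal, no labels are needed — which is what places autoencoders in the book's unsupervised category; stacking several such layers, each trained on the previous layer's code, gives the layer-by-layer pre-training described for Chapter 5.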
In Chapters 7-11, we select a set of typical and successful applications of deep learning in diverse areas of signal and information processing and of applied artificial intelligence. In Chapter 7, we review applications of deep learning to speech and audio processing, with an emphasis on speech recognition organized around several prominent themes. In Chapter 8, we present recent results of applying deep learning to language modeling and natural language processing. Chapter 9 is devoted to selected applications of deep learning to information retrieval, including Web search. In Chapter 10, we cover selected applications of deep learning to image object recognition in computer vision. Selected applications of deep learning to multi-modal processing and multi-task learning are reviewed in Chapter 11. Finally, Chapter 12 gives an epilogue summarizing the earlier chapters and discussing future challenges and directions.
Learning to Classify Text Using Support Vector Machines 豆瓣
Author: Thorsten Joachims, Springer, 2002-4
Text classification, the task of automatically assigning semantic categories to natural language text, has become one of the key methods for organizing online information. Since hand-coding classification rules is costly or even impractical, most modern approaches employ machine learning techniques to learn text classifiers automatically from examples. However, none of these conventional approaches combines good prediction performance, theoretical understanding, and efficient training algorithms.
Based on ideas from Support Vector Machines (SVMs), Learning to Classify Text Using Support Vector Machines presents a new approach to generating text classifiers from examples. The approach combines high performance and efficiency with theoretical understanding and improved robustness. In particular, it is highly effective without greedy heuristic components. The SVM approach is computationally efficient in training and classification, and it comes with a learning theory that can guide real-world applications.
The book gives a complete and detailed description of the SVM approach to learning text classifiers, including training algorithms, transductive text classification, efficient performance estimation, and a statistical learning model of text classification. In addition, it includes an overview of the field of text classification, making it self-contained even for newcomers. It gives a concise introduction to SVMs for pattern recognition and describes in detail how to formulate text-classification tasks for machine learning. Learning to Classify Text Using Support Vector Machines is designed as a reference for researchers and practitioners, and is suitable as a secondary text for graduate students in Computer Science working in Machine Learning and Language Technology.
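As a flavor of the formulation the book describes — documents as bag-of-words vectors, a linear decision function learned from labeled examples — here is a minimal sketch. It is not Joachims's SVM-light implementation: it trains a linear classifier by subgradient descent on the L2-regularized hinge loss (the SVM objective), and the corpus, labels, and hyperparameters are toy values invented for illustration.

```python
from collections import Counter

# Toy labeled corpus: +1 = sports, -1 = finance.
train = [
    ("the team won the match", +1),
    ("great goal in the final game", +1),
    ("the coach praised the players", +1),
    ("stocks fell as markets closed", -1),
    ("the bank raised interest rates", -1),
    ("investors sold shares today", -1),
]

def bow(text):
    """Bag-of-words feature vector as a sparse dict of term counts."""
    return Counter(text.split())

def dot(w, x):
    return sum(w.get(f, 0.0) * v for f, v in x.items())

# Subgradient descent on the SVM objective:
#   min_w  (lam/2) * ||w||^2  +  (1/n) * sum_i max(0, 1 - y_i * w.x_i)
w, lam, lr = {}, 0.01, 0.1
for epoch in range(50):
    for text, y in train:
        x = bow(text)
        for f in list(w):          # L2 regularization shrinks all weights
            w[f] *= (1 - lr * lam)
        if y * dot(w, x) < 1:      # margin violated: hinge-loss subgradient
            for f, v in x.items():
                w[f] = w.get(f, 0.0) + lr * y * v

def predict(text):
    return +1 if dot(w, bow(text)) >= 0 else -1

print(predict("the players won the game"))
print(predict("markets and interest rates"))
```

The update rule only moves the weights when an example sits inside the margin, which is the hinge-loss counterpart of the support-vector idea: examples classified with margin at least 1 contribute nothing to the gradient.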
人工智能简史 (A Brief History of Artificial Intelligence) 豆瓣
Author: 尼克 (Nick), 人民邮电出版社 (Posts & Telecom Press), 2017
This book recounts the full history of artificial intelligence, covering nearly every area of the field, including its origins, automated theorem proving, expert systems, neural networks, natural language processing, genetic algorithms, deep learning, reinforcement learning, superintelligence, philosophical questions, and future trends. With a broad perspective and lively language, it offers a comprehensive retrospective and in-depth commentary on artificial intelligence.
Many of the figures in the book are the author's teachers, friends, or personal acquaintances, so alongside careful scholarship it offers entertaining anecdotes. It suits both professionals curious about the little-known history of artificial intelligence and general readers looking for an introductory guide to the field.