Machine Learning
The Transfer of Cognitive Skill 豆瓣
Author: Mark Singley / John R. Anderson Harvard University Press 1989 - 1
Does a knowledge of Latin facilitate the learning of computer programming? Does skill in geometry make it easier to learn music? The issue of the transfer of learning from one domain to another is a classic problem in psychology as well as an educational question of great importance, which this ingenious new book sets out to solve through a theory of transfer based on a comprehensive theory of skill acquisition. The question was first studied systematically at the turn of the century by the noted psychologist Edward L. Thorndike, who proposed a theory of transfer based on common elements in two different tasks. Since then, psychologists of different theoretical orientations--verbal learning, gestalt, and information processing--have addressed the transfer question with differing and inconclusive results. Singley and Anderson resurrect Thorndike's theory of identical elements, but in a broader context and from the perspective of cognitive psychology. Making use of a powerful knowledge-representation language, they recast his elements into units of procedural and declarative knowledge in the ACT* theory of skill acquisition. One skill will transfer to another, they argue, to the extent that it involves the same productions or the same declarative precursors. They show that with production rules, transfer can be localized to specific components--in keeping with Thorndike's theory--and yet still be abstract and mentalistic. The findings of this book have important implications for psychology and the improvement of teaching. They will interest cognitive scientists and educational psychologists, as well as computer scientists interested in artificial intelligence and cognitive modeling.
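To make the identical-elements idea concrete, here is a toy sketch (not from the book; the production names are invented) that estimates transfer between two skills as the fraction of one skill's productions already present in the other:

```python
# Toy illustration of Singley and Anderson's identical-elements idea:
# transfer between two skills is predicted by the production rules
# (condition-action units) they share. Production names are made up.
editor_a_productions = {"move-cursor", "delete-word", "insert-char", "save-file"}
editor_b_productions = {"move-cursor", "delete-word", "insert-char", "search-replace"}

shared = editor_a_productions & editor_b_productions
predicted_transfer = len(shared) / len(editor_b_productions)  # fraction of skill B already covered
print(f"shared productions: {sorted(shared)}")
print(f"predicted transfer from editor A to editor B: {predicted_transfer:.0%}")
```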
Understanding Machine Learning 豆瓣
Author: Shai Shalev-Shwartz / Shai Ben-David Cambridge University Press 2014
Machine learning is one of the fastest growing areas of computer science, with far-reaching applications. The aim of this textbook is to introduce machine learning, and the algorithmic paradigms it offers, in a principled way. The book provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms. Following a presentation of the basics of the field, the book covers a wide array of central topics that have not been addressed by previous textbooks. These include a discussion of the computational complexity of learning and the concepts of convexity and stability; important algorithmic paradigms including stochastic gradient descent, neural networks, and structured output learning; and emerging theoretical concepts such as the PAC-Bayes approach and compression-based bounds. Designed for an advanced undergraduate or beginning graduate course, the text makes the fundamentals and algorithms of machine learning accessible to students and non-expert readers in statistics, computer science, mathematics, and engineering.
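Among the algorithmic paradigms the blurb names, stochastic gradient descent is easy to illustrate. A minimal sketch (not taken from the book) of plain SGD for least-squares regression:

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=50, seed=0):
    """Plain stochastic gradient descent for least-squares regression:
    minimize mean((X @ w - y)**2) / 2 by updating w one example at a time."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (X[i] @ w - y[i]) * X[i]  # gradient of the single-example loss
            w -= lr * grad
    return w

# Toy usage: recover a known weight vector from noisy observations.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=200)
print(sgd_least_squares(X, y))  # should be close to [2.0, -1.0, 0.5]
```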
Machine Learning in Non-Stationary Environments 豆瓣
Author: Masashi Sugiyama / Motoaki Kawanabe The MIT Press 2012 - 4
As the power of computing has grown over the past few decades, the field of machine learning has advanced rapidly in both theory and practice. Machine learning methods are usually based on the assumption that the data generation mechanism does not change over time. Yet real-world applications of machine learning, including image recognition, natural language processing, speech recognition, robot control, and bioinformatics, often violate this common assumption. Dealing with non-stationarity is one of modern machine learning's greatest challenges. This book focuses on a specific non-stationary environment known as covariate shift, in which the distributions of inputs (queries) change but the conditional distribution of outputs (answers) is unchanged, and presents machine learning theory, algorithms, and applications to overcome this variety of non-stationarity. After reviewing the state-of-the-art research in the field, the authors discuss topics that include learning under covariate shift, model selection, importance estimation, and active learning. They describe such real-world applications of covariate shift adaptation as brain-computer interfaces, speaker identification, and age prediction from facial images. With this book, they aim to encourage future research in machine learning, statistics, and engineering that strives to create truly autonomous learning machines able to learn under non-stationarity.
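A standard remedy in the covariate-shift setting described above is importance weighting: training examples are reweighted by the ratio of test-input density to training-input density before fitting. The sketch below assumes the two Gaussian input densities are known, which sidesteps the importance-estimation methods the book actually develops:

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Training inputs from one Gaussian, test inputs from a shifted one;
# the conditional relationship y = sin(x) + noise stays the same (covariate shift).
rng = np.random.default_rng(0)
x_train = rng.normal(loc=0.0, scale=1.0, size=200)
y_train = np.sin(x_train) + 0.1 * rng.normal(size=200)

# Importance weights w(x) = p_test(x) / p_train(x); in practice this ratio
# must be estimated from samples, which is a core topic of the book.
w = gauss_pdf(x_train, 1.0, 0.7) / gauss_pdf(x_train, 0.0, 1.0)

# Importance-weighted least squares with cubic polynomial features.
Phi = np.vander(x_train, N=4, increasing=True)  # columns: 1, x, x^2, x^3
W = np.diag(w)
theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y_train)
print("importance-weighted fit coefficients:", np.round(theta, 3))
```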
Tensorflow:实战Google深度学习框架 豆瓣
Author: 郑泽宇 / 顾思宇 电子工业出版社 2017 - 2
TensorFlow is a mainstream deep learning framework open-sourced by Google in 2015 and is now widely used at technology companies such as Google, Uber, JD.com, and Xiaomi. This book is an introductory reference for the TensorFlow deep learning framework, intended to help readers get started with TensorFlow and deep learning as quickly and effectively as possible. It omits the tedious mathematical derivations behind deep learning models and instead starts from practical application problems, using concrete TensorFlow example programs to show how deep learning can solve them. Combining introductory deep learning material with extensive hands-on experience, it is a first-choice reference for entering this new and rapidly growing area of artificial intelligence.
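For orientation, a minimal TensorFlow example of the kind the book builds on; note the book targets the TensorFlow 1.x graph-and-session API of 2017, while this sketch uses the current tf.keras API and synthetic data:

```python
import numpy as np
import tensorflow as tf

# Tiny fully connected network fit to a synthetic regression problem.
x = np.random.normal(size=(256, 4)).astype("float32")
y = (x.sum(axis=1, keepdims=True) + 0.1 * np.random.normal(size=(256, 1))).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print("final training loss:", model.evaluate(x, y, verbose=0))
```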
Estimation of Dependences Based on Empirical Data 豆瓣
Author: Vladimir Vapnik Translator: Kotz, S. Springer 2006 - 3
In 1982, Springer published the English translation of the Russian book Estimation of Dependences Based on Empirical Data, which became the foundation of the statistical theory of learning and generalization (the VC theory). A number of new principles and new technologies of learning, including SVM technology, have been developed based on this theory. The second edition of this book contains two parts:
- A reprint of the first edition, which provides the classical foundation of statistical learning theory
- Four new chapters describing the latest ideas in the development of statistical inference methods; these form the second part of the book, entitled Empirical Inference Science
The second part discusses, along with new models of inference, the general philosophical principles of making inferences from observations. It includes new paradigms of inference that use non-inductive methods appropriate for a complex world, in contrast to the inductive methods of inference developed in the classical philosophy of science for a simple world. The two parts of the book cover a wide spectrum of ideas related to the essence of intelligence: from the rigorous statistical foundation of learning models to broad philosophical imperatives for generalization. The book is intended for researchers who deal with a variety of problems in empirical inference: statisticians, mathematicians, physicists, computer scientists, and philosophers.
Information Science 豆瓣
Author: David G. Luenberger Princeton University Press 2006 - 3
From cell phones to Web portals, advances in information and communications technology have thrust society into an information age that is far-reaching, fast-moving, increasingly complex, and yet essential to modern life. Now, renowned scholar and author David Luenberger has produced Information Science, a text that distills and explains the most important concepts and insights at the core of this ongoing revolution. The book represents the material used in a widely acclaimed course offered at Stanford University. Drawing concepts from each of the constituent subfields that collectively comprise information science, Luenberger builds his book around the five "E's" of information: Entropy, Economics, Encryption, Extraction, and Emission. Each area directly impacts modern information products, services, and technology--everything from word processors to digital cash, database systems to decision making, marketing strategy to spread spectrum communication. To study these principles is to learn how English text, music, and pictures can be compressed, how it is possible to construct a digital signature that cannot simply be copied, how beautiful photographs can be sent from distant planets with a tiny battery, how communication networks expand, and how producers of information products can make a profit under difficult market conditions. The book contains vivid examples, illustrations, exercises, and points of historic interest, all of which bring to life the analytic methods presented. It presents a unified approach to the field of information science. It emphasizes basic principles, and includes a wide range of examples and applications. It helps students develop important new skills, and suggests exercises with solutions in an instructor's manual.
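As a small, hedged illustration of the entropy principle behind the compression results mentioned above (not an example from the book): the Shannon entropy of a text's character distribution lower-bounds the average number of bits per character that any lossless code can achieve.

```python
import math
from collections import Counter

def shannon_entropy(text):
    """Entropy in bits per character of the empirical character distribution."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(f"'abracadabra': {shannon_entropy('abracadabra'):.3f} bits per character")
```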
Machine Learning 豆瓣 Goodreads
9.0 (6 ratings) Author: Kevin P. Murphy The MIT Press 2012 - 9
Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach. The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package--PMTK (probabilistic modeling toolkit)--that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
Prediction, Learning, and Games 豆瓣
Author: Nicolo Cesa-Bianchi / Gabor Lugosi Cambridge University Press 2006 - 3
This important new text and reference for researchers and students in machine learning, game theory, statistics and information theory offers the first comprehensive treatment of the problem of predicting individual sequences. Unlike standard statistical approaches to forecasting, prediction of individual sequences does not impose any probabilistic assumption on the data-generating mechanism. Yet, prediction algorithms can be constructed that work well for all possible sequences, in the sense that their performance is always nearly as good as the best forecasting strategy in a given reference class. The central theme is the model of prediction using expert advice, a general framework within which many related problems can be cast and discussed. Repeated game playing, adaptive data compression, sequential investment in the stock market, sequential pattern analysis, and several other problems are viewed as instances of the experts' framework and analyzed from a common nonstochastic standpoint that often reveals new and intriguing connections. Old and new forecasting methods are described in a mathematically precise way in order to characterize their theoretical limitations and possibilities.
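The central experts framework has a canonical algorithm, the exponentially weighted average forecaster, whose cumulative loss stays close to that of the best expert in hindsight. A minimal sketch with square loss and invented toy data (parameter names are illustrative, not the book's notation):

```python
import numpy as np

def exp_weighted_forecaster(expert_preds, outcomes, eta=0.5):
    """Exponentially weighted average forecaster with square loss.
    expert_preds: (T, N) array, prediction of each of N experts per round.
    outcomes:     (T,) array, the outcome revealed after each round."""
    T, N = expert_preds.shape
    log_w = np.zeros(N)                  # log-weights, uniform at the start
    preds = np.empty(T)
    for t in range(T):
        w = np.exp(log_w - log_w.max())  # normalize in log space for stability
        w /= w.sum()
        preds[t] = w @ expert_preds[t]   # weighted average of the expert advice
        losses = (expert_preds[t] - outcomes[t]) ** 2
        log_w -= eta * losses            # exponential weight update
    forecaster_loss = np.sum((preds - outcomes) ** 2)
    best_expert_loss = np.min(np.sum((expert_preds - outcomes[:, None]) ** 2, axis=0))
    return preds, forecaster_loss, best_expert_loss

# Toy usage: three constant experts predicting a noisy sequence around 0.7.
rng = np.random.default_rng(0)
outcomes = 0.7 + 0.05 * rng.normal(size=100)
experts = np.stack([np.full(100, 0.2), np.full(100, 0.7), np.full(100, 0.9)], axis=1)
_, f_loss, b_loss = exp_weighted_forecaster(experts, outcomes)
print(f"forecaster loss {f_loss:.3f} vs best expert {b_loss:.3f}")
```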
Machine Learning in Asset Pricing 豆瓣
Author: Stefan Nagel Princeton University Press 2021 - 5
A groundbreaking, authoritative introduction to how machine learning can be applied to asset pricing.
Investors in financial markets are faced with an abundance of potentially value-relevant information from a wide variety of different sources. In such data-rich, high-dimensional environments, techniques from the rapidly advancing field of machine learning (ML) are well-suited for solving prediction problems. Accordingly, ML methods are quickly becoming part of the toolkit in asset pricing research and quantitative investing. In this book, Stefan Nagel examines the promises and challenges of ML applications in asset pricing.
Asset pricing problems are substantially different from the settings for which ML tools were developed originally. To realize the potential of ML methods, they must be adapted for the specific conditions in asset pricing applications. Economic considerations, such as portfolio optimization, absence of near arbitrage, and investor learning, can guide the selection and modification of ML tools. Beginning with a brief survey of basic supervised ML methods, Nagel then discusses the application of these techniques in empirical research in asset pricing and shows how they promise to advance the theoretical modeling of financial markets.
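To give a feel for the kind of data-rich prediction problem involved, here is a hedged sketch of ridge (shrinkage) regression of next-period returns on many firm characteristics; the data are synthetic and this is not Nagel's own estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stocks, n_chars = 500, 50
X = rng.normal(size=(n_stocks, n_chars))          # standardized firm characteristics
beta = np.zeros(n_chars)
beta[:5] = 0.02                                   # only a few characteristics matter
r = X @ beta + 0.05 * rng.normal(size=n_stocks)   # noisy next-period returns

lam = 10.0                                        # shrinkage strength
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_chars), X.T @ r)
print("five largest estimated coefficients:", np.round(np.sort(beta_hat)[-5:], 4))
```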
Machine Learning in Asset Pricing presents the exciting possibilities of using cutting-edge methods in research on financial asset valuation.
Graph Representation Learning 豆瓣
Author: William L. Hamilton Morgan & Claypool 2020 - 9
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis.
This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs--a nascent but quickly growing subset of graph representation learning.
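As a rough sketch of the message-passing idea behind the GNN formalism surveyed here (a minimal NumPy version under simplifying assumptions, not the book's notation): each node averages its neighbors' features and combines them with its own through learned weights and a nonlinearity.

```python
import numpy as np

def gnn_layer(A, H, W_self, W_neigh):
    """One message-passing layer with mean aggregation.
    A: (n, n) adjacency matrix, H: (n, d) node features,
    W_self, W_neigh: (d, d_out) weight matrices."""
    deg = A.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0                       # guard isolated nodes against division by zero
    messages = (A @ H) / deg                  # mean of neighbor features
    return np.maximum(0.0, H @ W_self + messages @ W_neigh)  # ReLU nonlinearity

# Toy usage: a 4-node path graph with random features and weights.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 3))
H_next = gnn_layer(A, H, rng.normal(size=(3, 8)), rng.normal(size=(3, 8)))
print(H_next.shape)  # (4, 8)
```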