Artificial Intelligence
The Complexity of Robot Motion Planning 豆瓣
Author: Canny, John F. 1988 - 6
The Complexity of Robot Motion Planning makes original contributions both to robotics and to the analysis of algorithms. In this groundbreaking monograph John Canny resolves long-standing problems concerning the complexity of motion planning and, for the central problem of finding a collision-free path for a jointed robot in the presence of obstacles, obtains exponential speedups over existing algorithms by applying high-powered new mathematical techniques. Canny's new algorithm for this "generalized movers' problem," the most studied and basic robot motion planning problem, has a single-exponential running time and is polynomial for any given robot. The algorithm has an optimal running-time exponent and is based on the notion of roadmaps: one-dimensional subsets of the robot's configuration space. In deriving the single-exponential bound, Canny introduces and reveals the power of two tools that had not previously been used in geometric algorithms: the generalized (multivariable) resultant for a system of polynomials and Whitney's notion of stratified sets. He has also developed a novel representation of object orientation based on unnormalized quaternions, which reduces the complexity of the algorithms and enhances their practical applicability. After dealing with the movers' problem, the book next attacks several extensions of the problem and derives lower bounds for them: finding the shortest path among polyhedral obstacles, planning with velocity limits, and compliant motion planning with uncertainty. It introduces a clever technique, "path encoding," that allows a proof of NP-hardness for the first two problems, and then shows that the general form of compliant motion planning, a problem that is the focus of a great deal of recent work in robotics, is nondeterministic-exponential-time (NEXPTIME) hard.
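The unnormalized-quaternion representation of orientation mentioned above can be illustrated with a short sketch (my own, not taken from the book): rotating a vector as q v q* and dividing by the squared norm makes normalization unnecessary, so the rotation is expressed by polynomials in q's four components.

```python
def quat_mul(a, b):
    # Hamilton product of quaternions given as (w, x, y, z) tuples.
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # Rotate vector v by an UNNORMALIZED quaternion q: (q v q*) / |q|^2.
    # The squared norm cancels the scale, so q and any multiple of q
    # encode the same rotation and no square roots are needed.
    w, x, y, z = q
    n = w*w + x*x + y*y + z*z
    conj = (w, -x, -y, -z)
    _, rx, ry, rz = quat_mul(quat_mul(q, (0.0, *v)), conj)
    return (rx / n, ry / n, rz / n)

# q = (1, 0, 0, 1) is an unnormalized 90-degree rotation about the z-axis:
# it sends the x-axis to the y-axis without ever being normalized.
print(rotate((1.0, 0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))
```

Because the norm cancels, rotations become polynomial maps of the quaternion components, which is what makes the representation attractive in algebraic algorithms.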
Canny proves this result using a highly original construction. John Canny received his doctorate from MIT and is an assistant professor in the Computer Science Division at the University of California, Berkeley. The Complexity of Robot Motion Planning is the winner of the 1987 ACM Doctoral Dissertation Award.
Gaussian Scale-Space Theory 豆瓣
Author: Sporring, Jon; Nielsen, Mads; Florack, L. M. J. Springer 2013 - 10
This book covers Gaussian scale-space theory from its applications to its mathematical foundations. The reader not yet familiar with scale-space will find it instructive to first consider some of the potential applications described in Part I. The next two parts both address fundamental aspects of scale-space. Whereas scale is treated as an essentially arbitrary constant in Part II, Part III emphasises the deep structure, i.e. the structure that is revealed by varying scale. Finally, Part IV is devoted to non-linear extensions, notably non-linear diffusion techniques and morphological scale-spaces, and their relation to the linear case. Audience: this volume is addressed to researchers in the field of image analysis seeking a mathematical foundation for their algorithms.
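A minimal sketch of the linear scale-space idea treated in Parts II and III (my own illustration, assuming only NumPy): convolve a 1-D signal with Gaussians of increasing standard deviation and watch fine structure, counted here as local maxima, get progressively suppressed.

```python
import numpy as np

def gaussian_scale_space(signal, sigmas):
    # Convolve a 1-D signal with Gaussian kernels of increasing width.
    # In 1-D, increasing the scale never creates new local extrema.
    levels = []
    for s in sigmas:
        r = int(4 * s) + 1
        x = np.arange(-r, r + 1)
        g = np.exp(-x**2 / (2 * s**2))
        g /= g.sum()                      # normalize to preserve the mean
        levels.append(np.convolve(signal, g, mode="same"))
    return levels

def count_local_maxima(f):
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 400)
signal = np.sin(6 * np.pi * t) + 0.3 * rng.standard_normal(400)
levels = gaussian_scale_space(signal, [1.0, 4.0, 16.0])
print([count_local_maxima(f) for f in levels])
```

The non-increasing count of extrema across scales is the kind of "deep structure" property the book studies formally.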
Scale-Space Theory in Computer Vision 豆瓣
Author: Tony Lindeberg Springer 1993
We perceive objects in the world as having structures at both coarse and fine scales. A tree, for instance, may appear as having a roughly round or cylindrical shape when seen from a distance, even though it is built up from a large number of branches. At a closer look, individual leaves become visible, and we can observe that they in turn have texture at an even finer scale. The fact that objects in the world appear in different ways, depending upon the scale of observation, has important implications when analyzing measured data, such as images, with automatic methods. Scale-Space Theory in Computer Vision describes a formal framework, called scale-space representation, for handling the notion of scale in image data. It gives an introduction to the general foundations of the theory and shows how it applies to essential problems in computer vision such as computation of image features and cues to surface shape. The subjects range from mathematical underpinning to practical computational techniques. The power of the methodology is illustrated by a rich set of examples.
Time Series Analysis by State Space Methods 豆瓣
Author: James Durbin / Siem Jan Koopman Clarendon Press 2001 - 6
This excellent text provides a comprehensive treatment of the state space approach to time series analysis. The distinguishing feature of state space time series models is that observations are regarded as being made up of distinct components, such as trend, seasonal, regression elements and disturbance terms, each of which is modelled separately. The techniques that emerge from this approach are very flexible and are capable of handling a much wider range of problems than the main analytical system currently in use for time series analysis, the Box-Jenkins ARIMA system. The book is a valuable source for the development of practical courses on time series analysis.
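A toy illustration (mine, not from the book) of the decomposition described above: simulated observations built from separately modelled trend, seasonal, and disturbance components, with the trend recovered here by a centred moving average. The book's own treatment would instead put these components in a state vector and run the Kalman filter and smoother.

```python
import numpy as np

# observations = trend + seasonal + disturbance, each modelled separately.
rng = np.random.default_rng(1)
n = 120
trend = 10.0 + 0.05 * np.arange(n)                       # slowly rising level
seasonal = 2.0 * np.sin(2 * np.pi * np.arange(n) / 12)   # period-12 cycle
disturbance = 0.5 * rng.standard_normal(n)
y = trend + seasonal + disturbance

# A 12-term moving average exactly averages out a period-12 seasonal,
# leaving the trend plus attenuated noise (the classical first step;
# a full state space analysis would use the Kalman filter instead).
kernel = np.ones(12) / 12
est_trend = np.convolve(y, kernel, mode="valid")
err = float(np.abs(est_trend - trend[6:6 + len(est_trend)]).mean())
print(round(err, 3))
```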
A Natural History of Human Thinking 豆瓣
Author: Michael Tomasello Harvard University Press 2014 - 2
Tool-making or culture, language or religious belief: ever since Darwin, thinkers have struggled to identify what fundamentally differentiates human beings from other animals. In this much-anticipated book, Michael Tomasello weaves his twenty years of comparative studies of humans and great apes into a compelling argument that cooperative social interaction is the key to our cognitive uniqueness. Once our ancestors learned to put their heads together with others to pursue shared goals, humankind was on an evolutionary path all its own.

Tomasello argues that our prehuman ancestors, like today's great apes, were social beings who could solve problems by thinking. But they were almost entirely competitive, aiming only at their individual goals. As ecological changes forced them into more cooperative living arrangements, early humans had to coordinate their actions and communicate their thoughts with collaborative partners. Tomasello's "shared intentionality hypothesis" captures how these more socially complex forms of life led to more conceptually complex forms of thinking. In order to survive, humans had to learn to see the world from multiple social perspectives, to draw socially recursive inferences, and to monitor their own thinking via the normative standards of the group. Even language and culture arose from the preexisting need to work together. What differentiates us most from other great apes, Tomasello proposes, are the new forms of thinking engendered by our new forms of collaborative and communicative interaction.

A Natural History of Human Thinking is the most detailed scientific analysis to date of the connection between human sociality and cognition.
Computational Statistics 豆瓣
Author: Geof H. Givens / Jennifer A. Hoeting Wiley 2012 - 11
Retaining the general organization and style of its predecessor, this new edition continues to serve as a comprehensive guide to modern and classical methods of statistical computing and computational statistics. Approaching the topic in three major parts (optimization, integration, and smoothing), the book includes an overview section in each chapter introduction and step-by-step implementation summaries to accompany the explanations of key methods; expanded coverage of Monte Carlo sampling and MCMC; a chapter on Alternative Viewpoints; a related Web site; new exercises; and more.
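As a tiny sketch of the integration part (my own example, not the book's), plain Monte Carlo integration estimates an integral by averaging the integrand over uniform draws:

```python
import numpy as np

# Estimate the integral of exp(-x^2) on [0, 1] by simple Monte Carlo:
# the sample mean of the integrand over uniform draws converges to the
# integral, with error shrinking like 1/sqrt(n).
rng = np.random.default_rng(42)
x = rng.uniform(0.0, 1.0, size=100_000)
estimate = float(np.exp(-x**2).mean())
print(round(estimate, 3))  # true value is about 0.7468
```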
The Subjectivity of Scientists and the Bayesian Approach 豆瓣
Author: S. James Press / Judith M. Tanur Wiley-Interscience 2001 - 4
Comparing and contrasting the reality of subjectivity in the work of history's great scientists with the modern Bayesian approach to statistical analysis. Scientists and researchers are taught to analyze their data from an objective point of view, allowing the data to speak for themselves rather than assigning them meaning based on expectations or opinions. But scientists have never behaved fully objectively. Throughout history, some of our greatest scientific minds have relied on intuition, hunches, and personal beliefs to make sense of empirical data, and these subjective influences have often aided in humanity's greatest scientific achievements. The authors argue that subjectivity has not only played a significant role in the advancement of science, but that science will advance more rapidly if the modern methods of Bayesian statistical analysis replace some of the classical twentieth-century methods that have traditionally been taught. To accomplish this goal, the authors examine the lives and work of history's great scientists and show that even the most successful have sometimes misrepresented findings or been influenced by their own preconceived notions of religion, metaphysics, and the occult, or by the personal beliefs of their mentors. Contrary to popular belief, our greatest scientific thinkers approached their data with a combination of subjectivity and empiricism, and thus informally achieved what is more formally accomplished by the modern Bayesian approach to data analysis. Yet we are still taught that science is purely objective. This innovative book dispels that myth using historical accounts and biographical sketches of more than a dozen great scientists, including Aristotle, Galileo Galilei, Johannes Kepler, William Harvey, Sir Isaac Newton, Antoine Lavoisier, Alexander von Humboldt, Michael Faraday, Charles Darwin, Louis Pasteur, Gregor Mendel, Sigmund Freud, Marie Curie, Robert Millikan, Albert Einstein, Sir Cyril Burt, and Margaret Mead.
Also included is a detailed treatment of the modern Bayesian approach to data analysis. Up-to-date references to the Bayesian theoretical and applied literature, as well as reference lists of the primary sources of the principal works of all the scientists discussed, round out this comprehensive treatment of the subject. Readers will benefit from this cogent and enlightening view of the history of subjectivity in science and the authors' alternative vision of how the Bayesian approach should be used to further the cause of science and learning well into the twenty-first century.
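The simplest worked instance of the Bayesian updating the authors advocate (a hypothetical sketch, not from the book): a conjugate Beta prior on a success probability revised by binomial data.

```python
# Beta-Binomial conjugate updating: a Beta(a, b) prior on a success
# probability p, combined with observed successes and failures, yields
# a Beta(a + successes, b + failures) posterior.
def beta_binomial_posterior_mean(a, b, successes, failures):
    return (a + successes) / (a + b + successes + failures)

# Start from a uniform prior Beta(1, 1) and observe 7 successes in 10
# trials; the posterior mean is 8/12, pulled slightly toward the prior.
print(beta_binomial_posterior_mean(1, 1, 7, 3))
```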
Fundamentals of Kalman Filtering 豆瓣
Author: Paul Zarchan / Howard Musoff AIAA (American Institute of Aeronautics & Astronautics) 2009 - 9
This is a practical guide to building Kalman filters that shows how the filtering equations can be applied to real-life problems. Numerous examples are presented in detail, showing the many ways in which Kalman filters can be designed. Computer code written in FORTRAN, MATLAB®, and True BASIC accompanies all of the examples so that the interested reader can verify concepts and explore issues beyond the scope of the text. In certain instances, the authors intentionally introduce mistakes to the initial filter designs to show the reader what happens when the filter is not working properly. The text carefully sets up a problem before the Kalman filter is actually formulated, to give the reader an intuitive feel for the problem being addressed. Because real problems are seldom presented as differential equations, and usually do not have unique solutions, the authors illustrate several different filtering approaches. Readers will gain experience in software and performance tradeoffs for determining the best filtering approach. The material that has been added to this edition is in response to questions and feedback from readers. The third edition has three new chapters on unusual topics related to Kalman filtering and other filtering techniques based on the method of least squares. Chapter 17 presents a type of filter known as the fixed or finite memory filter, which only remembers a finite number of measurements from the past. Chapter 18 shows how the chain rule from calculus can be used for filter initialization or to avoid filtering altogether. A realistic three-dimensional GPS example is used to illustrate the chain-rule method for filter initialization. Finally, Chapter 19 shows how a bank of linear sine-wave Kalman filters, each one tuned to a different sine-wave frequency, can be used to estimate the actual frequency of noisy sinusoidal measurements and obtain estimates of the states of the sine wave when the measurement noise is low.
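A minimal scalar example in the spirit of the book's worked problems (written in Python rather than the book's FORTRAN/MATLAB/True BASIC, and not taken from the text): a Kalman filter estimating a constant from noisy measurements.

```python
import numpy as np

def kalman_constant(measurements, meas_var, x0=0.0, p0=1e6):
    # Scalar Kalman filter for a constant state (no process noise).
    # x is the state estimate, p its error variance; a huge p0 says the
    # initial guess carries almost no information.
    x, p = x0, p0
    estimates = []
    for z in measurements:
        k = p / (p + meas_var)    # Kalman gain
        x = x + k * (z - x)       # measurement update
        p = (1 - k) * p           # covariance update
        estimates.append(x)
    return estimates

rng = np.random.default_rng(7)
truth = 5.0
z = truth + rng.standard_normal(200)   # unit-variance measurement noise
est = kalman_constant(z, meas_var=1.0)
print(round(est[-1], 2))
```

With no process noise and a diffuse prior, this filter reduces to the recursive sample mean, which is a useful sanity check before moving to richer models.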
Handbook of Latent Semantic Analysis 豆瓣
Author: Thomas K. Landauer / Danielle S. McNamara Lawrence Erlbaum 2007 - 2
The Handbook of Latent Semantic Analysis is the authoritative reference for the theory behind Latent Semantic Analysis (LSA), a burgeoning mathematical method used to analyze how words make meaning, with the goal of enabling machines to understand human commands expressed in natural language rather than in strict programming protocols. The first book of its kind to deliver such a comprehensive analysis, this volume explores every area of the method and combines its theoretical implications with the practical matters of LSA. Readers are introduced to a powerful new way of understanding language phenomena, as well as to innovative ways of performing tasks that depend on language or other complex systems. The Handbook clarifies misunderstandings and pre-formed objections to LSA, and provides examples of exciting new educational technologies made possible by LSA and similar techniques. It raises issues in philosophy, artificial intelligence, and linguistics, while describing how LSA has underwritten a range of educational technologies and information systems. Alternate approaches to language understanding are addressed and compared to LSA. This work is essential reading for anyone, newcomers to this area and experts alike, interested in how human language works or in computational analysis and uses of text. Educational technologists, cognitive scientists, philosophers, and information technologists in particular will find this volume especially useful.
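The mathematical core of LSA can be sketched in a few lines (a toy example with invented counts, not material from the Handbook): factor a term-document matrix with a truncated SVD and compare documents in the reduced "semantic" space.

```python
import numpy as np

# Toy term-document count matrix: rows are terms, columns are three
# tiny hypothetical documents. Documents 0 and 1 share robot/motion
# vocabulary; document 2 is about language.
A = np.array([[2, 1, 0],    # "robot"
              [1, 2, 0],    # "motion"
              [0, 0, 3],    # "grammar"
              [0, 1, 2]],   # "language"
             dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                   # truncate to k latent dimensions
docs = (np.diag(s[:k]) @ Vt[:k]).T      # documents in the LSA space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(docs[0], docs[1]) > cosine(docs[0], docs[2]))
```

Real LSA applications additionally weight the counts (e.g. log-entropy) before the SVD; the truncation itself is what lets similar-but-not-identical vocabularies end up near each other.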
Pattern Classification 豆瓣
Author: Richard O. Duda / Peter E. Hart Wiley-Interscience 2000 - 11
The first edition, published in 1973, has become a classic reference in the field. Now with the second edition, readers will find information on key new topics such as neural networks and statistical pattern recognition, the theory of machine learning, and the theory of invariances. Also included are worked examples, comparisons between different methods, extensive graphics, expanded exercises and computer project topics. An Instructor's Manual presenting detailed solutions to all the problems in the book is available from the Wiley editorial department.
Introduction To The Theory Of Neural Computation, Volume I 豆瓣
Author: John A. Hertz Westview Press 1991 - 6
This book comprehensively discusses neural network models from a statistical mechanics perspective. It starts with one of the most influential developments in the theory of neural networks: Hopfield's analysis of networks with symmetric connections, which uses the spin-system approach and the notion of an energy function from physics. Introduction to the Theory of Neural Computation uses these powerful tools to analyze neural networks as associative memory stores and as solvers of optimization problems. A detailed analysis of multi-layer networks and recurrent networks follows. The book ends with chapters on unsupervised learning and a formal treatment of the relationship between statistical mechanics and neural networks. Little information is provided about applications and implementations, and the treatment of the material reflects the background of the authors as physicists. However, the book is essential for a solid understanding of the computational potential of neural networks. Introduction to the Theory of Neural Computation assumes that the reader is familiar with undergraduate-level mathematics but does not assume any background in physics; all of the necessary tools are introduced in the book.
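Hopfield's energy-function analysis, which opens the book, can be illustrated with a minimal associative memory (a sketch of mine, not the authors' code): store one pattern with the Hebb rule, then recover it from a corrupted cue by repeatedly updating units, which never increases the energy E(s) = -s^T W s / 2.

```python
import numpy as np

# Store a single +/-1 pattern with the Hebb rule.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)                 # no self-connections

# Corrupt the cue by flipping two bits, then let the dynamics settle.
state = pattern.copy()
state[:2] *= -1
for _ in range(5):                       # a few unit-by-unit sweeps
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print(bool((state == pattern).all()))    # the stored pattern is recalled
```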
Learning with Kernels 豆瓣
Author: Bernhard Schölkopf / Alexander J. Smola The MIT Press 2001
In the 1990s, a new type of learning algorithm was developed, based on results from statistical learning theory: the Support Vector Machine (SVM). This gave rise to a new class of theoretically elegant learning machines that use a central concept of SVMs, kernels, for a number of learning tasks. Kernel machines provide a modular framework that can be adapted to different tasks and domains by the choice of the kernel function and the base algorithm. They are replacing neural networks in a variety of fields, including engineering, information retrieval, and bioinformatics. Learning with Kernels provides an introduction to SVMs and related kernel methods. Although the book begins with the basics, it also includes the latest research. It provides all of the concepts necessary to enable a reader equipped with some basic mathematical knowledge to enter the world of machine learning using theoretically well-founded yet easy-to-use kernel algorithms, and to understand and apply the powerful algorithms that have been developed over the last few years.
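The modularity described above (swap the kernel, keep the base algorithm) can be sketched with kernel ridge regression; this is an illustrative example of kernel methods generally, not code from the book.

```python
import numpy as np

# Two interchangeable kernel functions...
def rbf(X, Z, gamma=1.0):
    d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def poly(X, Z, degree=3):
    return (X @ Z.T + 1.0) ** degree

# ...plugged into one base algorithm: kernel ridge regression.
def kernel_ridge_fit_predict(kernel, X, y, X_test, lam=1e-3):
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return kernel(X_test, X) @ alpha

X = np.linspace(-1, 1, 30).reshape(-1, 1)
y = np.sin(3 * X).ravel()
for kernel in (rbf, poly):
    pred = kernel_ridge_fit_predict(kernel, X, y, X)
    print(kernel.__name__, round(float(np.abs(pred - y).max()), 3))
```

The flexible RBF kernel fits the training data nearly exactly, while the rank-limited cubic polynomial kernel cannot; only the kernel changed, never the learning algorithm.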
Neural Networks for Pattern Recognition 豆瓣
Author: Christopher M. Bishop Oxford University Press 1996 - 1
This book provides the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts of pattern recognition, the book describes techniques for modelling probability density functions, and discusses the properties and relative merits of the multi-layer perceptron and radial basis function network models. It also motivates the use of various forms of error functions, and reviews the principal algorithms for error function minimization. As well as providing a detailed discussion of learning and generalization in neural networks, the book also covers the important topics of data processing, feature extraction, and prior knowledge. The book concludes with an extensive treatment of Bayesian techniques and their applications to neural networks.
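A minimal sketch (not Bishop's code) of the kind of network analysed in the book: tanh hidden units and a logistic output that can be read as a class posterior probability, scored with a cross-entropy error of the sort the text motivates.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W1, b1, W2, b2):
    # Two-layer feed-forward network: tanh hidden layer, logistic output.
    h = np.tanh(x @ W1 + b1)
    return sigmoid(h @ W2 + b2)          # always strictly in (0, 1)

def cross_entropy(y, t):
    # Error function matched to a Bernoulli output distribution.
    return float(-(t * np.log(y) + (1 - t) * np.log(1 - y)).sum())

rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((2, 4)), np.zeros(4)
W2, b2 = rng.standard_normal(4), 0.0
x = np.array([[0.5, -1.0], [1.0, 1.0]])  # two hypothetical input vectors
t = np.array([1.0, 0.0])                 # hypothetical targets

y = forward(x, W1, b1, W2, b2)
print(np.all((y > 0) & (y < 1)), cross_entropy(y, t) > 0)
```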
Supervised Sequence Labelling with Recurrent Neural Networks 豆瓣
Author: Graves, Alex Springer 2012 - 2
Supervised sequence labelling is a vital area of machine learning, encompassing tasks such as speech, handwriting and gesture recognition, protein secondary structure prediction and part-of-speech tagging. Recurrent neural networks are powerful sequence learning tools, robust to input noise and distortion and able to exploit long-range contextual information, that would seem ideally suited to such problems. However, their role in large-scale sequence labelling systems has so far been auxiliary.
The goal of this book is a complete framework for classifying and transcribing sequential data with recurrent neural networks alone. Three main innovations are introduced in order to realise this goal. Firstly, the connectionist temporal classification output layer allows the framework to be trained with unsegmented target sequences, such as phoneme-level speech transcriptions; this is in contrast to previous connectionist approaches, which were dependent on error-prone prior segmentation. Secondly, multidimensional recurrent neural networks extend the framework in a natural way to data with more than one spatio-temporal dimension, such as images and videos. Thirdly, the use of hierarchical subsampling makes it feasible to apply the framework to very large or high-resolution sequences, such as raw audio or video.
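The first innovation can be made concrete with CTC's collapsing map (a sketch, not code from the book): merge repeated labels, then delete blanks, so many frame-level paths correspond to one unsegmented target sequence.

```python
# CTC collapsing: the output layer emits one label (or a blank) per frame;
# collapsing merges consecutive repeats and removes blanks, so the network
# can be trained against targets with no frame-level segmentation.
BLANK = "-"

def ctc_collapse(path):
    out = []
    prev = None
    for label in path:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

# Several different frame-wise paths all collapse to the transcription "cat".
print(ctc_collapse("cc-aa-t"), ctc_collapse("-c-at--"), ctc_collapse("caat"))
```

Note that a blank between two identical labels keeps them distinct ("a-a" collapses to "aa"), which is why the blank symbol is essential.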
Experimental validation is provided by state-of-the-art results in speech and handwriting recognition.