Artificial Intelligence
The Nature of Explanation 豆瓣
作者: Kenneth (K. J. W.) Craik Cambridge University Press 1967 - 10
In his brilliant and tragically brief career, Kenneth Craik anticipated certain ideas which since his death in 1945 have found wide acceptance. As one of the first to realise that machines share with the brain certain principles of functioning, Craik was a pioneer in the development of physiological psychology and cybernetics. Craik published only one complete work of any length, this essay on The Nature of Explanation. Here he considers thought as a term for the conscious working of a highly complex machine, viewing the brain as a calculating machine which can model or parallel external events, a process that is the basic feature of thought and explanation. He applies this view to a number of psychological and philosophical problems (such as paradox and illusion) and suggests possible experiments to test his theory. This book is of interest to those concerned with the concepts of brain and mind.
Metaphors of Memory 豆瓣
作者: Douwe Draaisma 译者: Paul Vincent Cambridge University Press 2001 - 1
What is memory? It is at the same time ephemeral, unreliable and essential to everything we do. Without memory we lose our sense of identity, reasoning, even our ability to perform simple physical tasks. Yet it is also elusive and difficult to define, and throughout the ages philosophers and psychologists have used metaphors as a way of understanding it. First published in 2000, this fascinating book takes the reader on a guided tour of these metaphors of memory from ancient times to the present day. Crossing continents and disciplines, it provides a compelling history of ideas about the mind by exploring the way these metaphors have been used - metaphors often derived from the techniques and instruments developed over the years to store information, ranging from wax tablets and books to photography, computers and even the hologram. Accessible and thought-provoking, this book should be read by anyone who is interested in memory and the mind.
The Neurobiology of Learning and Memory 豆瓣
作者: Rudy, Jerry W. 2008 - 1
This title is a full-colour, accessible synthesis of the interdisciplinary field of the neurobiology of learning and memory. Understanding how the brain learns and remembers requires integrating psychological concepts and behavioral methods with the mechanisms of synaptic plasticity and with systems neuroscience. This new, extensively illustrated, full-colour textbook provides a synthesis of this 'hot' interdisciplinary field, with each chapter making the key concepts transparent and accessible.
Sparse Distributed Memory 豆瓣
作者: Kanerva, Pentti The MIT Press 2003 - 1
Motivated by the remarkable fluidity of memory, the way in which items are pulled spontaneously and effortlessly from our memory by vague similarities to what is currently occupying our attention, Sparse Distributed Memory presents a mathematically elegant theory of human long-term memory.
The book, which is self-contained, begins with background material from mathematics, computers, and neurophysiology; this is followed by a step-by-step development of the memory model. The concluding chapter describes an autonomous system that builds from experience an internal model of the world and bases its operation on that internal model. Close attention is paid to the engineering of the memory, including comparisons to ordinary computer memories.
Sparse Distributed Memory provides an overall perspective on neural systems. The model it describes can aid in understanding human memory and learning, and a system based on it sheds light on outstanding problems in philosophy and artificial intelligence. Applications of the memory are expected to be found in the creation of adaptive systems for signal processing, speech, vision, motor control, and (in general) robots. Perhaps the most exciting aspect of the memory, in its implications for research in neural networks, is that its realization with neuronlike components resembles the cortex of the cerebellum.
Pentti Kanerva is a scientist at the Research Institute for Advanced Computer Science at the NASA Ames Research Center and a visiting scholar at the Stanford Center for the Study of Language and Information. A Bradford Book.
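For readers who want a feel for the mechanism, the sketch below is a minimal, illustrative sparse distributed memory in Python, not Kanerva's full construction: a fixed set of randomly addressed hard locations, a Hamming-distance activation radius, and counter vectors that accumulate written patterns. The class name, parameter names, and specific dimensions are hypothetical choices for the demo.

```python
import numpy as np

class SparseDistributedMemory:
    """Minimal sketch of a Kanerva-style sparse distributed memory.
    Hypothetical parameters: n-bit addresses, m random 'hard' locations,
    and a Hamming-distance activation radius."""

    def __init__(self, n_bits=256, n_locations=2000, radius=110, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random addresses of the hard locations (0/1 vectors).
        self.addresses = rng.integers(0, 2, size=(n_locations, n_bits))
        # Integer counters accumulate the data written at each location.
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # A location fires if its address is within the Hamming radius.
        dist = np.sum(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Add the data (mapped to -1/+1) into every active location's counters.
        active = self._active(address)
        self.counters[active] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Sum the counters of active locations and threshold at zero.
        active = self._active(address)
        sums = self.counters[active].sum(axis=0)
        return (sums > 0).astype(int)

# Usage: store a pattern autoassociatively, then cue with a noisy copy.
rng = np.random.default_rng(1)
sdm = SparseDistributedMemory()
pattern = rng.integers(0, 2, size=256)
sdm.write(pattern, pattern)
noisy = pattern.copy()
flip = rng.choice(256, size=20, replace=False)
noisy[flip] ^= 1
recalled = sdm.read(noisy)
print("bits recovered:", np.sum(recalled == pattern), "of 256")
```

Reading with a merely similar cue activates much the same set of hard locations as the original write did, which is how the model retrieves a stored item from a vague resemblance.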
错觉 豆瓣
The AI Delusion
作者: [美]加里·史密斯 译者: 钟欣奕 中信出版社 2019 - 11
With artificial intelligence as hot as it is today, many people believe that we live in an extraordinary period of history and that AI and big data may change our lives more profoundly than the Industrial Revolution did. That claim is overstated: our lives may well change, but not necessarily for the better. We assume too readily that computers make no mistakes when they search and process mountains of data. In fact, computers are merely good at collecting, storing, and searching data. They have no common sense or wisdom; they do not know what numbers and words mean; they cannot assess the relevance or validity of what sits in a database; they lack the human judgment needed to tell genuine data from fake or bad data, and the human intelligence needed to distinguish well-founded statistical models from spurious ones.
Mining big data with computers is all the rage, but data mining is artificial rather than intelligent, and it is an especially arduous and dangerous form of artificial intelligence. Data mining first combs through masses of trends and correlations to find models that please us yet have no practical value, and then invents theories to explain those models. Using examples such as the "Smith test" and the "Texas sharpshooter fallacy", the author shows that if you mine and torture data long enough and in large enough quantities, you will always get the result you want. What you get, however, is correlation rather than causation, mere self-selection bias, with neither a theoretical foundation nor practical value.
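The "torture the data long enough" point is easy to reproduce. The short simulation below is mine, not the book's: it correlates one random series with thousands of equally random candidate predictors, and the best correlation found by mining looks impressive even though every true correlation is zero.

```python
import numpy as np

# Illustrative simulation (not from the book): mine thousands of random
# 'predictors' for a relationship with a random 'outcome'. With enough
# candidates, a strong-looking but meaningless correlation always turns up.
rng = np.random.default_rng(42)
n_obs, n_candidates = 50, 2000

outcome = rng.normal(size=n_obs)
candidates = rng.normal(size=(n_candidates, n_obs))

# Pearson correlation of each candidate series with the outcome.
corrs = np.array([np.corrcoef(c, outcome)[0, 1] for c in candidates])

print(f"typical |r| among candidates: {np.median(np.abs(corrs)):.2f}")
print(f"best |r| found by mining:     {np.max(np.abs(corrs)):.2f}")
```

The "discovered" predictor has no theory behind it and no predictive value on fresh data; it is the Texas sharpshooter drawing the target around the bullet holes.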
In the age of AI, our affection for computers should not crowd out reflection on their limitations. The real danger is not that computers are smarter than we are, but that we believe computers possess human wisdom and common sense, that data mining amounts to "knowledge discovery", and that we can therefore trust computers to make important decisions for us. More computing power and more data do not mean more intelligence; we need to place more confidence in human wisdom.
Principles of Synthetic Intelligence PSI 豆瓣
作者: Bach, Joscha 2009 - 4
Although computational models of cognition have become very popular, these models are relatively limited in their coverage of cognition: they usually emphasize only problem solving and reasoning, or treat perception and motivation as isolated modules. The first architecture to cover cognition more broadly is the Psi theory developed by Dietrich Dörner. By integrating motivation and emotion with perception and reasoning, and by including grounded neuro-symbolic representations, Psi contributes significantly to an integrated understanding of the mind. It provides a conceptual framework that highlights the relationships between perception and memory, language and mental representation, reasoning and motivation, emotion and cognition, and autonomy and social behavior. Unfortunately, Psi's origin in psychology, its methodology, and its lack of documentation have limited its impact. This book adapts Psi theory to cognitive science and artificial intelligence by elucidating both its theoretical and technical frameworks and by clarifying its contribution to how we have come to understand cognition.
Competing in the Age of AI 豆瓣
作者: Marco Iansiti / Karim R. Lakhani Harvard Business Review Press 2020 - 1
In industry after industry, data, analytics, and AI-driven processes are transforming the nature of work. While we often still treat AI as the domain of a specific skill, business function, or sector, we have entered a new era in which AI is challenging the very concept of the firm. AI-centric organizations exhibit a new operating architecture, redefining how they create, capture, share, and deliver value.
Marco Iansiti and Karim R. Lakhani show how reinventing the firm around data, analytics, and AI removes traditional constraints on scale, scope, and learning that have held back business growth for hundreds of years. From Airbnb to Ant Financial, Microsoft to Amazon, their research shows how AI-driven processes are vastly more scalable than traditional processes, drive massive increases in scope by enabling companies to straddle industry boundaries, and create powerful opportunities for learning that drive ever more accurate, complex, and sophisticated predictions.
When traditional operating constraints are removed, strategy becomes a whole new game, one whose rules and likely outcomes this book will make clear. Iansiti and Lakhani:
Present a framework for rethinking business and operating models
Explain how "collisions" between AI-driven/digital and traditional/analog firms are reshaping competition and altering the structure of our economy
Show how these collisions force traditional companies to change their operating models to drive scale, scope, and learning
Explain the risks involved in operating model transformation and how to overcome them
Describe the new challenges and responsibilities for the leaders of these firms
Packed with examples--including the most powerful and innovative global, AI-driven competitors--and based on research in hundreds of firms across many sectors, this is the essential guide for rethinking how your firm competes and operates in the era of AI.
A Thousand Brains 豆瓣
作者: Jeff Hawkins Basic Books 2021 - 3
For all of neuroscience's advances, we've made little progress on its biggest question: How do simple cells in the brain create intelligence?
Jeff Hawkins and his team discovered that the brain uses maplike structures to build a model of the world: not just one model, but hundreds of thousands of models of everything we know. This discovery allows Hawkins to answer important questions about how we perceive the world, why we have a sense of self, and the origin of high-level thought.
A Thousand Brains heralds a revolution in the understanding of intelligence. It is a big-think book, in every sense of the word.
The Great Legal Reformation 豆瓣
作者: Mitchell Kowalski iUniverse 2017 - 9
It’s refreshing that this book does not simply look to advances in technology and artificial intelligence as the cause or the future of the Great Legal Reformation. Through in-depth case studies and vignettes, Mitch Kowalski takes us on a tour to meet some of the trailblazers breaking the legal service provider mould, allowing us to eavesdrop on his conversations with them. This is not a glimpse into the future of how he and others might see the legal world developing as the Great Legal Reformation unfolds. This is insight into the here and now, into what these innovators have already envisioned and achieved. These are the platforms from which yet further innovation and re-formation of the market will be driven. From the power and opportunity of regulatory change to enable structural change, access to capital and the participation of people who happen not to be lawyers; through the need to focus on efficiency, continuous improvement, process and project management; to the enduring value of vision, culture, values, leadership, energy and employee engagement, these studies and conversations inform, reveal and challenge. They do not present the new world through rose-tinted glasses or deny the existence of risk: the story of Slater & Gordon’s mixed fortunes is testament to that. But they do show a different way of thinking and acting. Whether lawyers like it or not, these are initiatives that buyers of legal services welcome. —Stephen Mayson, strategic advisor to law departments, legal services providers and regulators
“This is an indispensable handbook for any aspiring legal innovator—a well-researched, accessible, and fascinating collection of dispatches from the cutting edge of legal business.” —Professor Richard Susskind OBE, author of Tomorrow’s Lawyers
“Mitch Kowalski … shows us what the new professional world actually does look like. He takes us on a tour of Great Britain, Australia, and the United States, and introduces us to lawyers in big firms and small, serving clients both private and public. The picture that emerges is of a new breed of legal service provider that embraces entrepreneurship, teamwork and technology in a way that seems both unfamiliar and obvious to all lawyers.” —Dr Ian Holloway PC QC, Professor and Dean of Law, The University of Calgary
“This book will either give you hope or a much needed kick in the pants. Either way it's a win-win.” —Stephen Allen, legal innovator, Hogan Lovells
“Mitch Kowalski does it again. Diving deep inside some of the world’s most innovative legal providers, Mitch discovers the future of law in the present. A must read for anyone involved in the legal profession.” —John Chisholm, leading Australian legal commentator and advisor
What Makes Us Smart 豆瓣
作者: Samuel Gershman Princeton University Press 2021
At the heart of human intelligence rests a fundamental puzzle: How are we incredibly smart and stupid at the same time? No existing machine can match the power and flexibility of human perception, language, and reasoning. Yet, we routinely commit errors that reveal the failures of our thought processes. What Makes Us Smart makes sense of this paradox by arguing that our cognitive errors are not haphazard. Rather, they are the inevitable consequences of a brain optimized for efficient inference and decision making within the constraints of time, energy, and memory—in other words, data and resource limitations. Framing human intelligence in terms of these constraints, Samuel Gershman shows how a deeper computational logic underpins the “stupid” errors of human cognition.
Embarking on a journey across psychology, neuroscience, computer science, linguistics, and economics, Gershman presents unifying principles that govern human intelligence. First, inductive bias: any system that makes inferences based on limited data must constrain its hypotheses in some way before observing data. Second, approximation bias: any system that makes inferences and decisions with limited resources must make approximations. Applying these principles to a range of computational errors made by humans, Gershman demonstrates that intelligent systems designed to meet these constraints yield characteristically human errors.
Examining how humans make intelligent and maladaptive decisions, What Makes Us Smart delves into the successes and failures of cognition.
Graph Representation Learning 豆瓣
作者: William L. Hamilton Morgan & Claypool 2020 - 9
Graph-structured data is ubiquitous throughout the natural and social sciences, from telecommunication networks to quantum chemistry. Building relational inductive biases into deep learning architectures is crucial for creating systems that can learn, reason, and generalize from this kind of data. Recent years have seen a surge in research on graph representation learning, including techniques for deep graph embeddings, generalizations of convolutional neural networks to graph-structured data, and neural message-passing approaches inspired by belief propagation. These advances in graph representation learning have led to new state-of-the-art results in numerous domains, including chemical synthesis, 3D vision, recommender systems, question answering, and social network analysis.
This book provides a synthesis and overview of graph representation learning. It begins with a discussion of the goals of graph representation learning as well as key methodological foundations in graph theory and network analysis. Following this, the book introduces and reviews methods for learning node embeddings, including random-walk-based methods and applications to knowledge graphs. It then provides a technical synthesis and introduction to the highly successful graph neural network (GNN) formalism, which has become a dominant and fast-growing paradigm for deep learning with graph data. The book concludes with a synthesis of recent advancements in deep generative models for graphs--a nascent but quickly growing subset of graph representation learning.
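As a flavour of the message-passing formalism the book covers, here is a deliberately minimal single GNN layer in NumPy: each node averages its neighbours' features, applies a learned linear map, combines the result with its own transformed features, and passes the sum through a nonlinearity. The weights, the tiny graph, and the function names are placeholder assumptions for illustration, not the book's notation or any particular library's API.

```python
import numpy as np

def gnn_layer(adjacency, features, weight_self, weight_neigh):
    """One sketch of a message-passing layer: aggregate neighbour features
    (mean), transform them, combine with the node's own transformed
    features, and apply ReLU. Weights here are random stand-ins."""
    degree = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neighbour_mean = (adjacency @ features) / degree      # aggregate step
    hidden = features @ weight_self + neighbour_mean @ weight_neigh
    return np.maximum(hidden, 0.0)                         # update step

# Tiny 4-node path graph 0-1-2-3 with 2-dimensional node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.5, 0.5]])

rng = np.random.default_rng(0)
W_self, W_neigh = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))

H1 = gnn_layer(A, X, W_self, W_neigh)   # first layer: 1-hop information
H2 = gnn_layer(A, H1, W_self, W_neigh)  # second layer: 2-hop information
print(H2)
```

Stacking layers lets information propagate further across the graph, which is the basic idea behind the deep graph embeddings and neural message-passing approaches surveyed in the book.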
Cambrian Intelligence: The Early History of the New AI Goodreads 豆瓣
作者: Rodney A. Brooks The MIT Press 1999 - 7 其它标题: Cambrian Intelligence
Until the mid-1980s, AI researchers assumed that an intelligent system doing high-level reasoning was necessary for the coupling of perception and action. In this traditional model, cognition mediates between perception and plans of action. Realizing that this core AI, as it was known, was illusory, Rodney A. Brooks turned the field of AI on its head by introducing the behavior-based approach to robotics. The cornerstone of behavior-based robotics is the realization that the coupling of perception and action gives rise to all the power of intelligence and that cognition is only in the eye of an observer. Behavior-based robotics has been the basis of successful applications in entertainment, service industries, agriculture, mining, and the home. It has given rise to both autonomous mobile robots and more recent humanoid robots such as Brooks' Cog. This book represents Brooks' initial formulation of and contributions to the development of the behavior-based approach to robotics. It presents all of the key philosophical and technical ideas that put this bottom-up approach at the forefront of current research in not only AI but all of cognitive science.
Vehicles 豆瓣
作者: Valentino Braitenberg A Bradford Book 1986 - 2
These imaginative thought experiments are the inventions of one of the world's eminent brain researchers. They are "vehicles," a series of hypothetical, self-operating machines that exhibit increasingly intricate if not always successful or civilized "behavior." Each of the vehicles in the series incorporates the essential features of all the earlier models and along the way they come to embody aggression, love, logic, manifestations of foresight, concept formation, creative thinking, personality, and free will. In a section of extensive biological notes, Braitenberg locates many elements of his fantasy in current brain research.
Valentino Braitenberg is a director of the Max Planck Institute for Biological Cybernetics and Honorary Professor of Information Science at the University of Tübingen, West Germany. A Bradford Book.
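The flavour of the simplest vehicles is easy to capture in code. The toy simulation below is an assumption-laden sketch, not Braitenberg's own formulation: two light sensors drive two wheels, wired either to the same side ("fear", Vehicle 2a) or crossed ("aggression", Vehicle 2b), and the two wirings steer the vehicle away from or toward a light source. Every constant in it is an arbitrary choice for illustration.

```python
import numpy as np

def simulate_vehicle(crossed_wiring, steps=200, dt=0.1):
    """Toy Braitenberg Vehicle 2: two light sensors drive two wheels.
    Uncrossed wiring ('fear') turns the vehicle away from the light;
    crossed wiring ('aggression') turns it toward the light."""
    light = np.array([0.0, 0.0])           # light source at the origin
    pos = np.array([3.0, 1.0])             # starting position
    heading = np.pi                         # facing roughly toward the light
    for _ in range(steps):
        # Sensors sit slightly to the left and right of the heading.
        readings = []
        for off in (heading + 0.5, heading - 0.5):
            sensor = pos + 0.2 * np.array([np.cos(off), np.sin(off)])
            d = np.linalg.norm(sensor - light)
            readings.append(1.0 / (d * d + 0.1))   # brighter when closer
        left_sensor, right_sensor = readings
        if crossed_wiring:
            left_motor, right_motor = right_sensor, left_sensor
        else:
            left_motor, right_motor = left_sensor, right_sensor
        speed = 0.5 * (left_motor + right_motor)
        turn = right_motor - left_motor            # differential drive
        heading += turn * dt
        pos = pos + speed * dt * np.array([np.cos(heading), np.sin(heading)])
    return np.linalg.norm(pos - light)

print("final distance, uncrossed ('fear'):    ", round(simulate_vehicle(False), 2))
print("final distance, crossed ('aggression'):", round(simulate_vehicle(True), 2))
```

Even with nothing but two sensor-to-motor wires, the two vehicles end up behaving as if one dislikes the light and the other hunts it, which is exactly the observer-ascribed "psychology" Braitenberg plays with.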