Updates
- New versions and new lectures up (all the way to lecture 4, on planning)
- New versions of lectures 01, 02 are up
- New website up before the semester starts.
Overview
There are many courses that cover specific areas within artificial intelligence, such as natural language processing, knowledge representation, planning, computer vision, machine learning, knowledge-based systems, decision-making, multi-agent systems, etc. This course focuses instead on integrating artificial intelligence capabilities into a single agent. It seeks to provide an understanding of how to build a synthetic mind out of the various components already familiar to AI researchers.
Readings
See here for a complete list.
Lecture Notes
(incrementally available below)
Lecture 1-2: Intelligence, Knowledge, and the Agent
- Current fault lines in AI research run along capabilities (components of intelligence) rather than along types of environments (hypothesis: the specifics of agent architecture and components will differ across environments).
- Allen Newell’s Knowledge-Level description of systems distinguishes knowledge from intelligence. Intelligence approximates knowledge.
- Reading: Allen Newell’s [The Knowledge Level](readings/knowledge-level/The knowledge level-aij.pdf).
- Optional: Tom Dietterich’s follow-up to Newell’s paper discusses [Knowledge Level Analysis of Learning Programs](readings/knowledge-level/The knowledge level kll-tr.pdf).
- The complete agent has action-selection and perception processes.
- The simplest action-selection loop we could imagine
- Optional reading: Allen Newell’s 20-questions speech (“You cannot play 20 questions with nature and win”) is a call to arms in psychology and cognitive modeling to conduct research on complete models rather than micro-theories (here’s a summary). The call applies just as much to AI research, and to the purpose of this course: motivating research into complete agents and their structure, rather than into specific components.
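The “simplest action-selection loop we could imagine” can be sketched as a bare sense-act cycle. This is a minimal illustration, not code from the lecture: the action set, the `random_agent` policy, and the absence of any percept are all hypothetical placeholders.

```python
import random

# The simplest action-selection loop: the agent repeatedly "perceives"
# and selects an action, here uniformly at random (no knowledge at all).
# ACTIONS and the agent/run names are illustrative, not from the course.

ACTIONS = ["left", "right", "forward", "stop"]

def random_agent(percept):
    """Action selection with zero knowledge: pick any action."""
    return random.choice(ACTIONS)

def run(agent, steps=5):
    percept = None  # no perception model yet; perception comes in Lecture 3
    history = []
    for _ in range(steps):
        action = agent(percept)
        history.append(action)  # in a real agent, the action would change the environment
    return history

print(run(random_agent))
```

Even this trivial loop fixes the overall shape of the agent: a cycle of perceiving and acting, with all later sophistication (models, planning, learning) slotted into the selection step.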
Lecture 3: Perception and Simple Knowledge Representation
- Perception (Lecture)
- Knowledge representation and reasoning (KR) is a deep and wide field in AI
- Fluents are a very basic representation
- Grounded fluent literals (fluents with assigned values, no variables) are a basic unit of the theory.
- A good start, for deterministic environments
- Combinations of fluents can be used to represent states, through factored state representations
- The perception process yields a collection of fluents
- There are different ways to organize perception processes with respect to other cognition processes
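The ideas above — grounded fluent literals combined into a factored state, with perception producing fluents — can be sketched as follows. The fluent names, the state encoding as a dictionary, and the `perceive` function are all illustrative assumptions, not the lecture’s notation.

```python
# A grounded fluent literal assigns a value to a fluent with no
# variables, e.g. at(robot) = room1 or holding(robot, cup) = True.
# A factored state is simply the collection of such assignments.
# All fluent names here are made-up examples.

state = {
    ("at", "robot"): "room1",
    ("holding", "robot", "cup"): True,
    ("door-open", "room1", "room2"): False,
}

def perceive(raw_observation):
    """A perception process maps raw sensor data to fluents.
    raw_observation is a hypothetical stand-in for real sensor input."""
    return {("at", "robot"): raw_observation["location"]}

# Perception updates the agent's factored state with fresh fluents.
state.update(perceive({"location": "room2"}))
print(state[("at", "robot")])  # room2
```

For deterministic environments this is a good start: each fluent varies independently, so the state space factors into the product of the fluents’ value ranges.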
Lecture 4: Planning (in Deterministic Environments)
- Planning (Lecture)
- Random selection of actions can be improved by adding a bit of knowledge: a model of actions.
- Basic action model: when it is applicable, and what effects it has
- Using the same knowledge in a more goal-oriented fashion, we can build a more intelligent agent.
- Use a planner to find a sequence of actions that leads to the goal.
- There is an entire subfield of AI devoted to investigating AI planning (and scheduling).
- A great resource for learning more about planning: the book by Ghallab, Nau, & Traverso, called Automated Planning and Acting, published by Cambridge University Press, 2016.
- PDDL (Planning Domain Definition Language) is a standard language used to define domains and the actions available in them, typically for use by planners to solve planning problems (also defined in PDDL) in those domains. We use it in a more general setting, to also define execution problems, which generalize planning problems.
- Prof. Dana Nau has a good presentation introducing the representation of planning problems.
- Planning is most often carried out by a search algorithm. The algorithm may search through the space of possible states, or it may search through the space of possible plans. Two presentations by Prof. Gerhard Wickler explain state-space planning vs. plan-space planning. Dana Nau also explains plan-space planning.
- The result of planning may be a totally-ordered or a partially-ordered plan, or a policy.
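The pieces above — a basic action model (when an action is applicable, and what effects it has) driving a state-space search toward the goal — can be sketched in a STRIPS-like style. This is a minimal illustration under assumed names (the door-opening domain and all action names are made up; it is not PDDL and not from the course materials):

```python
from collections import deque

# STRIPS-style action model: an action is applicable when its
# preconditions hold in the state, and its effects add/delete fluents.
# The domain (a robot opening and passing a door) is a toy example.
ACTIONS = {
    "go-to-door": {"pre": set(), "add": {"at-door"}, "del": set()},
    "open-door":  {"pre": {"at-door"}, "add": {"door-open"}, "del": set()},
    "go-through": {"pre": {"at-door", "door-open"},
                   "add": {"in-room2"}, "del": {"at-door"}},
}

def applicable(state, a):
    return ACTIONS[a]["pre"] <= state  # all preconditions hold

def apply_action(state, a):
    return (state - ACTIONS[a]["del"]) | ACTIONS[a]["add"]

def plan(start, goal):
    """Breadth-first state-space planning: search the space of states
    reachable via applicable actions until the goal fluents hold."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps  # a totally-ordered plan (sequence of actions)
        for a in ACTIONS:
            if applicable(state, a):
                nxt = frozenset(apply_action(state, a))
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [a]))
    return None  # goal unreachable

print(plan(set(), {"in-room2"}))  # ['go-to-door', 'open-door', 'go-through']
```

The result here is a totally-ordered plan; plan-space planners instead search over partially built plans and may return partially-ordered ones.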
Lecture 5: Behavior Arbitration (I): Selection
- A different view on representing plans for execution: behaviors
- Task: arbitrate between behaviors (first part of lecture: Introduction to Behavior-Arbitration)
- One approach: select one behavior to take over control, via Deterministic Behavior Selection
- or Non-Deterministic and Hierarchical Behavior Selection (Recipes)
- We distinguish hierarchical from layered models.
- Behavior arbitration may also take place via local, distributed methods, e.g., using activation functions
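Deterministic behavior selection can be sketched as a fixed-priority arbiter: behaviors are checked in a fixed order, and the first one whose trigger condition holds takes over control. The behavior names, triggers, and commands below are made-up examples, not the lecture’s.

```python
# Priority-based deterministic behavior selection: the highest-priority
# behavior whose trigger fires gets exclusive control of the agent.
# Behaviors, triggers, and commands are hypothetical illustrations.

BEHAVIORS = [
    ("avoid-obstacle", lambda p: p["obstacle-near"], "turn-away"),
    ("seek-goal",      lambda p: p["goal-visible"],  "move-to-goal"),
    ("wander",         lambda p: True,               "random-walk"),  # default
]

def select(percept):
    """Return the first (highest-priority) applicable behavior."""
    for name, trigger, command in BEHAVIORS:
        if trigger(percept):
            return name, command
    return None

print(select({"obstacle-near": False, "goal-visible": True}))
# ('seek-goal', 'move-to-goal')
```

Because exactly one behavior wins outright, selection is a winner-take-all scheme; fusion approaches (Lecture 9) instead blend the outputs of several behaviors.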
Lectures 6-7: Integrating Machine Learning
- Most machine learning is studied in isolation from agents
- Where does machine learning fit in the intelligent agent?
- Part I
- Part II
Lecture 8: Hybrid Architectures
- Real-time Control System (RCS)
- Three-Layer Architectures. A nice article that surveys the history of these architectures and reflects on why they work well was written by Erann Gat in 1998.
Lecture 9: Behavior Arbitration (II): Fusion
- Another approach to arbitration of behaviors: fuse information and decisions from multiple behaviors. Compromise!
- Looked at potential fields and the Payton-Rosenblatt architecture (DAMN).
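Command fusion in the Payton-Rosenblatt / DAMN style can be sketched as weighted voting over a shared set of candidate commands: each behavior votes over all candidates, and the arbiter picks the candidate with the highest weighted total, which may be a compromise that no single behavior preferred. The behaviors, weights, and vote values below are made up for illustration.

```python
# DAMN-style command fusion: behaviors distribute votes over a shared
# set of candidate steering commands; a weighted sum picks the winner.
# Behaviors, weights, and vote profiles are hypothetical examples.

CANDIDATES = ["hard-left", "left", "straight", "right", "hard-right"]

# behavior -> (weight, votes over CANDIDATES, in order)
BEHAVIORS = {
    "avoid-obstacle": (1.0, [0.9, 0.7, 0.1, 0.0, 0.0]),  # prefers hard-left
    "follow-road":    (1.0, [0.0, 0.4, 0.9, 0.4, 0.0]),  # prefers straight
}

def fuse(behaviors):
    """Weighted vote fusion: sum each behavior's weighted votes per
    candidate and return the candidate with the highest total."""
    totals = [0.0] * len(CANDIDATES)
    for weight, votes in behaviors.values():
        for i, v in enumerate(votes):
            totals[i] += weight * v
    best = max(range(len(CANDIDATES)), key=lambda i: totals[i])
    return CANDIDATES[best]

print(fuse(BEHAVIORS))  # left — a compromise: neither behavior's top choice
```

Unlike winner-take-all selection (Lecture 5), no single behavior takes over; the fused command balances obstacle avoidance against road following.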