02 - Intelligent Agents Handout

Intelligent Agents

Dr. Richard J. Povinelli
Based on the AIMA slides © Stuart Russell and Peter Norvig, 1998, Chapter 2

Objectives

You should
  • be able to provide a definition of a rational agent.
  • be able to compare and contrast various agents, including reflex, goal-based, and utility-based agents.
  • be able to classify the environment in which a particular agent operates.

Agents and environments

Agents include humans, robots, softbots, thermostats, ....
The agent function maps from percept histories to actions:

  f : P* → A

The agent program runs on the physical architecture to produce f.
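A minimal sketch of f : P* → A in code (not from the handout; the class name and the sample table are hypothetical): an agent program that realizes an explicitly tabulated agent function by accumulating the percept history.

  # Sketch: the agent function f maps a whole percept history to an action,
  # here encoded as a lookup table. The percept encoding follows the
  # vacuum-cleaner world introduced below.
  class TableDrivenAgent:
      def __init__(self, table):
          self.table = table        # maps tuple-of-percepts -> action
          self.percepts = []        # the percept history seen so far

      def __call__(self, percept):
          self.percepts.append(percept)
          return self.table.get(tuple(self.percepts), "NoOp")

  # A tiny fragment of such a table (illustrative only).
  table = {
      (("A", "Dirty"),): "Suck",
      (("A", "Clean"),): "Right",
      (("A", "Clean"), ("B", "Dirty")): "Suck",
  }
  agent = TableDrivenAgent(table)
  print(agent(("A", "Dirty")))      # -> Suck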

Vacuum-cleaner world

  • Percepts: location and contents, e.g., [A, Dirty]
  • Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent

  Percept sequence          Action
  [A,Clean]                 Right
  [A,Dirty]                 Suck
  [B,Clean]                 Left
  [B,Dirty]                 Suck
  [A,Clean], [A,Clean]      Right
  [A,Clean], [A,Dirty]      Suck

  function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

What is the right function? Can it be implemented in a small agent program?
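The Reflex-Vacuum-Agent pseudocode above transcribes directly to Python. A sketch, assuming the percept is a (location, status) pair:

  # Sketch: Python version of Reflex-Vacuum-Agent; percepts are assumed to be
  # (location, status) pairs such as ("A", "Dirty").
  def reflex_vacuum_agent(percept):
      location, status = percept
      if status == "Dirty":
          return "Suck"
      elif location == "A":
          return "Right"
      elif location == "B":
          return "Left"

  print(reflex_vacuum_agent(("A", "Dirty")))   # -> Suck
  print(reflex_vacuum_agent(("B", "Clean")))   # -> Left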

Rationality

A fixed performance measure evaluates the environment sequence, e.g.,
  • one point per square cleaned up in time T?
  • one point per clean square per time step, minus one per move?
  • penalize for > k dirty squares?

A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date.

  • Rational ≠ omniscient
  • Rational ≠ clairvoyant
  • Rational ≠ successful
  • Rational ⇒ exploration, learning, autonomy
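As a worked sketch of the second candidate measure above (the encoding of the environment history is an assumption, not from the handout):

  # Sketch: score an environment history under "one point per clean square per
  # time step, minus one per move". Each history entry is (squares, action),
  # where squares maps a square name to "Clean" or "Dirty".
  def performance(history):
      score = 0
      for squares, action in history:
          score += sum(1 for s in squares.values() if s == "Clean")
          if action in ("Left", "Right"):
              score -= 1               # movement penalty
      return score

  history = [
      ({"A": "Dirty", "B": "Clean"}, "Suck"),    # 1 clean square
      ({"A": "Clean", "B": "Clean"}, "Right"),   # 2 clean squares, -1 for moving
  ]
  print(performance(history))   # -> 2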

PEAS

To design a rational agent, we must specify the task environment. Consider, e.g., the task of designing an automated taxi:

  • Performance measure: what we measure an agent against.
  • Environment: the world in which the agent operates.
  • Actuators: how the agent can modify its environment.
  • Sensors: how the agent can sense its environment.

CAT – PEAS for an Automatic Taxi

Describe the
  • Performance measures
  • Environment
  • Actuators
  • Sensors
for an automatic taxi. Work with a partner for 5 minutes.

PEAS for an Automatic Taxi

  • Performance measures
  • Environment
  • Actuators
  • Sensors
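For comparison after the exercise, a sketch of the widely cited AIMA specification for the automated taxi (illustrative; this is not the handout's own answer key):

  # Sketch: the AIMA textbook's PEAS description of an automated taxi,
  # recorded as a simple data structure.
  from dataclasses import dataclass

  @dataclass
  class PEAS:
      performance: list
      environment: list
      actuators: list
      sensors: list

  taxi = PEAS(
      performance=["safe", "fast", "legal", "comfortable trip", "maximize profits"],
      environment=["roads", "other traffic", "pedestrians", "customers"],
      actuators=["steering", "accelerator", "brake", "signal", "horn"],
      sensors=["cameras", "sonar", "speedometer", "GPS", "odometer", "engine sensors"],
  )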

Agent functions and programs

  • An agent is completely specified by the agent function mapping percept sequences to actions.
  • In principle, one can supply each possible sequence to see what it does.
  • Obviously, a lookup table would usually be immense.
  • One agent function (or a small equivalence class) is rational.
  • Aim: find a way to implement the rational agent function concisely.
  • An agent program takes a single percept as input and keeps internal state:

  function SKELETON-AGENT(percept) returns action
    static: memory, the agent's memory of the world
    memory ← UPDATE-MEMORY(memory, percept)
    action ← CHOOSE-BEST-ACTION(memory)
    memory ← UPDATE-MEMORY(memory, action)
    return action
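A Python rendering of SKELETON-AGENT (a sketch; update_memory and choose_best_action are placeholders that a concrete agent must supply):

  # Sketch: SKELETON-AGENT as a closure over the agent's memory.
  def make_skeleton_agent(update_memory, choose_best_action, initial_memory=None):
      memory = initial_memory              # the agent's memory of the world

      def agent_program(percept):
          nonlocal memory
          memory = update_memory(memory, percept)
          action = choose_best_action(memory)
          memory = update_memory(memory, action)
          return action

      return agent_program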

Agent types

Four basic types, in order of increasing generality:
  • simple reflex agents
  • reflex agents with state
  • goal-based agents
  • utility-based agents

Simple Reflex Agent
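As a sketch of the idea (names are hypothetical, not from the handout): a simple reflex agent matches condition-action rules against the current percept only, with no internal state.

  # Sketch: a simple reflex agent; each rule is a (condition, action) pair
  # tested against the current percept alone.
  def make_simple_reflex_agent(rules):
      def agent_program(percept):
          for condition, action in rules:
              if condition(percept):
                  return action
          return "NoOp"
      return agent_program

  # Illustrative rules reproducing the vacuum agent above.
  rules = [
      (lambda p: p[1] == "Dirty", "Suck"),
      (lambda p: p[0] == "A", "Right"),
      (lambda p: p[0] == "B", "Left"),
  ]
  vacuum = make_simple_reflex_agent(rules)
  print(vacuum(("A", "Clean")))   # -> Right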

Reflex Agents with State

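As a sketch of the idea (names are hypothetical): the agent maintains an internal state, updated from the old state, the last action, and the new percept, and matches its rules against that state rather than against the raw percept.

  # Sketch: a reflex agent with state (model-based reflex agent).
  def make_reflex_agent_with_state(update_state, rules, initial_state=None):
      state, last_action = initial_state, None

      def agent_program(percept):
          nonlocal state, last_action
          state = update_state(state, last_action, percept)
          for condition, action in rules:
              if condition(state):
                  last_action = action
                  return action
          last_action = "NoOp"
          return last_action

      return agent_program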

Goal-based Agents

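As a one-step sketch (hypothetical names; a full goal-based agent searches over action sequences): the agent uses a model to predict the result of each available action and picks one whose predicted state satisfies the goal.

  # Sketch: one-step goal-based action selection.
  def goal_based_action(state, actions, predict, goal_test):
      for action in actions:
          if goal_test(predict(state, action)):
              return action
      return "NoOp"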

Utility-based Agents

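A utility-based agent replaces the binary goal test with a utility function over states. A minimal sketch (hypothetical names):

  # Sketch: choose the action whose predicted resulting state has the
  # highest utility.
  def utility_based_action(state, actions, predict, utility):
      return max(actions, key=lambda a: utility(predict(state, a)))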

Environment Types

  • Accessible vs. hidden
  • Deterministic vs. stochastic
  • Episodic vs. nonepisodic
  • Static vs. dynamic
  • Discrete vs. continuous
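As an illustrative sketch (not the handout's answer key), the five dimensions can be recorded per environment; the taxi entry below follows the common AIMA classification of taxi driving as hidden, stochastic, nonepisodic, dynamic, and continuous.

  # Sketch: the five environment dimensions as a record.
  from dataclasses import dataclass

  @dataclass
  class EnvironmentType:
      accessible: bool
      deterministic: bool
      episodic: bool
      static: bool
      discrete: bool

  taxi = EnvironmentType(accessible=False, deterministic=False,
                         episodic=False, static=False, discrete=False)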

CAT – Environment Types

With a partner, fill in the following table. You have 5 minutes.

                     Solitaire   Backgammon   Internet shopping   Taxi
  Observable??
  Deterministic??
  Episodic??
  Static??
  Discrete??

Environment Types Example

                     Solitaire   Backgammon   Internet shopping   Taxi
  Accessible??
  Deterministic??
  Episodic??
  Static??
  Discrete??

AIMA code

The code for each topic is divided into four directories:
  • agents: code defining agent types and programs
  • algorithms: code for the methods used by the agent programs
  • environments: code defining environment types and simulations
  • domains: problem types and instances for input to algorithms

Often, algorithms are run on domains rather than agents in environments.
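As a sketch of how the agents and environments pieces fit together (the method names below are hypothetical and do not reflect the actual AIMA code's API):

  # Sketch: run an agent program in an environment for a fixed number of steps
  # and report the performance measure.
  def run(environment, agent_program, steps):
      for _ in range(steps):
          percept = environment.percept()      # what the agent senses
          action = agent_program(percept)      # agent program chooses an action
          environment.execute(action)          # actuators change the environment
      return environment.performance_score()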
