Agent System

Artificial Intelligence
By Noman Hasany
Assistant Professor, Computer Engineering Department
Sir Syed University of Engineering & Technology, Karachi

Course Outline
• Theory
  – AI (Intro)
  – Searching
  – Intelligent Agents
  – Logic and Inference
  – Knowledge-based Systems
  – Natural Language Processing
  – Neural Nets
• Application
  – Prolog
  – Natural Language Toolkit (NLTK) + Python

AI

Heuristic
• A heuristic is a rule or piece of information used to make a problem-solving method more effective (more likely to achieve the goal) or more efficient (faster and/or using less space).
• Example: a person looking for a suitable gift would go straight to the shop considered most likely to have a suitable gift, then to the next most likely, and so on.
• Humans use such heuristics all the time to solve all kinds of problems; computers can use them too.
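As a rough sketch of the idea (the shop names and likelihood scores below are invented for illustration), a heuristic can simply order the options by estimated promise before trying them:

```python
# A minimal sketch: order the candidate shops by an estimated (heuristic)
# likelihood of having a suitable gift, rather than visiting them at random.
# The shop names and scores here are made-up illustrative values.

def heuristic_order(shops):
    """Return shops sorted from most to least promising."""
    return sorted(shops, key=lambda shop: shop[1], reverse=True)

shops = [("bookstore", 0.4), ("toy shop", 0.8), ("hardware store", 0.1)]
for name, score in heuristic_order(shops):
    print(f"Try the {name} (estimated chance {score:.0%})")
```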

Why study AI?
• Search engines
• Science
• Medicine/Diagnosis
• Labor
• Appliances
• What else?

AI State of the art
• Deep Blue defeated the reigning world chess champion Garry Kasparov in 1997
• Automated reasoning methods proved a mathematical conjecture (the Robbins conjecture) that had remained unsolved for decades
• No Hands Across America: driving autonomously 98% of the time from Pittsburgh to San Diego
• The military is still the strongest driver of AI research: during the 1991 Gulf War, US forces deployed an AI logistics planning and scheduling program that involved up to 50,000 vehicles, cargo items, and people
• NASA's on-board autonomous planning program controlled the scheduling of operations for a spacecraft
• Proverb solves crossword puzzles better than most humans

AI State of the art
• Have the following been achieved by AI?
  – World-class chess playing
  – Playing table tennis
  – Cross-country driving
  – Solving mathematical problems
  – Engaging in a meaningful conversation
  – Handwriting recognition
  – Observing and understanding human emotions
  – Expressing emotions
  – …

AI (John McCarthy) • It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

AI (Marvin Minsky)
• "Artificial Intelligence is the science of making machines do things that would require intelligence if done by men."

AI
• Views of AI fall into four categories, shown on the following slides.
• Here, "rationally" means logically.

What is AI?
"The exciting new effort to make computers think … machines with minds, in the full and literal sense" (Haugeland, 1985)

"The study of mental faculties through the use of computational models" (Charniak and McDermott, 1985)

“The art of creating machines that perform functions that require intelligence when performed by people” (Kurzweil, 1990)

"A field of study that seeks to explain and emulate intelligent behavior in terms of computational processes" (Schalkoff, 1990)

• Systems that think like humans
• Systems that act like humans
  [The above are emulation (imitate)]

• Systems that think rationally
• Systems that act rationally
  [The above are simulation (replicate)]

Acting humanly: Turing Test
• Turing (1950), "Computing Machinery and Intelligence":
  – operational test for intelligent behavior: the Imitation Game
  – components needed for passing the test: natural language processing, knowledge representation, automated reasoning, machine learning


Thinking humanly: cognitive modeling
• Not only the results, but also the reasoning steps made in solving a problem must be similar
• Requires scientific theories of the internal activities of the brain: introspection (the observation of things internal to one's self) and psychological experiments
• Cognitive science: combines computer models from AI with experimental techniques from psychology to build theories of the workings of the human mind

Thinking rationally: "laws of thought" • Aristotle: what are correct arguments/thought processes? – syllogisms: patterns for argument structures that always yielded correct conclusions when given correct premises

• Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts
• Direct line through mathematics and philosophy to modern AI
• Problems:
  1. Difficulties in translating "everyday knowledge" into a logical formalism (the presence of uncertainty)
  2. Scalability (degree/range) of logic-based reasoning systems

Acting rationally: rational agent
• Rational behavior: doing the right thing
• The right thing: that which is expected to maximize goal achievement, given the available information
• Doesn't necessarily involve thinking
  – but thinking should be in the service of rational action (thought is involved in the design phase)
• More amenable to scientific development than the human-like behavior or thought approaches

Rational agents
• An agent is an entity that perceives and acts.
• Abstractly, an agent is a function from percept histories to actions: f : P* → A
• For any given class of environments and tasks, we seek the agent (or class of agents) with the best performance.
• Caveat (limitation): computational limitations make perfect rationality unachievable
  – design the best program for the given machine resources

Intelligence • Intelligence is the computational part of the ability to achieve goals in the world. • Varying kinds and degrees of intelligence occur in people, many animals and some machines. • Ultimate intelligence is the spiritual part of the ability to achieve goals in this world and for the world hereafter. (rabbana atina fidduniya hasana wafil akhirate hasana…)

Types of AI (MSN Encarta)
• Symbolic AI: Symbolic AI is based on logic. It uses sequences of rules to tell the computer what to do next. Expert systems consist of many so-called IF-THEN rules: IF this is the case, THEN do that. Since both sides of the rule can be defined in complex ways, rule-based programs can be very powerful.
• Connectionist AI: Connectionism is inspired by the brain. It is closely related to computational neuroscience, which models actual brain cells and neural circuits. Connectionist AI uses artificial neural networks made of many units working in parallel. Each unit is connected to its neighbors by links that can raise or lower the likelihood that the neighbor unit will fire (excitatory and inhibitory connections respectively). Neural networks that are able to learn do so by changing the strengths of these links, depending on past experience. These simple units are much less complex than real neurons. Each can do only one thing: for instance, report a tiny vertical line at a particular place in an image. What matters is not what any individual unit is doing, but the overall activity pattern of the whole network.
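To make the two styles concrete, here is a minimal, hedged sketch in Python; the diagnostic rule and the network weights are invented examples, not taken from the slides.

```python
# Symbolic AI: an IF-THEN rule applied to a set of known facts.
facts = {"engine_cranks": False, "battery_flat": True}

def diagnose(facts):
    # IF the battery is flat THEN recommend recharging it.
    if facts.get("battery_flat"):
        return "recharge battery"
    return "no recommendation"

print(diagnose(facts))

# Connectionist AI: one unit sums weighted inputs from its neighbours and
# "fires" when the total crosses a threshold; an excitatory or inhibitory
# link is simply a positive or negative weight.
inputs = [1.0, 0.0, 1.0]
weights = [0.6, -0.4, 0.5]       # excitatory, inhibitory, excitatory
threshold = 0.8
activation = sum(i * w for i, w in zip(inputs, weights))
print("fires" if activation >= threshold else "does not fire")
```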

Types of AI (MSN Encarta) • Evolutionary AI: Evolutionary AI draws on biology. Its programs make random changes in their own rules, and select the best daughter programs to breed the next generation. This method develops problem-solving programs, and can evolve the “brains” and “eyes” of robots. It is often used in modeling artificial life (A-Life). A-Life studies self-organization: how order arises from something that is ordered to a lesser degree. Biological examples include the flocking patterns of birds and the development of embryos.
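A minimal sketch of the evolutionary idea, assuming a toy task (maximise the number of 1s in a bit string) chosen purely for illustration:

```python
# Random variation plus selection of the fittest "daughter" programs,
# repeated over many generations.
import random

def fitness(individual):
    return sum(individual)

def mutate(individual, rate=0.1):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in individual]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    # Select the fittest half as parents.
    parents = sorted(population, key=fitness, reverse=True)[:10]
    # Breed the next generation by mutating copies of the parents.
    population = parents + [mutate(random.choice(parents)) for _ in range(10)]

print("best individual:", max(population, key=fitness))
```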

Branches of AI (John McCarthy) • Logical AI: What a program knows about the world in general the facts of the specific situation in which it must act, and its goals are all represented by sentences of some mathematical logical language. The program decides what to do by inferring that certain actions are appropriate for achieving its goals. • Search: AI programs often examine large numbers of possibilities, e.g. moves in a chess game or inferences by a theorem proving program. Discoveries are continually made about how to do this more efficiently in various domains.

Branches of AI (John McCarthy)
• Pattern recognition: When a program makes observations of some kind, it is often programmed to compare what it sees with a pattern. For example, a vision program may try to match a pattern of eyes and a nose in a scene in order to find a face; similar matching is used to find word senses or part-of-speech tags in natural language text, or to choose the next move in a chess game.

Branches of AI (John McCarthy) • Representation: Facts about the world have to be represented in some way. Usually languages of mathematical logic are used. • Inference: From some facts, others can be inferred. Mathematical logical deduction is adequate for some purposes • Common sense knowledge and reasoning: This is the area in which AI is farthest from human-level, in spite of the fact that it has been an active research area since the 1950s. While there has been considerable progress, e.g. in developing systems of non-monotonic reasoning and theories of action, yet more new ideas are needed. The Cyc system contains a large but spotty collection of common sense facts.

Branches of AI (John McCarthy) • Learning from experience: The approaches to AI based on connectionism and neural nets specialize in that. There is also learning of laws expressed in logic. Programs can only learn what facts or behaviors their formalisms can represent, and unfortunately learning systems are almost all based on very limited abilities to represent information. • Planning: Planning programs start with general facts about the world, facts about the particular situation and a statement of a goal. From these, they generate a strategy for achieving the goal. In the most common cases, the strategy is just a sequence of actions. • Epistemology: This is a study of the kinds of knowledge that are required for solving problems in the world.

Branches of AI (John McCarthy)
• Ontology: Ontology is the study of the kinds of things that exist. In AI, the programs and sentences deal with various kinds of objects, and we study what these kinds are and what their basic properties are. Emphasis on ontology began in the 1990s.
• Heuristics: A heuristic is a way of trying to discover something, or an idea embedded in a program. Heuristic functions are used in some approaches to search to measure how far a node in a search tree seems to be from the goal. Heuristic predicates compare two nodes in a search tree to see if one is better than the other, i.e. constitutes an advance toward the goal.
• Genetic programming: Genetic programming is a technique for getting programs to solve a task by mating (friendly) random Lisp programs and selecting the fittest over millions of generations.
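As a small illustration of the heuristic ideas above (the grid world and goal position are assumptions made for the example), a heuristic function can estimate a node's distance to the goal and a heuristic predicate can compare two nodes:

```python
# Heuristic function: estimated distance from a node to the goal.
# Heuristic predicate: does one node look closer to the goal than another?
import math

GOAL = (4, 4)  # assumed goal position on a grid

def h(node):
    """Straight-line distance from a node (x, y) to the goal."""
    return math.dist(node, GOAL)

def better(node_a, node_b):
    """True if node_a appears to be an advance toward the goal over node_b."""
    return h(node_a) < h(node_b)

print(h((0, 0)), better((3, 3), (0, 0)))
```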

Foundations of AI
• Philosophy
• Mathematics
• Economics
• Neuroscience
• Psychology
• Computer engineering
• Control theory
• Linguistics

Engineering Intelligent Entities

– The science of understanding intelligent entities by developing theories that attempt to explain and predict the nature of such entities. – The engineering of intelligent entities.

Engineering Intelligent Entities
• The engineering of such intelligent entities means that we want to build machines to perform tasks for us.
• However, these machines must act in an intelligent way.
• Intelligence can be considered to be manifest in a number of different but interconnected forms of behaviour, such as:
  – reasoning about beliefs,
  – reasoning about actions,
  – the ability to communicate,
  – the ability to learn, both from mistakes and also through gaining new knowledge,
  – social awareness.

Agents
• The key notion here is that we want to equip such intelligent systems, known as 'agents', with the ability to think and act autonomously.
• There is actually no universally agreed-upon definition of an agent, but a popular definition has been given by Wooldridge and Jennings (1995):
  "An agent is a computer system that is situated in some environment, and that is capable of autonomous action in this environment in order to meet its design objectives."
• In addition to this, a collection of such agents situated together in an environment and capable of interacting with one another is known as a 'multi-agent system'.

What is an (Intelligent) Agent? • An over-used, over-loaded, and misused term. • Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through its effectors to maximize progress towards its goals.

What is an (Intelligent) Agent?
• PAGE (Percepts, Actions, Goals, Environment)
  – Task-specific & specialized: well-defined goals and environment
  – The notion of an agent is meant to be a tool for analyzing systems
  – It is not different hardware or a new programming language
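One possible way to write a PAGE description down, assuming a simple dataclass representation (the entries here anticipate the windshield-wiper example that appears later in these slides):

```python
# A PAGE description captured as a plain data structure for analysis.
from dataclasses import dataclass, field

@dataclass
class PAGE:
    percepts: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    goals: list = field(default_factory=list)
    environment: str = ""

wiper = PAGE(
    percepts=["raining", "dirty"],
    actions=["off", "slow", "medium", "fast"],
    goals=["keep windshield clean", "maintain visibility"],
    environment="inner city, freeways, highways, weather",
)
print(wiper)
```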

Intelligent Agents and Artificial Intelligence
• Example: the human mind as a network of thousands or millions of agents working in parallel. To produce real artificial intelligence, this school holds, we should build computer systems that also contain many agents and systems for arbitrating among the agents' competing results.
• Challenges:
  – Action selection: what next action to choose
  – Conflict resolution
  – Distributed decision-making and control
[Diagram: an agent ("agency") coupled to its environment through sensors and effectors]

Agents
• Human agent: eyes, ears, and other organs for sensors; hands, legs, mouth, and other body parts for actuators
• Robotic agent: cameras and infrared range finders for sensors; various motors for actuators

Intelligent Agents
• In order for these entities to be deemed intelligent, there are a number of capabilities that we would expect such agents to possess, again taken from a list defined by Wooldridge and Jennings (1995):
• Reactivity: Intelligent agents are able to perceive their environment, and respond in a timely fashion to changes that occur in it in order to satisfy their design objectives.
• Proactiveness: Intelligent agents are able to exhibit goal-directed behaviour by taking the initiative in order to satisfy their design objectives.
• Social ability: Intelligent agents are capable of interacting with other agents (and possibly humans) in order to satisfy their design objectives.

Intelligent Agents
[Diagram: an agent exchanging sensor input and action output with its environment]

A Windshield Wiper Agent
How do we design an agent that can wipe the windshields when needed?
• Goals?
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment?

A Windshield Wiper Agent
• Goals: Keep windshields clean & maintain visibility
• Percepts: Raining, Dirty
• Sensors: Camera (moisture sensor)
• Effectors: Wipers (left, right, back)
• Actions: Off, Slow, Medium, Fast
• Environment: Inner city, freeways, highways, weather …
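A minimal condition-action sketch of this wiper agent; the boolean percept encoding and the particular rules are assumptions made for illustration:

```python
# Map the current percept (raining?, dirty?) to one of the wiper actions.

def wiper_agent(raining: bool, dirty: bool) -> str:
    """Choose a wiper setting from the current percept."""
    if raining and dirty:
        return "fast"
    if raining:
        return "medium"
    if dirty:
        return "slow"
    return "off"

print(wiper_agent(raining=True, dirty=False))   # -> "medium"
```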

Interacting Agents
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment: Freeway

Interacting Agents
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts?
• Sensors?
• Effectors?
• Actions?
• Environment: Freeway

Interacting Agents
Collision Avoidance Agent (CAA)
• Goals: Avoid running into obstacles
• Percepts: Obstacle distance, velocity, trajectory (route)
• Sensors: Vision, proximity (closeness) sensing
• Effectors: Steering Wheel, Accelerator, Brakes, Horn, Headlights
• Actions: Steer, speed up, brake, blow horn, signal (headlights)
• Environment: Freeway

Interacting Agents
Lane Keeping Agent (LKA)
• Goals: Stay in current lane
• Percepts: Lane center, lane boundaries
• Sensors: Vision
• Effectors: Steering Wheel, Accelerator, Brakes
• Actions: Steer, speed up, brake
• Environment: Freeway

Structure of Intelligent Agents
• Agent = architecture + program
• Agent program: the implementation of f : P* → A, the agent's perception-action mapping function

  Skeleton-Agent(Percept) returns Action
    memory ← UpdateMemory(memory, Percept)
    Action ← ChooseBestAction(memory)
    memory ← UpdateMemory(memory, Action)
    return Action

• Architecture: a device that can execute the agent program (e.g., general-purpose computer, specialized device, beobot, etc.)
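A runnable Python sketch of the Skeleton-Agent program above, with placeholder versions of UpdateMemory and ChooseBestAction (the real implementations depend on the agent's task):

```python
# Skeleton agent: record the percept, choose an action, record the action.
memory = []

def update_memory(memory, item):
    return memory + [item]

def choose_best_action(memory):
    # Placeholder policy: react to the most recent percept.
    return f"act-on({memory[-1]})" if memory else "noop"

def skeleton_agent(percept):
    global memory
    memory = update_memory(memory, percept)
    action = choose_best_action(memory)
    memory = update_memory(memory, action)
    return action

print(skeleton_agent("obstacle-ahead"))
```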

Beobot
• What is a Beobot?
  – It is an autonomous robot that uses off-the-shelf PC hardware and software to perform its computation.

• It is designed to run in actual outdoor unconstrained environments, which is a far cry from the insulated test labs that robots are mainly tested in.



Using a look-up table to encode f : P* → A
• Example: Collision Avoidance
  – Sensors: 3 proximity sensors
  – Effectors: Steering Wheel, Brakes
• How to generate? How large? How to select action?
[Diagram: an agent with three proximity sensors approaching an obstacle]




• How to generate: for each percept p ∈ Pl × Pm × Pr, generate an appropriate action a ∈ S × B



• How large: size of table = #possible percepts × #possible actions = |Pl| |Pm| |Pr| |S| |B|
  E.g., P = {close, medium, far}³, A = {left, straight, right} × {on, off}, so size of table = 27 × 3 × 2 = 162



• How to select action? Search.
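A small Python sketch of such a look-up table for the three-sensor collision-avoidance agent: one entry per percept combination, each mapped to a (steering, brake) action drawn from the 3 × 2 = 6 possibilities (the 27 × 3 × 2 = 162 above counts every percept/action pairing). The specific entries filled in below are invented for illustration.

```python
# Enumerate all percept combinations and map each to a chosen action.
import itertools

DISTANCES = ("close", "medium", "far")        # readings of each proximity sensor
STEER = ("left", "straight", "right")          # steering component of an action
BRAKE = ("on", "off")                          # brake component of an action

# Default action for every percept, then override a few entries by hand.
table = {p: (STEER[1], BRAKE[1]) for p in itertools.product(DISTANCES, repeat=3)}
table[("close", "close", "close")] = ("straight", "on")   # obstacle everywhere: brake
table[("far", "close", "far")] = ("left", "off")          # obstacle ahead: steer around

def lookup_agent(percept):
    return table[percept]

print(len(table))                              # 27 percept combinations
print(lookup_agent(("far", "close", "far")))   # ('left', 'off')
```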

Information agents
• Manage the explosive growth of information.
• Manipulate or collate information from many distributed sources.
• Information agents can be mobile or static.
• Examples:
  – BargainFinder comparison-shops among Internet stores for CDs
  – FIDO the Shopping Doggie (out of service)
  – Internet Softbot infers which internet facilities (finger, ftp, gopher) to use and when from high-level search requests.
• Challenge: ontologies for annotating Web pages (e.g., SHOE).

Intelligent Agents
• Agents observe the environment in which they are situated and perform actions based on their knowledge of their environment.
• However, observing an environment is no easy task, for a number of reasons:
  – the agent needs to be equipped with some suitable representation of the environment which it inhabits
  – the agent needs to be able to reason about the state of the environment
  – the agent needs to be able to update its beliefs when the environment changes
  – the agent needs to know how its actions will affect the environment it inhabits.

Environments
• Russell and Norvig (1995) have given a classification of the different types of environment that an agent can inhabit:
  – Accessible vs inaccessible
  – Deterministic vs non-deterministic
  – Episodic vs non-episodic
  – Static vs dynamic
  – Discrete vs continuous
• We will look at a brief description of each…

Accessible vs. Inaccessible • An accessible environment is one in which the agent can obtain complete, accurate, up-to-date information about the environment’s state. • Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible. • The more accessible an environment is, the simpler it is to build agents to operate in it.

Deterministic vs. Non-Deterministic
• Output problem
• A deterministic environment is one in which any action has a single guaranteed effect; there is no uncertainty about the state that will result from performing an action.
• The physical world can, to all intents and purposes, be regarded as non-deterministic.
• Non-deterministic environments present greater problems for the agent designer.

Episodic vs. Non-Episodic
• Input problem
• In an episodic environment, the performance of an agent is dependent on a number of discrete episodes, with no link between the performance of the agent in different scenarios.
• Episodic environments are simpler from the agent developer's perspective because the agent can decide what action to perform based only on the current episode; it need not reason about the interactions between this and future episodes.
• An example of an episodic environment is a mail-sorting system.

Static vs. Dynamic • A static environment is one that can be assumed to remain unchanged except by the performance of actions by the agent. • A dynamic environment is one that has other processes operating on it, and which hence changes in ways beyond the agent’s control. • The physical world is a highly dynamic environment.

Discrete vs. Continuous • An environment is discrete if there are a fixed, finite number of actions and percepts in it. • A chess game is an example of a discrete environment, and taxi driving is an example of a continuous one. • This is not really an intrinsic property of the environment, but more a property of how we choose to model the environment. So, you can think of your perceptions of the world or your actions as being discrete or continuous.

Environments
• So, the type of environment that an agent inhabits affects its design and behaviour.
• In order for an agent to think rationally it needs to be aware of the environment it inhabits, which requires knowledge representation, and it needs to be able to reason about this knowledge in order to act rationally and fulfil its tasks.

Multi-Agent Systems
• However, the situation becomes even more complicated when there are other such agents operating within the same environment.
• Now, the agent must be able to reason not only about its own beliefs but also about the beliefs of other agents, and
• it also needs to reason about the way the environment can be changed by these other agents.
• All this requires rational reasoning.

Reasoning about Beliefs • In order to represent agents’ environments in a manner that they can understand we use logic. • We encode information about the environment in the form of logic and this information forms the agent’s knowledge base and thus its beliefs about the world. • In order to enable an agent to reason with this knowledge, the agent needs to have a set of rules that tell it how this environment can change.
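As a tiny illustrative sketch (the facts and rules below are invented, not from the slides), an agent's beliefs can be held as a set of facts, with IF-THEN rules applied by forward chaining until nothing new follows:

```python
# Beliefs are facts; each rule says which belief follows from which premises.
beliefs = {"raining"}

rules = [
    ({"raining"}, "road_wet"),
    ({"road_wet"}, "braking_distance_long"),
]

def infer(beliefs, rules):
    """Forward-chain over the rules until no new beliefs are added."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= beliefs and conclusion not in beliefs:
                beliefs.add(conclusion)
                changed = True
    return beliefs

print(infer(beliefs, rules))
```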

Reasoning about Action • The agent’s knowledge about the environment should also tell it how the environment will change when it performs actions. • So the ability to act rationally is intrinsically (built-in) linked with the ability to think rationally. • The agent will have specific goals that it is trying to achieve and these goals are represented as particular states of the environment. • In order to act rationally the agent must choose an action to execute that will achieve this particular goal.
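A minimal sketch of goal-based action selection under these ideas; the states, actions, and predicted effects below are invented for illustration:

```python
# The agent knows, for each action, what state it is expected to produce,
# and picks an action whose outcome satisfies the goal.
current_state = {"door": "closed", "light": "off"}
goal = {"light": "on"}

effects = {
    "open_door": {"door": "open"},
    "flip_switch": {"light": "on"},
}

def choose_action(goal, effects):
    for action, effect in effects.items():
        if all(effect.get(key) == value for key, value in goal.items()):
            return action
    return None

print(choose_action(goal, effects))   # -> "flip_switch"
```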

Domain Applications • Agents’ spheres of knowledge will differ depending upon the particular domain they are constructed for use in. • For example…

Example Domains • The following domains are examples of where agent technology is being deployed: – The internet: e.g. softbots (software robots) which visit websites and gather information from them. • e.g. for comparison shopping, or for harvesting information.

Example Domains – Medicine: multi-agent systems can be used for simulation, decision making, communication and risk assessment purposes • one such group working on the application of multiagent systems to medicine is the Advanced Computation Lab at Cancer Research UK

Example Domains – Air traffic control: agents are deployed in systems to assist in the control of air traffic and the provision of information. • example: the OASIS system tested at Sydney airport.

– Spacecraft: agents are currently being tested for use in spacecraft computers • e.g. for autonomous navigation and detection of errors/malfunctions. • e.g. NASA’s Deep Space 1 mission.

Example Domains – Telecommunications systems: e.g. for managing dynamic information concerning the routing of network traffic. – Law: to support decision making and information management involved in legal reasoning. – These are just a few domains that are currently making use of agent technology and KR&R methods. There are many more application domains.

Agents and KR&R • All the previous application domains for agent systems each have their own aims with respect to the problems that they address. • So, the knowledge and reasoning embodied will differ depending upon the domain. • But, what is common to all the application areas that make use of AI and agent technology is that they need systems that possess the relevant knowledge of the domain and environment and they also need to be able to reason effectively with this information in order to fulfil their specific tasks.

Agents and KR&R
• Expressive logics, such as modal and description logics, are being used as an efficient and effective way of dealing with KR&R (knowledge representation and reasoning) in intelligent systems, as we will come to see during the course.
• Further detailed information about agent technology and applications can be found in: M. J. Wooldridge, An Introduction to Multi-Agent Systems, John Wiley & Sons.

Summary • Overview of: – AI – Intelligent agents – Environments – Reasoning about beliefs and actions – Domain applications
