1 of 38

Artificial Intelligence (AI)

Chapter Two

Intelligent Agent

2 of 38

Content

    • Definition of Agents
    • Properties of Intelligent agents
    • Agents and Environments
    • Rationality vs. Omniscience
    • Structure of Intelligent Agents
    • Agent Types

3 of 38

Intelligent Agents

  • An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators


4 of 38

Types of Intelligent Agents

  • Software agents:
    • Also called a softbot (software robot)
    • An agent that interacts with a software environment by issuing commands and interpreting the environment's feedback.
    • E.g. Mail handling agent, information filtering agent

  • Physical agents
    • Robots that operate in the physical world and
    • Can perceive and manipulate objects in that world


5 of 38

Cont.

  • Physical agents require more flexible interaction with the environment, the ability to modify their own goals, and knowledge that can be applied flexibly to different situations

6 of 38

Cont.


              Human beings          Agents
Sensors       Eyes, Ears, Nose      Cameras, Scanners, Mic, infrared range finders
Effectors     Hands, Legs, Mouth    Various motors (artificial hand, artificial leg), Speakers, Radio

7 of 38

Examples of agents in different types of applications


  • Agent type: Medical diagnosis system
    • Percepts: Symptoms, patient's answers
    • Actions: Questions, tests, treatments
    • Goals: Healthy patients, minimize costs
    • Environment: Patient, hospital

  • Agent type: Interactive English tutor
    • Percepts: Typed words, questions, suggestions
    • Actions: Write exercises, suggestions, corrections
    • Goals: Maximize student's score on exams
    • Environment: Set of students, materials

  • Agent type: Part-picking robot
    • Percepts: Pixels of varying intensity
    • Actions: Pick up parts and sort into bins
    • Goals: Place parts in correct bins
    • Environment: Conveyor belts with parts

  • Agent type: Satellite image analysis system
    • Percepts: Pixel intensity, color
    • Actions: Print a categorization of scene
    • Goals: Correct categorization
    • Environment: Images from orbiting satellite

  • Agent type: Refinery controller
    • Percepts: Temperature, pressure readings
    • Actions: Open, close valves; adjust temperature
    • Goals: Maximize purity, yield, safety
    • Environment: Refinery

8 of 38

Rationality vs. Omniscience

  • Rational agent: tries to get the best possible outcome given limited knowledge
    • Is expected to maximize goal achievement, given the available information
  • Omniscient agent: knows the actual outcome of its actions and acts accordingly
    • Knows exactly what will happen for each of its possible actions, but in reality omniscience is impossible

  • Rational agents take actions with expected success, whereas an omniscient agent takes actions that are 100% sure to succeed


9 of 38

Example

  • While you are walking along the road, you see an old friend across the street. There is no traffic, so, being rational, you start to cross the street.

  • Meanwhile a big banner falls from above, and before you finish crossing the road, you are flattened.
  • Were you irrational to cross the street?

  • This points out that rationality is concerned with expected success, given what has been perceived.
    • Crossing the street was rational, because most of the time the crossing would be successful, and there was no way you could have foreseen the falling banner.
    • So we cannot blame an agent for failing to take into account something it could not perceive, or for failing to take an action that it is incapable of taking.


10 of 38

Rational agent

  • In summary, what is rational at any given point depends on four things:

    • Perception: Everything that the agent has perceived so far concerning the current scenario in the environment
    • Knowledge: What an agent already knows about the environment
    • Action: The actions that the agent can perform back to the environment
    • Performance measure: The performance measure that defines degrees of success of the agent


11 of 38

Performance measure

  • How do we decide whether an agent is successful or not?

    • Establish a standard of what it means to be successful in an environment and use it to measure the performance
    • A rational agent should do whatever action is expected to maximize its performance measure

  • What about “Chess Playing”?


12 of 38

Designing an agent

  • A physical agent has two parts: architecture + program

  • Architecture
    • Runs the program
    • Makes the percepts from the sensors available to the program
    • Feeds the program's action choices to the effectors

  • Program
    • Accepts percepts from the environment and generates actions
    • Before designing an agent program, we need to know the possible percepts and actions
    • By enabling a learning mechanism, the agent gains a degree of autonomy, such that it can reason and make decisions


13 of 38

Program Skeleton of Agent


function SKELETON-AGENT(percept) returns action
  static: knowledge, the agent's memory of the world

  knowledge ← UPDATE-KNOWLEDGE(knowledge, percept)
  action ← SELECT-BEST-ACTION(knowledge)
  knowledge ← UPDATE-KNOWLEDGE(knowledge, action)
  return action

  • On each invocation, the agent's knowledge is updated to reflect the new percept, the best action is chosen, and the fact that the action was taken is also stored in the knowledge base.

NOTE: Performance measure is not part of the agent
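A minimal Python sketch of this skeleton is given below; the list-based knowledge store, the helper methods, and the placeholder decision rule are illustrative assumptions, not part of the original pseudocode.

# A minimal sketch of the agent skeleton above; the list-based knowledge
# store and the helper methods are assumptions, not part of the pseudocode.

class SkeletonAgent:
    def __init__(self):
        self.knowledge = []          # the agent's memory of the world

    def update_knowledge(self, item):
        self.knowledge.append(item)  # record percepts and chosen actions

    def select_best_action(self):
        # Placeholder decision rule: react to the most recent percept.
        return f"act-on({self.knowledge[-1]})" if self.knowledge else "noop"

    def __call__(self, percept):
        self.update_knowledge(percept)            # UPDATE-KNOWLEDGE(knowledge, percept)
        action = self.select_best_action()        # SELECT-BEST-ACTION(knowledge)
        self.update_knowledge(("did", action))    # UPDATE-KNOWLEDGE(knowledge, action)
        return action

agent = SkeletonAgent()
print(agent("light-is-red"))  # -> act-on(light-is-red)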

14 of 38

Intelligent Agents:

  • An intelligent agent is an autonomous entity which acts upon an environment using sensors and actuators to achieve its goals. An intelligent agent may learn from the environment to achieve its goals. A thermostat is an example of an intelligent agent.

The following are the four main rules for an AI agent:

  • Rule 1: An AI agent must have the ability to perceive the environment.
  • Rule 2: The observation must be used to make decisions.
  • Rule 3: Decision should result in an action.
  • Rule 4: The action taken by an AI agent must be a rational action.


15 of 38

Rational agent PEAS Representation

  • To design a rational agent, we must specify the task environment
  • PEAS descriptions define the task environment
  • PEAS is a model on which an AI agent works. When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words:

P: Performance measure

E: Environment

A: Actuators

S: Sensors

  • Here, the performance measure defines the objective for the success of an agent's behavior.

16 of 38

PEAS for self-driving cars

  • For a self-driving car, the PEAS representation would be:
  • Performance: Safety, time, legal drive, comfort
  • Environment: Roads, other vehicles, road signs, pedestrians
  • Actuators: Steering, accelerator, brake, signal, horn
  • Sensors: Camera, GPS, speedometer, odometer, accelerometer, sonar.
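One possible way to write a PEAS description down in code is as a plain record; the dataclass below is only an illustrative sketch, populated with the self-driving-car example from this slide.

from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list   # how success is measured
    environment: list   # what the agent operates in
    actuators: list     # how the agent acts
    sensors: list       # how the agent perceives

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer", "accelerometer", "sonar"],
)
print(self_driving_car.actuators)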


17 of 38

Examples of agents structure and sample PEAS

  • Agent: Medical diagnosis system
    • Environment: Patient, hospital, physician, nurses, …
    • Sensors: Keyboard (percept can be symptoms, findings, patient's answers)
    • Actuators: Screen display (action can be questions, tests, diagnoses, treatments, referrals)
    • Performance measure: Healthy patient, minimize costs, lawsuits


18 of 38

Classes of Environments

  • The agent performs actions on the environment, and the environment provides percepts to the agent.
  • An agent perceives and acts in an environment. Hence, in order to design a successful agent, the designer has to understand the type of environment the agent interacts with.
  • Properties of Environments:
    • Fully observable vs. Partially observable
    • Deterministic vs. Stochastic
    • Episodic vs. Non-episodic
    • Static vs. Dynamic
    • Discrete vs. Continuous


19 of 38

Fully observable vs. partially observable

  • Do the agent's sensors perceive the complete state of the environment?
    • If an agent has access to the complete state of the environment, then the environment is accessible or fully observable.
  • An environment is effectively accessible if the sensors detect all aspects that are relevant to the choice of action.

  • Taxi driving is partially observable
    • Any example of fully observable?


20 of 38

Deterministic vs. stochastic

  • Concerns the mapping from one state to the next state for a given action
  • The environment is deterministic if the next state is completely determined by
    • The current state of the environment and
    • The actions selected by the agent.

  • Taxi driving is non-deterministic (i.e. stochastic)
    • Any example of deterministic ?


21 of 38

Episodic vs. Sequential

  • Does the next “episode” or event depend on the actions taken in previous episodes?
  • In an episodic environment, the agent's experience is divided into "episodes".
    • Each episode consists of the agent perceiving and then acting. The quality of its action depends just on the episode itself.
  • In a sequential environment the current decision could affect all future decisions

  • Taxi driving is sequential
    • Any example of Episodic?


22 of 38

Static vs. Dynamic

  • Can the world change while the agent is thinking?
    • If the environment can change while the agent is deliberating, then we say the environment is dynamic for that agent;
    • otherwise it is static.

  • Taxi driving is dynamic
    • Any example of static?


23 of 38

Discrete vs. Continuous

  • Are the distinct percepts & actions limited or unlimited?
    • If there are a limited number of distinct, clearly defined percepts and actions, we say the environment is discrete.

  • Taxi driving is continuous


24 of 38

Environment Types


Below is a list of properties for a number of familiar environments:

Problem                 Observable   Deterministic   Episodic   Static   Discrete
Crossword puzzle        Yes          Yes             No         Yes      Yes
Part-picking robot      No           No              Yes        No       No
Web shopping program    No           No              No         No       Yes
Tutor                   No           No              No         Yes      Yes
Medical diagnosis       No           No              No         No       No
Taxi driving            No           No              No         No       No

  • Hardest case: an environment that is inaccessible, sequential, non-deterministic, dynamic, continuous.
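The same classification can also be encoded as data. The sketch below is illustrative only (the EnvironmentProfile type and its field names are assumptions); it captures a few rows of the table above.

from dataclasses import dataclass

@dataclass
class EnvironmentProfile:
    observable: bool      # fully observable?
    deterministic: bool
    episodic: bool
    static: bool
    discrete: bool

profiles = {
    "crossword puzzle":   EnvironmentProfile(True, True, False, True, True),
    "part-picking robot": EnvironmentProfile(False, False, True, False, False),
    "taxi driving":       EnvironmentProfile(False, False, False, False, False),
}

# The hardest case is an environment that is "No" on every dimension,
# e.g. taxi driving in the table above.
print(profiles["taxi driving"])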

25 of 38

Five types of agents

  1. Simple reflex agents
  2. Model-Based Reflex Agent
  3. Goal based agents
  4. Utility based agents
  5. Learning agent

26 of 38

Simple reflex agents

  • Works by finding a rule whose condition matches the current situation (as defined by the percept) and then doing the action associated with that rule.
    • It maps the current percept to an action, ignoring the history of percepts

E.g. Smart light bulb, smart thermostat

  • If the car in front brakes and its brake lights come on, then the driver should notice this and initiate braking.

  • We call such a connection a condition-action rule, written as: If car-in-front-is-braking then initiate-braking.
  • Humans also have such condition-action rules. Some are learned responses; some are inborn responses, such as blinking when something approaches the eye.
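As a rough Python sketch (the rule table and percept strings are invented for illustration), a simple reflex agent is just a lookup from the current percept to an action:

# Illustrative simple reflex agent: the current percept is matched against
# condition-action rules; percept history is ignored.

RULES = {
    "car-in-front-is-braking": "initiate-braking",
    "light-is-red": "stop",
    "light-is-green": "go",
}

def simple_reflex_agent(percept):
    # No internal state: the decision depends only on the current percept.
    return RULES.get(percept, "do-nothing")

print(simple_reflex_agent("car-in-front-is-braking"))  # -> initiate-braking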


Simple Reflex agent

27 of 38

Cont

  • Problems with the simple reflex agent design approach:
    • They have very limited intelligence.
    • They have no knowledge of non-perceptual parts of the current state.
    • The set of condition-action rules is mostly too big to generate and to store.
    • They are not adaptive to changes in the environment.


28 of 38

Model-based reflex agents


  • This is a reflex agent with internal state. It keeps track of the percept history

  • It works by finding a rule whose condition matches the current situation (as defined by the percept and the stored internal state)

    • If the car is a recent model, there is a centrally mounted brake light. With older models, there is no centrally mounted brake light, so the agent may get confused:

Is it a parking light? Is it a brake light? Is it a turn signal light?

    • Some sort of internal state is needed in order to choose an action.


29 of 38

Cont

  • The Model-based agent can work in a partially observable environment, and track the situation.
  • A model-based agent has two important factors:
    • Model: It is knowledge about "how things happen in the world," so it is called a Model-based agent.
    • Internal State: It is a representation of the current state based on percept history.
  • These agents have the model, "which is knowledge of the world" and based on the model they perform actions.
  • Updating the agent state requires information about:
    • How the world evolves
    • How the agent's action affects the world.


30 of 38

Cont

  • Key difference (wrt simple reflex agents):
    • Agents have internal state, which is used to keep track of past states of the world.
    • Agents have the ability to represent change in the World.

function REFLEX-AGENT-WITH-STATE(percept) returns action
  static: state, a description of the current world state
          rules, a set of condition-action rules

  state ← UPDATE-STATE(state, percept)
  rule ← RULE-MATCH(state, rules)
  action ← RULE-ACTION[rule]
  state ← UPDATE-STATE(state, action)
  return action

Thus a state-based agent works as follows:

  • information comes from the sensors - percepts
  • based on this, the agent updates the current state of the world
  • based on the state of the world and its knowledge (memory), it triggers actions through the effectors
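A hedged Python sketch of the same loop is given below; the dictionary world model and the single toy rule are placeholders, not the original pseudocode.

# Sketch of a model-based (state-keeping) reflex agent; the dictionary
# world model and the toy rule are illustrative placeholders.

class ModelBasedReflexAgent:
    def __init__(self, rules):
        self.state = {}      # internal description of the current world state
        self.rules = rules   # list of (condition, action) pairs

    def update_state(self, **facts):
        self.state.update(facts)   # fold new information into the model

    def __call__(self, percept):
        self.update_state(last_percept=percept)        # how the world has evolved
        for condition, action in self.rules:
            if condition(self.state):                  # first matching rule wins
                self.update_state(last_action=action)  # how our action affects the world
                return action
        return "do-nothing"

agent = ModelBasedReflexAgent(
    rules=[(lambda s: s.get("last_percept") == "brake-light-on", "initiate-braking")]
)
print(agent("brake-light-on"))  # -> initiate-braking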


31 of 38

Goal-based agents

  • Knowledge of the current state of the environment is not always sufficient for an agent to decide what to do.
  • The agent needs to know its goal, which describes desirable situations.
  • Goal-based agents expand the capabilities of the model-based agent by having "goal" information.
  • They choose an action so that they can achieve the goal.
  • These agents may have to consider a long sequence of possible actions before deciding whether the goal is achieved or not. Such consideration of different scenarios is called searching and planning, which makes an agent proactive.


32 of 38

Cont.

Goal based agents

  • An agent with explicit goals chooses actions that achieve the goal
  • Involves consideration of the future:
    • Knowing about the current state of the environment is not always enough to decide what to do.
    • E.g. a 3x3 number-sorting game
    • E.g. At a road junction, the taxi can turn left, turn right, or go straight.
      • The right decision depends on where the taxi is trying to get to. As well as a current state description, the agent needs some sort of goal information,

E.g. being at the passenger's destination.


33 of 38

Structure of a Goal-based agent


function GOAL-BASED-AGENT(percept) returns action
  state ← UPDATE-STATE(state, percept)
  action ← SELECT-ACTION(state, goal)
  state ← UPDATE-STATE(state, action)
  return action
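As an illustrative sketch of "searching for a sequence of actions that achieves the goal", the toy Python below plans a route on a made-up road map; the map, start, and goal are assumptions, not a prescribed method.

from collections import deque

# Toy goal-based agent: breadth-first search over a tiny road map for a
# sequence of moves that reaches the goal.

ROADS = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def plan_route(start, goal):
    frontier = deque([[start]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:             # goal test: desirable situation reached?
            return path
        for nxt in ROADS[path[-1]]:      # expand successor states
            frontier.append(path + [nxt])
    return None

print(plan_route("A", "D"))  # -> ['A', 'B', 'D']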

34 of 38

Utility based agents


  • Goals are not really enough to generate high quality behavior.
  • E.g. route recommendation system
    • There are many action sequences that will get the taxi to its destination, thereby achieving the goal. Some are quicker, safer, more reliable, or cheaper than others; we need to weigh, for example, speed against safety.
  • Utility provides a way in which the chances of success can be weighed against the importance of the goals.
  • An agent that possesses an explicit utility function can make rational decisions.


35 of 38

Structure of a utility-based agent


function UTILITY-BASED-AGENT(percept) returns action
  state ← UPDATE-STATE(state, percept)
  action ← SELECT-OPTIMAL-ACTION(state, goal)
  state ← UPDATE-STATE(state, action)
  return action
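In code, the difference from the goal-based agent is that candidate plans are scored with a utility function instead of only being tested against a goal. The routes and weights below are made-up examples, not a prescribed method.

# Sketch: several routes all reach the destination (the goal), but a
# utility function ranks them; the weights are illustrative only.

routes = [
    {"name": "highway",    "time": 20, "risk": 0.30},
    {"name": "back roads", "time": 35, "risk": 0.05},
]

def utility(route, time_weight=1.0, risk_weight=100.0):
    # Higher is better: penalise slow and risky routes.
    return -(time_weight * route["time"] + risk_weight * route["risk"])

best = max(routes, key=utility)
print(best["name"])  # -> back roads (with these example weights)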

36 of 38

Learning agents

  • A learning agent in AI is an agent that can learn from its past experiences; it has learning capabilities.
  • It starts to act with basic knowledge and is then able to act and adapt automatically through learning.
  • A learning agent has four main conceptual components:
    • Learning element: It is responsible for making improvements by learning from the environment
    • Critic: The learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.
    • Performance element: It is responsible for selecting external actions
    • Problem generator: This component is responsible for suggesting actions that will lead to new and informative experiences.
  • Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve that performance.
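The four components can be sketched as methods of one class; everything below (the names, the feedback signal, the policy table) is illustrative rather than a standard design.

# Illustrative wiring of the four learning-agent components.

class LearningAgent:
    def __init__(self):
        self.performance_standard = 1.0   # fixed standard the critic compares against
        self.policy = {}                  # knowledge used by the performance element

    def performance_element(self, percept):
        # Selects the external action using current knowledge.
        return self.policy.get(percept, self.problem_generator())

    def critic(self, reward):
        # Describes how well the agent is doing w.r.t. the performance standard.
        return reward - self.performance_standard

    def learning_element(self, percept, action, feedback):
        # Improves future behaviour based on the critic's feedback.
        if feedback > 0:
            self.policy[percept] = action

    def problem_generator(self):
        # Suggests exploratory actions that lead to new, informative experiences.
        return "try-something-new"

agent = LearningAgent()
action = agent.performance_element("new-situation")              # -> "try-something-new"
agent.learning_element("new-situation", action, agent.critic(reward=2.0))
print(agent.performance_element("new-situation"))                # now remembered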

37 of 38

Learning Agent


38 of 38

Thank You!

?