Agentic Interactions
Alex Imas
Behavioral Science, Economics, and Applied AI
University of Chicago Booth School
alex.imas@chicagobooth.edu
with Kevin Lee (UMichigan) and Sanjog Misra (Booth)
Builds on
All of economics is about agentic interactions.
Economics of agents
…knowledge about the most efficient arrangements is not known to anyone in advance. On the contrary, it is generated by the interaction of the economic agents once they are free to interact under the market system and made possible by the framework of the rule of law. …
- Hayek
[Diagram: principal → instruction → agent → action → outcome]
[Diagram: principal → instruction → agent → action]
Agentic AI is poised to transform economic interactions across society.
What is an “AI agent”?
[Diagram: AI agent (foundation model) → action]
“AI is like electricity — it doesn’t have opinions, only optimization functions.”
-Andrew Ng
“AI doesn’t get tired, it doesn’t get irritable, and it doesn’t make mistakes through carelessness — that alone makes it a better decision-maker in many domains.” - Musk
“An artificial agent need not be wiser than us in every respect; being free from bias may be enough to make it superior in decision-making.”
- Bostrom
“Once we can teach machines to learn from data better than we do, they stop being tools — they become our most objective analysts.” - Domingos
One potential outcome: Homogeneity predicted by representative-agent models will emerge in the economy
Representative AI agent
What is missing from this discussion?
Key Ingredient
[Diagram: principal → instruction → agent → action]
Foundation priors: any output of a foundation model is a draw from a subjective, informative prior predictive density.
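A minimal way to state this claim, with notation introduced here rather than taken from the slides: an output y for a prompt x can be read as a draw from the model's prior predictive density, integrating over the model's implicit beliefs θ.

\[
  y \mid x \;\sim\; p(y \mid x) \;=\; \int p(y \mid x, \theta)\, \pi(\theta)\, d\theta
\]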
The humanness of AI agents
In economics, the principal gives the agent a contract, but it will be incomplete.
- Cannot give instructions for every contingency
- Subjective
Once you frame it as a principal-agent problem, everything hinges on the contract, i.e., the prompt.
- Similar agency issues arise (black-box objective function)
- Different types of principals will generate different contracts
Principal-“agent” model
The prompt (contract) is written based on the anticipated outcome.
Greater subjectivity through an iterative process will generate greater correlation between agent behavior and the principal's traits.
Human traits will be reflected in the prompt. Biases of the principal will impact the contract.
[Diagram: principal → instruction → agent → action → outcome]
Principal-“agent” model
Hypothesis: outcomes in agentic interactions will be a function of human heterogeneity, i.e., of individual differences in ability, effort, biases, traits, and characteristics.
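One way to write the hypothesis down (notation introduced here, not taken from the slides): let τ_i collect principal i's traits and M_j index the foundation model; the outcome runs through the prompt the principal writes.

\[
  y_{ij} \;=\; g\big(c_i(\tau_i),\, M_j\big) + \varepsilon_{ij}
\]

where c_i(τ_i) is the prompt (contract) written by principal i. The hypothesis is that ∂y_{ij}/∂τ_i ≠ 0: heterogeneity in human traits shows up as heterogeneity in outcomes.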
The experiment
Setup
• Vehicle: 2020 Toyota Camry LE
• Mileage: 45,000 miles
• Blue book value range: $18,000 - $22,000
• Location: Chicago metropolitan area
• Car history: No accidents reported
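A hypothetical toy sketch, not the authors' code: two scripted bargainers stand in for the LLM buyer and seller agents negotiating over the listing above, with each principal's written contract collapsed into a single invented "aggressiveness" parameter. The point is only to illustrate how principal heterogeneity can propagate into negotiated prices, including 50-50 splits and impasses.

# Hypothetical sketch; parameter names and concession rule are assumptions for illustration.
BLUE_BOOK_LOW, BLUE_BOOK_HIGH = 18_000, 22_000  # listing's value range

def negotiate(buyer_aggressiveness, seller_aggressiveness, max_rounds=10, tol=100):
    """Alternating concessions; more aggressive principals concede more slowly."""
    buyer_offer, seller_ask = BLUE_BOOK_LOW, BLUE_BOOK_HIGH
    for _ in range(max_rounds):
        gap = seller_ask - buyer_offer
        if gap <= tol:                      # close enough: deal at the midpoint
            return round((buyer_offer + seller_ask) / 2)
        buyer_offer += 0.5 * (1 - buyer_aggressiveness) * gap
        seller_ask -= 0.5 * (1 - seller_aggressiveness) * gap
    return None                             # no agreement reached

print(negotiate(0.2, 0.2))   # two accommodating principals -> 20000, the 50-50 split
print(negotiate(0.9, 0.1))   # aggressive buyer vs. accommodating seller -> 18425
print(negotiate(0.9, 0.9))   # two aggressive principals -> None (impasse)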
Why is this a good setup?
Human Negotiation Task
Outcomes
[Distribution of negotiated outcomes: spike at 0; spike at 50-50]
What explains heterogeneity?
identities matter …
prompts matter …
Individual Characteristics
Individual Covariates
Humans act differently…
…but the level of heterogeneity is similar to, if not smaller than, that observed with AI agents.
Machine fluency
Predictable variation in outcomes
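A hedged sketch, not the authors' analysis: one way to check whether principal covariates (including a machine-fluency proxy) predict negotiated outcomes is a simple linear regression. The covariate names and all numbers below are simulated for illustration only.

# Hypothetical sketch: regress negotiated prices on principal covariates
# to ask how much outcome variation they predict.
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented covariates for each principal (illustrative only).
machine_fluency = rng.normal(size=n)   # e.g., prior experience prompting LLMs
risk_tolerance = rng.normal(size=n)
effort = rng.normal(size=n)

# Simulated outcome: negotiated price, partly driven by the covariates.
price = 20_000 + 600 * machine_fluency + 200 * risk_tolerance + rng.normal(scale=800, size=n)

X = np.column_stack([np.ones(n), machine_fluency, risk_tolerance, effort])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)

fitted = X @ beta
r2 = 1 - np.sum((price - fitted) ** 2) / np.sum((price - price.mean()) ** 2)
print("coefficients:", np.round(beta, 1))
print("R^2 (share of outcome variation predicted by covariates):", round(r2, 3))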
Better models?
…not much of a difference.
Remark #1
Human heterogeneity becomes economic infrastructure: individual differences shape AI-driven outcomes at scale.
Remark #2
Specification hazard: incomplete contracts will feature “black box” objective functions. Outcomes will be less a function of structuring incentives and more a matter of alignment.
Remark #3
Welfare and policy must evolve: designing equitable AI systems requires acknowledging and governing inherited human variation. Machine fluency may be a new source of inequity.
Remark #4
Human diversity, experience, and ingenuity can also be transferred to AI. Our agents extend our creativity, adaptability, and capacity for good.
Three entities: sellers, buyers, and the platform. There is subjectivity in the platform too.
Next phase: agents interacting with humans.
Discussion
THANK YOU.
alex.imas@chicagobooth.edu