
Agentic Interactions

Alex Imas

Behavioral Science, Economics, and Applied AI

University of Chicago Booth School

alex.imas@chicagobooth.edu

with Kevin Lee (UMichigan) and Sanjog Misra (Booth)


Builds on


All of economics is about agentic interactions.


Economics of agents

…knowledge about the most efficient arrangements is not known to anyone in advance. On the contrary, it is generated by the interaction of the economic agents once they are free to interact under the market system and made possible by the framework of the rule of law. …

- Hayek

[Diagram: two principal–agent pairs. Each principal issues an instruction to their agent; the agents' actions jointly determine the outcome.]


Agentic AI is poised to transform economic interactions across society.


What is an “AI agent”?


[Diagram: agent (a foundation model) → action]


“AI is like electricity — it doesn’t have opinions, only optimization functions.”

- Andrew Ng


“AI doesn’t get tired, it doesn’t get irritable, and it doesn’t make mistakes through carelessness — that alone makes it a better decision-maker in many domains.” - Musk

“An artificial agent need not be wiser than us in every respect; being free from bias may be enough to make it superior in decision-making.”

- Bostrom

“Once we can teach machines to learn from data better than we do, they stop being tools — they become our most objective analysts.” - Domingos



One potential outcome: the homogeneity predicted by representative-agent models will emerge in the economy.

Representative AI agent


What is missing from this discussion?


Key Ingredient


[Diagram: principal → instruction → agent → action]


Foundation priors: any output of a foundation model is a draw from a subjective, informative prior predictive density.
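One minimal way to put this in notation (the symbols are ours, not from the slides): write $\theta$ for the model's parameters and $\mathcal{D}$ for its training corpus; an output $y$ given a prompt $x$ is then a draw from the prior predictive density

\[
y \sim p(y \mid x) = \int p(y \mid x, \theta)\, \pi(\theta \mid \mathcal{D})\, d\theta,
\]

where $\pi(\theta \mid \mathcal{D})$ is the subjective, informative prior induced by training. On this reading, no model output is “objective”: every draw inherits the prior.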


The humanness of AI agents


Principal-“agent” model

In economics, the principal gives the agent a contract, but it will be incomplete:

  • Cannot give instructions for every contingency

  • Subjective

Once you frame this as a principal-agent problem, everything hinges on the contract, i.e., the prompt:

  • Similar agency issues arise (black-box objective function)

  • Different types of principals will generate different contracts


Principal-“agent” model

  • The prompt (contract) is written based on the anticipated outcome.

  • Greater subjectivity through the iterative process will generate greater correlation between agent behavior and the principal's traits.

  • Human traits will be reflected in the prompt; the biases of the principal will impact the contract.

[Diagram: principal → instruction → agent → action → outcome]


Hypothesis: outcomes in agentic interactions will be a function of human heterogeneity: of individual differences in ability, effort, biases, traits, and characteristics.
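One way to write the hypothesis down (our notation, not the slides'): traits shape the prompt, and the prompt shapes the outcome,

\[
Y_i = f(\mathrm{prompt}_i) + \varepsilon_i, \qquad \mathrm{prompt}_i = g(\mathrm{ability}_i, \mathrm{effort}_i, \mathrm{biases}_i, \mathrm{traits}_i),
\]

so if $g$ is non-constant across principals, human heterogeneity propagates into outcomes $Y_i$ even when every principal shares the same objective.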


The experiment


Setup

  • Participants were asked to write instructions for an AI agent that would negotiate on their behalf in a used-car negotiation, conducted over multiple rounds against other participants' AI agents.

  • Each subject wrote prompts for both Buyer and Seller roles.

  • Negotiation parameters

• Vehicle: 2020 Toyota Camry LE

• Mileage: 45,000 miles

• Blue book value range: $18,000 - $22,000

• Location: Chicago metropolitan area

• Car history: No accidents reported


Setup

  • Participants were instructed to write a “strategic playbook” that the AI agent would use for varying values of the negotiation parameters.

  • Specifically, their prompt should allow their AI agent to perform well in a variety of different situations.

  • They were given “good” and “bad” examples.


Setup

  • Goal: write a strategically effective prompt; the higher the surplus the AI agent obtains, the higher the bonus.

  • Surplus was defined relative to each party's outside option: the Seller's trade-in value ($18,000) and the Buyer's dealer price ($22,000).

  • If no agreement was reached, both parties fell back to their outside options (surplus = 0).

  • The bonus was calculated as the average of the subject's Buyer and Seller surpluses, divided by 1,000 (see the sketch below).
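A minimal sketch of this payoff arithmetic in Python; the $18,000 and $22,000 outside options come from the slides, while the function names and example prices are ours.

SELLER_OUTSIDE = 18_000  # Seller's outside option: trade-in value
BUYER_OUTSIDE = 22_000   # Buyer's outside option: dealer price

def seller_surplus(price):
    # The Seller gains whatever the agreed price exceeds the trade-in value.
    return price - SELLER_OUTSIDE

def buyer_surplus(price):
    # The Buyer gains whatever the agreed price falls below the dealer price.
    return BUYER_OUTSIDE - price

def bonus(buyer_price, seller_price):
    # Bonus: average of the subject's Buyer and Seller surpluses, over 1,000.
    # A price of None marks an impasse, which pays the outside option (0).
    b = buyer_surplus(buyer_price) if buyer_price is not None else 0
    s = seller_surplus(seller_price) if seller_price is not None else 0
    return (b + s) / 2 / 1_000

print(bonus(20_000, 20_000))  # 50-50 split in both roles -> bonus of 2.0
print(bonus(None, 21_000))    # impasse as Buyer, $21,000 deal as Seller -> 1.5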


Why is this a good setup?

  • Participants are “endowed” with preferences: maximize surplus.

  • Individual heterogeneity in preferences, e.g., risk aversion or different tastes, should not be reflected in outcomes.

  • Everyone has the exact same goal.


Setup

  • After writing their prompts, participants could simulate what the negotiation outcomes would look like across rounds.

  • They could test as many times as they wanted before submitting their final prompts.


Human Negotiation Task

  • Same parameters and general instructions as the AI-agent negotiation task

  • Negotiations ended either when an agreement was reached or when the maximum number of rounds was reached; the bonus payment depended on surplus.

  • Participants were assigned either the Buyer or the Seller role and paired with another participant in the complementary role.

  • Real-time negotiation in a multi-stage ultimatum game format, as in the AI-agent case (see the protocol sketch below).
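For concreteness, a hedged sketch of an alternating-offer (multi-stage ultimatum) protocol in Python; the round cap, the Seller opening, and the toy FixedPolicy strategy are our assumptions, not details taken from the slides.

MAX_ROUNDS = 10  # assumed cap; the study sets its own maximum

def negotiate(buyer, seller, max_rounds=MAX_ROUNDS):
    # Parties alternate proposals; each round the responder either accepts
    # the price or the roles swap. No deal within the cap means impasse.
    proposer, responder = seller, buyer  # assume the Seller opens
    for _ in range(max_rounds):
        price = proposer.make_offer()
        if responder.accepts(price):
            return price  # agreement: surpluses are computed from price
        proposer, responder = responder, proposer
    return None  # impasse: both parties take their outside options

class FixedPolicy:
    # Toy stand-in for a negotiator's strategy: open at `anchor`, concede
    # `step` on each of your own offers, accept any price at or beyond `limit`.
    def __init__(self, anchor, step, limit, is_buyer):
        self.next_offer, self.step = anchor, step
        self.limit, self.is_buyer = limit, is_buyer
    def make_offer(self):
        offer = self.next_offer
        self.next_offer += self.step if self.is_buyer else -self.step
        return offer
    def accepts(self, price):
        return price <= self.limit if self.is_buyer else price >= self.limit

buyer = FixedPolicy(anchor=18_500, step=300, limit=20_000, is_buyer=True)
seller = FixedPolicy(anchor=21_500, step=300, limit=19_500, is_buyer=False)
print(negotiate(buyer, seller))  # -> 19700 under these toy strategies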


[Figure: distribution of negotiation outcomes, with a spike at 0 (impasse) and a spike at the 50-50 split]


What explains heterogeneity?

  • Null hypothesis of “objective” agents: variation is generated by the stochasticity of the language model.

  • Even identical prompts produce a distribution of outcomes; under the null, that stochasticity in the generative process accounts for all of the observed variation (see the sketch below).
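A hedged sketch of how that null can be checked; the toy outcome generator below is ours and merely stands in for the actual agent-vs-agent simulator.

import random
import statistics

def run_negotiation(prompt_effect, noise_sd=500):
    # Toy stand-in: surplus = prompt-driven component + model stochasticity.
    return prompt_effect + random.gauss(0, noise_sd)

# Within-prompt spread: one fixed prompt, many runs (the null's baseline).
same_prompt = [run_negotiation(prompt_effect=2_000) for _ in range(1_000)]

# Between-prompt spread: each participant's prompt shifts the outcome.
effects = [random.gauss(2_000, 800) for _ in range(1_000)]
across_prompts = [run_negotiation(e) for e in effects]

print(statistics.stdev(same_prompt))     # ~500: stochasticity alone
print(statistics.stdev(across_prompts))  # ~940: stochasticity + prompts

If the spread observed across participants exceeds the identical-prompt baseline, the excess heterogeneity traces back to the principals, not the model.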


identities matter …


prompts matter …


Individual Characteristics

  • Individual characteristics explain ~16.9% of the variation in outcomes.
    • Note: 63% is explained by prompt embeddings (see the sketch below).
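A hedged sketch of the kind of variance decomposition behind these numbers; the arrays are synthetic placeholders rather than the study's data, and ridge regression with cross-validated R^2 is our choice of method.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
X_covariates = rng.normal(size=(n, 10))   # demographics, traits, etc.
X_embeddings = rng.normal(size=(n, 256))  # text embeddings of the prompts
# Synthetic outcomes that load on both feature sets, plus noise.
y = X_covariates[:, 0] + X_embeddings[:, :5].sum(axis=1) + rng.normal(size=n)

for name, X in [("covariates", X_covariates), ("embeddings", X_embeddings)]:
    r2 = cross_val_score(RidgeCV(), X, y, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")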


Individual Covariates


Humans act differently…

…but the level of heterogeneity is similar to, if not smaller than, that among AI agents.


Machine fluency

Predictable variation in outcomes


Better models?

…not much of a difference.


Remark #1

Human heterogeneity becomes economic infrastructure — individual differences shape AI-driven outcomes at scale.


Remark #2

Specification hazard — incomplete contracts will feature “black box” objective functions. Outcomes will depend less on structuring incentives and more on alignment.


Remark #3

Welfare and policy must evolve — designing equitable AI systems requires acknowledging and governing inherited human variation. Machine fluency may be a new source of inequity.


Remark #4

Human diversity, experience, and ingenuity can also be transferred to AI. Our agents extend our creativity, adaptability, and capacity for good.


Discussion

  • Three entities: sellers, buyers, and the platform. There is subjectivity in the platform too.

  • Next phase: agents interacting with humans.


THANK YOU.

alex.imas@chicagobooth.edu