1 of 32

© Copyright National University of Singapore. All Rights Reserved.

2 of 32


3 of 32


4 of 32

Topics Covered

  • Background and Motivation
  • Scenarios and Metrics
  • Evaluation Performance
  • PGM
  • Enhanced Performance and Analysis
  • Conclusion


5 of 32

Advanced Capabilities of LLMs

Reasoning and planning abilities within LLMs

Reflexion

Fine-tuning LLMs to utilize tools


6 of 32

Typical Multi-agent Environments

CAMEL (NeurIPS 2023, KAUST):

Shows how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents.

Voyager: the first LLM-based embodied lifelong learning agent, which continuously explores the world of Minecraft, acquires diverse skills, and achieves new tasks without human intervention.

ChatDev (THU):

Creates customized software from a natural-language idea through LLM-powered multi-agent collaboration.

Generative Agents: Interactive Simulacra of Human Behavior (Stanford):

A virtual world populated by 25 ChatGPT-powered agents


7 of 32

Related Work

SmartPlay: A Benchmark for LLMs as Intelligent Agents

Evaluates LLMs as intelligent agents across six diverse games that assess key capabilities, providing a roadmap for identifying gaps in current agent methodologies.

AgentBench: Evaluating LLMs as Agents


8 of 32

Related Work

Multi-agent Negotiation Games

DynaEval: A Dynamic Interaction-based Evaluation Framework for Assessing LLMs in Real-world Scenarios

Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback (Bargain)

Playing repeated games with Large Language Models

Prisoner’s Dilemma, Behavioral game theory, Economics


9 of 32

Motivation

  • Decision making is complex because each agent is limited to a local perspective.
  • Each agent must understand and predict the actions of other agents, which requires highly developed cognitive abilities.
  • Each agent needs to collaborate with other agents; rationality enables them to make decisions based on logic and evidence rather than blind action.
  • Each agent needs the adaptability to quickly adjust its strategies in response to new situations.
  • The system is inherently dynamic, characterized by its ever-changing nature.
  • Achieving the final goal places high demands on collaboration among agents.


10 of 32

Benchmarking Overview

5 scenarios, 5 large language models, 7 metrics


11 of 32

Scenarios

In the games of Chameleon and Undercover, quickly comprehending global information and taking corresponding actions are the keys to winning.

We mainly measure cognition (Judgement and Reasoning) and adaptability (Deception and Self-Awareness) in these two scenarios.

Play a 1-round game across 20 topic settings; players vote and guess the secret word.

Play a 2-round game across 20 topic settings.


12 of 32

Scenarios

Game theory scenarios require the agent to make optimal decisions based on the given premise; they are better suited to reflecting rationality and collaboration (Cooperation and Coordination).

5-turn games and 21 competitions


13 of 32

Scenarios


  1. All Cooperate: Each gets 3 points.
  2. Two Cooperate, One Betrays: The betrayer gets 5 points; cooperators get 2 points each.
  3. One Cooperates, Two Betray: The cooperator gets 2 points; betrayers get 3 points each.
  4. All Betray: Each gets 2 points.
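The four payoff cases above can be sketched as a small function (the action encoding and function name are illustrative, not from the slides):

```python
# Payoffs for the three-player Prisoner's Dilemma described above.
# Actions are "C" (cooperate) or "B" (betray).

def payoffs(actions):
    """Return each player's score given a list of 'C'/'B' actions."""
    betrayers = actions.count("B")
    scores = []
    for a in actions:
        if betrayers == 0:            # all cooperate
            scores.append(3)
        elif betrayers == 1:          # two cooperate, one betrays
            scores.append(5 if a == "B" else 2)
        elif betrayers == 2:          # one cooperates, two betray
            scores.append(3 if a == "B" else 2)
        else:                         # all betray
            scores.append(2)
    return scores
```

For example, `payoffs(["B", "C", "C"])` returns `[5, 2, 2]`: the lone betrayer scores highest, which is what makes betrayal tempting despite all-cooperate being better collectively.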

Cost Sharing: the total cost should be shared…..

Proposal A: …

Proposal B: …

Proposal C: …


14 of 32

Evaluation Settings


15 of 32


16 of 32

Metrics

Judgement

Reasoning

Win rate

(Figure: Inter-Analysis against Gold Reasoning — "Who is the chameleon?")


17 of 32

Metrics

Self-Awareness

Deception

(Figure examples — Deception: as the chameleon, guessing the secret word, or winning the game as the undercover. Self-Awareness: asked "Tell me which Player you are and what is your role in the game," the agent answers "I am the undercover" / "I am the non-chameleon.")


18 of 32

Metrics

Rationality

Cooperation

Coordination


19 of 32

Performance

  • The most prominent performer is the GPT-4-turbo method, showcasing outstanding overall performance with a remarkable win rate of 56.5%.
  • Following closely is GPT-4, which achieves a win rate of 49.3%, demonstrating its competitiveness, while GPT-3.5-turbo remains superior to LLaMa-2-70B.
  • We also assess other popular commercial LLMs such as PaLM 2, Claude 2, and Cohere; the experimental results indicate that their abilities in multi-agent settings fall between GPT-3.5-turbo and LLaMa-2-70B.


20 of 32

Performance

The radar diagram on the left illustrates the performance of LLMs across various metrics. In the figure, "-T" denotes "-turbo", and "+P" denotes that the model has been augmented with PGM. The bar chart on the right denotes the area occupied in the radar diagram and the red line plots the average winning rates in all games.

It is clearly observed that the larger the area occupied in the radar diagram, the higher the winning rate. This confirms that the proposed evaluation metrics effectively reflect the capabilities of the language models.


21 of 32

PGM (Probabilistic Graphical Modeling)-enhancement



22 of 32

PGM Construction


[Player 1->all]: My clue is: The secret word is a small, round fruit that is commonly red or green in color. …..

[Player 2->all]: My clue is: the fruit is sweet and smooth, and it can be peeled.

[Player 3->['Player 3']]: PGM:

Player 1 -> [0.5,0.3,0.2]

Player 2 -> [0.4,0.3,0.3]

Player 3 (myself) -> [0.3,0.4,0.3]

  • Probability-based or Others?
  • Text?

As Player 3, in my own perspective, I am a non-chameleon.

I first evaluate from my own perspective:

Player 1 is more suspicious because the clue given is generic …

Player 2 is less suspicious because "can be peeled" suits a banana.

As for other players' perspectives:

I think now Player 1 thinks:

Player 3 is more suspicious because I have not given a clue yet.

Player 2 is also more suspicious because…

I think now Player 2 thinks:

Player 1 is more suspicious because the fruit is different…

Player 3 is unchanged because I have not given a clue yet.

Secret word: banana
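The PGM state shown above can be represented as, for each imagined perspective, a normalized probability vector over who the chameleon is. The following is a minimal sketch of that representation; the names and helper functions are illustrative assumptions, not the paper's exact formulation:

```python
# Player 3's PGM: for each player's imagined perspective, a probability
# distribution over [Player 1, Player 2, Player 3] being the chameleon.

def normalize(weights):
    """Scale a weight vector so its entries sum to 1."""
    total = sum(weights)
    return [w / total for w in weights]

pgm_player3 = {
    "Player 1": normalize([0.5, 0.3, 0.2]),
    "Player 2": normalize([0.4, 0.3, 0.3]),
    "Player 3": normalize([0.3, 0.4, 0.3]),
}

def most_suspicious(beliefs):
    """Index of the player this perspective considers most likely the chameleon."""
    return max(range(len(beliefs)), key=lambda i: beliefs[i])
```

Keeping a distribution per perspective is what lets the agent reason not only about who it suspects, but also about whom each other player is likely to suspect.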



23 of 32

LLM Inference with PGM

(Figure: comparison of LLM inference with and without the PGM.)


24 of 32

PGM Prompt


25 of 32

PGM Enhanced Performance

  • When assessing metrics related to collaboration, coordination, and rationality, GPT-4+PGM continues to shine: it achieves a perfect score of 100% in Coordination and the second-best score of 76.2% in Rationality.
  • In contrast, LLaMa-2-70B, while lagging in overall performance with a win rate of 18.1%, exhibits strengths in specific metrics, such as a relatively high consistency score of 74.0%, which surpasses GPT-3.5-turbo's 54%.
  • The results also confirm that the PGM enhancement boosts the inherent abilities of all selected models by 45% on average.


26 of 32

Analysis

We find that, when playing against GPT-4 as the non-chameleons, GPT-4+PGM wins the game while plain GPT-4 loses it.

For GPT-3.5-turbo, the result changes from a loss to an even vote after PGM enhancement.


27 of 32

  • Collaboration and Cost
  • Rationality and Repay

Analysis

Promotes agreement and reduces cost

  • GPT-3.5-turbo won in Win Rate while GPT-4 won in Cost
  • PGM increases the Win Rate and reduces the cost

When most of the players play rationally, the scores and payoffs become much lower, approaching the well-known Nash equilibrium.

- GPT-4+PGM made more rational decisions than GPT-3.5-turbo+PGM


28 of 32

Analysis

LLM awareness of arithmetic

LLM behaviors with varying topic settings.

Except for GPT-4, the average total investment of these LLMs almost always exceeds 100.

Even when the multipliers change, the LLMs' decisions remain stable.
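One standard way to formalize such an investment game is a public goods game. The sketch below uses assumed parameters (endowment 100, multiplier 1.5), not necessarily the paper's exact setup; with a per-capita return of 1.5/3 = 0.5 per unit invested, a fully rational player contributes nothing, so large total investments indicate play away from the Nash equilibrium:

```python
# Public goods game: each player keeps (endowment - contribution) and
# receives an equal share of the multiplied common pot. If the
# per-capita multiplier (multiplier / n) is below 1, contributing
# nothing is the dominant strategy.

def public_goods_payoffs(contributions, endowment=100, multiplier=1.5):
    n = len(contributions)
    pot_share = multiplier * sum(contributions) / n
    return [endowment - c + pot_share for c in contributions]
```

For instance, with contributions `[0, 100, 100]` the free rider earns 200 while the contributors earn 100 each, even though everyone contributing fully would yield 150 each: individual incentive and collective optimum pull apart, which is exactly the tension the rationality metric probes.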


29 of 32

https://zhuanlan.zhihu.com/p/667111690

https://zhiyuanhubj.github.io/MAgIC/


30 of 32

https://github.com/cathyxl/MAgIC

https://pypi.org/project/MAgIC-LLM/0.4.0/

https://staging.eval.ai/web/challenges/challenge-page/795/overview


31 of 32

Takeaway Insights

  1. Novel Evaluation Framework: Introduced a Competition-based benchmarking framework to assess LLMs in multi-agent settings, evaluating reasoning, planning, collaboration, etc.
  2. Diverse Testing Environments: Used games like Chameleon and Undercover, along with scenarios like Cost Sharing and Multi-player Prisoner's Dilemma, to create varied testing situations.
  3. Quantitative Model Evaluation: Evaluated seven multi-agent systems powered by different LLMs, revealing a significant capability gap of over threefold between the strongest (GPT-4) and weakest (Llama-2-70B) models.
  4. PGM Enhancement: Fortified the framework with Probabilistic Graphical Modeling (PGM) to improve LLMs' capabilities in complex social and cognitive dimensions.
  5. Average Performance Improvement: The PGM enhancement showed an average 45% improvement in the inherent abilities of all selected models.


32 of 32

