© Copyright National University of Singapore. All Rights Reserved.
Topics Covered
Advanced Capabilities of LLMs
Reasoning and Planning abilities within LLMs
Reflexion
Finetune LLM to utilize the tools
Typical Multi-agent Environments
CAMEL (NeurIPS 2023, KAUST):
Shows how role-playing can be used to generate conversational data for studying the behaviors and capabilities of chat agents.
Voyager: the first LLM-based embodied lifelong learning agent, which continuously explores the world in Minecraft, acquires diverse skills, and achieves new tasks without human intervention.
ChatDev (THU):
Creates customized software from a natural-language idea through LLM-powered multi-agent collaboration.
Generative Agents: Interactive Simulacra of Human Behavior (Stanford):
A virtual world populated by 25 ChatGPT-powered agents.
Related Work
SmartPlay: A Benchmark for LLMs as Intelligent Agents
Evaluates LLMs as intelligent agents, featuring six diverse games to assess key capabilities and providing a roadmap for identifying gaps in current methodologies.
AgentBench: Evaluating LLMs as Agents
Related Work
Multi-agent Negotiation Games
DynaEval: A Dynamic Interaction-based Evaluation Framework for Assessing LLMs in Real-world Scenarios
Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback (Bargain)
Playing repeated games with Large Language Models
Prisoner’s Dilemma, Behavioral game theory, Economics
Motivation
Benchmarking Overview
5 scenarios, 5 large language models, 7 metrics
Scenarios
In the games of Chameleon and Undercover, quickly comprehending global information and acting on it are the keys to winning.
We mainly measure cognition (Judgement and Reasoning) and adaptability (Deception and Self-Awareness) in these two scenarios.
Play 1-round games across 20 topic settings; vote and guess the secret word.
Play 2-round games across 20 topic settings.
Scenarios
Game theory scenarios require the agent to make optimal decisions based on the given premise; they are better suited to reflecting rationality and collaboration (Cooperation and Coordination).
5-turn games and 21 competitions
Scenarios
Total cost should be shared…..
Proposal A: …
Proposal B: …
Proposal C: …
Evaluation Settings
Metrics
Judgement
Reasoning
Win rate
Inter-Analysis
Gold Reasoning
Who is the chameleon?
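The Judgement and Win-rate metrics above can be sketched as simple round-level scores. A minimal illustration, assuming a hypothetical round record with `vote`, `chameleon`, and `won` fields (these names are illustrative, not the benchmark's actual API):

```python
# Hypothetical sketch: "Judgement" scored as the fraction of rounds in which
# an agent's vote correctly identifies the chameleon, alongside raw win rate.

def judgement_score(rounds):
    """rounds: list of dicts with 'vote' (player voted for) and 'chameleon'."""
    correct = sum(1 for r in rounds if r["vote"] == r["chameleon"])
    return correct / len(rounds)

def win_rate(rounds):
    return sum(1 for r in rounds if r["won"]) / len(rounds)

# Made-up example data for three rounds.
rounds = [
    {"vote": "Player 2", "chameleon": "Player 2", "won": True},
    {"vote": "Player 1", "chameleon": "Player 3", "won": False},
    {"vote": "Player 3", "chameleon": "Player 3", "won": True},
]
print(judgement_score(rounds))  # 2 of 3 votes correct -> 0.666...
print(win_rate(rounds))
```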
Metrics
Self-awareness
Deception
As the chameleon, you guessed the secret word!
As the undercover, you won the game.
Tell me which Player you are and what is your role in the game.
I am the undercover
I am the non-chameleon
Metrics
Rationality
Cooperation
Coordination
Performance
Performance
The radar diagram on the left illustrates the performance of LLMs across various metrics. In the figure, "-T" denotes "-turbo", and "+P" denotes that the model has been augmented with PGM. The bar chart on the right shows the area occupied in the radar diagram, and the red line plots the average win rate across all games.
It is clearly observed that the larger the area occupied in the radar diagram, the higher the win rate. This shows that the proposed evaluation metrics effectively reflect the capability of the language models.
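The "area occupied in the radar diagram" has a concrete geometric meaning: place the metric scores on evenly spaced axes and take the area of the resulting polygon. A minimal sketch using the shoelace formula (the metric values below are made up for illustration):

```python
import math

def radar_area(scores):
    """Area of the polygon formed by plotting scores on evenly spaced axes."""
    k = len(scores)
    pts = [(s * math.cos(2 * math.pi * i / k), s * math.sin(2 * math.pi * i / k))
           for i, s in enumerate(scores)]
    # Shoelace formula over the closed polygon.
    area = 0.0
    for i in range(k):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % k]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

# A model that scores higher on every one of the 7 metrics occupies a
# strictly larger area, which is why area tracks overall capability.
weak = [0.4] * 7
strong = [0.8] * 7
print(radar_area(weak) < radar_area(strong))  # True
```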
PGM (Probabilistic Graphical Modeling)-enhancement
PGM Construction
[Player 1->all]: My clue is: The secret word is a small, round fruit that is commonly red or green in color. …..
[Player 2->all]: My clue is the fruit is sweet and smooth, it can be peeled.
[Player 3->['Player 3']]: PGM:
Player 1 -> [0.5,0.3,0.2]
Player 2 -> [0.4,0.3,0.3]
Player 3 (myself) -> [0.3,0.4,0.3]
As Player 3, in my own perspective, I am a non-chameleon.
I first evaluate from my own perspective:
Player 1 is more suspicious because the clue given is generic …
Player 2 is less suspicious because "can be peeled" suits a banana.
As for other players' perspectives:
I think now Player 1 thinks:
Player 3 is more suspicious because I have not given a clue yet.
Player 2 is also more suspicious because…
I think now Player 2 thinks:
Player 1 is more suspicious because the fruit is different…
Player 3 is no change because I have not given a clue yet.
Secret word: banana
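The PGM in the transcript above is a set of probability vectors: Player 3 keeps one suspicion distribution over the players for its own perspective and one for its estimate of each other player's perspective. A minimal sketch of that data structure, with the numbers copied from the example:

```python
# Player 3's belief state from the transcript: each vector is a probability
# distribution over who the chameleon is, ordered [Player 1, Player 2, Player 3].
beliefs = {
    "Player 1": [0.5, 0.3, 0.2],  # Player 3's estimate of Player 1's view
    "Player 2": [0.4, 0.3, 0.3],  # Player 3's estimate of Player 2's view
    "Player 3": [0.3, 0.4, 0.3],  # Player 3's own view
}

PLAYERS = ("Player 1", "Player 2", "Player 3")

def most_suspicious(dist):
    """Return the player with the highest suspicion probability."""
    return PLAYERS[dist.index(max(dist))]

# In Player 3's estimate of Player 1's perspective, Player 1 ranks highest.
print(most_suspicious(beliefs["Player 1"]))  # "Player 1"
```

After each new clue, the agent re-estimates these vectors in natural language (as in the transcript) and uses them to decide who to vote for or how to phrase its own clue.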
LLM Inference with PGM
PGM Prompt
PGM Enhanced Performance
Analysis
We find that, playing as non-chameleons against GPT-4, GPT-4+PGM wins the game while plain GPT-4 loses.
As for GPT-3.5-turbo, the result changed from a loss to an even vote after the model was enhanced with PGM.
Analysis
PGM promotes agreement among players and reduces their cost.
When most of the players play rationally, the scores and paybacks are much lower, approaching the well-known Nash equilibrium.
- GPT-4+PGM made more rational decisions than GPT-3.5-turbo+PGM
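Why rational play lowers paybacks can be seen in a standard public-goods payoff. A minimal sketch assuming the usual setup (each player invests part of an endowment, the pot is multiplied and split evenly); the endowment, multiplier, and investments are made-up illustrative numbers, not the benchmark's:

```python
def payoffs(investments, endowment=100, multiplier=1.5):
    """Each player keeps (endowment - investment) plus an equal share of the
    multiplied pot."""
    pot = sum(investments) * multiplier
    share = pot / len(investments)
    return [endowment - inv + share for inv in investments]

# If all three players invest the full endowment, everyone profits...
print(payoffs([100, 100, 100]))  # [150.0, 150.0, 150.0]
# ...but with multiplier < number of players, each unit invested returns less
# than it costs, so the rational (Nash) play is to invest nothing, and group
# paybacks drop to the bare endowment.
print(payoffs([0, 0, 0]))  # [100.0, 100.0, 100.0]
```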
Analysis
LLM awareness of arithmetic
LLM behaviors with varying topic settings.
Except for GPT-4, the average total investments of these LLMs almost all exceed 100.
Even when the multipliers are varied, the decisions of the LLMs remain stable.
https://zhuanlan.zhihu.com/p/667111690
https://zhiyuanhubj.github.io/MAgIC/
https://github.com/cathyxl/MAgIC
https://pypi.org/project/MAgIC-LLM/0.4.0/
https://staging.eval.ai/web/challenges/challenge-page/795/overview
Takeaway Insights
Access our model space of Bubo-GPT via
Scan me to access our Main Page!