Prompting

Attendance

Generative Transformers

  • BERT is an “encoder” architecture: it can only convert text into embeddings, not generate new text
  • A “decoder” generates text from embeddings

Prompting Strategies

  • Adopt personas
  • Chain of thought
  • Retrieval Augmented Generation
  • Provide examples (few shot)
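
The strategies above can be combined in a single prompt. The sketch below is a hypothetical helper (the template, persona, and examples are made up, not from the slides) that stacks a persona, few-shot examples, and a chain-of-thought cue into one prompt string that any chat-style LLM would accept:

```python
# Sketch: assembling a prompt that combines three of the strategies.
# build_prompt and its template are illustrative assumptions, not a
# standard API.

def build_prompt(persona, examples, question):
    """Combine a persona, worked examples (few-shot), and a
    chain-of-thought cue into one prompt string."""
    lines = [f"You are {persona}."]          # persona
    for q, a in examples:                    # few-shot examples
        lines.append(f"Q: {q}")
        lines.append(f"A: {a}")
    lines.append(f"Q: {question}")
    lines.append("A: Let's think step by step.")  # chain-of-thought cue
    return "\n".join(lines)

prompt = build_prompt(
    "a scientific expert",
    [("What gas do plants absorb?", "Carbon dioxide.")],
    "What gas do plants release?",
)
print(prompt)
```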

Persona

“As a computer programmer”

“As a scientific expert”

“As a customer service chatbot”

...
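
In chat-style LLM APIs, a persona like the ones above is typically set through a "system" message. The message format below follows the widely used OpenAI-style convention, which other providers may vary:

```python
# Sketch: setting a persona via a system message. The dict shape
# ({"role": ..., "content": ...}) follows the common OpenAI-style chat
# convention; check your provider's API for the exact format.

def with_persona(persona, user_text):
    """Prepend a persona as a system message to a user turn."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": user_text},
    ]

messages = with_persona("You are a customer service chatbot.",
                        "Where is my order?")
```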

Retrieval Augmented Generation
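
A minimal sketch of the RAG idea: retrieve the document most relevant to the question, then stuff it into the prompt as context. Real systems score relevance with embedding similarity over a vector index; this toy version uses word overlap, and the documents are made up:

```python
# Toy retrieval-augmented generation. Word-overlap scoring stands in
# for embedding similarity; everything here is a simplified sketch.

def retrieve(question, documents):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def rag_prompt(question, documents):
    """Build a prompt that grounds the answer in retrieved context."""
    context = retrieve(question, documents)
    return f"Context: {context}\nQuestion: {question}\nAnswer:"

docs = [
    "BERT is an encoder that turns text into embeddings.",
    "A decoder generates text from embeddings.",
]
print(rag_prompt("What does a decoder generate?", docs))
```

Grounding the model in retrieved context is one common way to reduce hallucination, since the answer can be checked against the supplied passage.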

Hallucination

  1. The first article, “Prompt Engineering for ChatGPT” by Sabit Ekin, discusses the importance of prompt engineering and how, when writing prompts, it’s crucial to be as specific as possible so that ChatGPT can clearly understand the user’s intention. My question: what problems could this create for students who don’t speak English as their first language, since they may not be able to phrase their questions as coherently as native English speakers?

Pranathi

1. What are the potential dangers prompt engineering can bring about?

Alice

LLM agents
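
A toy sketch of the agent control loop: the model proposes an action, the harness runs the matching tool, and the result is fed back until the model produces a final answer. The "model" below is a stub standing in for a real LLM, and the ACTION/FINAL text protocol and tool names are invented for illustration:

```python
# Toy LLM agent loop. fake_model is a stub replacing a real LLM call;
# the ACTION/FINAL protocol and the calc tool are made-up assumptions.

def fake_model(history):
    """Stub model: first asks for a calculation, then answers."""
    if not any(h.startswith("RESULT") for h in history):
        return "ACTION calc 6*7"
    return "FINAL 42"

def run_agent(model, tools, max_steps=5):
    """Loop: model proposes an action, run the tool, feed result back."""
    history = []
    for _ in range(max_steps):
        out = model(history)
        history.append(out)
        if out.startswith("FINAL"):
            return out.split(" ", 1)[1]
        _, tool, arg = out.split(" ", 2)
        history.append(f"RESULT {tools[tool](arg)}")
    return None

tools = {"calc": lambda expr: eval(expr)}  # toy tool: arithmetic only
print(run_agent(fake_model, tools))        # prints 42
```

The same loop structure underlies real agent frameworks; only the model call and the tool set change.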

The thing I’m still unclear on is how researchers evaluate the “naturalness” of LLM conversations. I am wondering what makes an interaction feel real?

-Cortez

2. How are scripts even applicable to measuring how capable AI is? It seems equivalent to taking a test with the answer key in front of you.

-Jacob

I was confused about how we can effectively evaluate naturalness in AGENTS mode conversations beyond turn length or human ratings. Is there a standardized way to quantify and calculate it more objectively?

-Hussain

Do you think it is possible to simulate a realistic social situation with LLMs? Do you think we can truly recreate human consciousness within AI?

-Devin