Milestones in AI history
Test for machine intelligence (1950):
Alan Turing proposed a test of whether a machine can think and make decisions as rationally and intelligently as a human being.
In the test, an interrogator must work out which answers come from a human and which from a machine.
If the interrogator cannot reliably distinguish between the two, the machine passes the test: it is indistinguishable from a human being.
The father of AI – John McCarthy (1955)
John McCarthy, an American Computer Scientist, coined the term Artificial Intelligence in his proposal for the Dartmouth Conference, the first-ever AI conference held in 1956.
The objective was to design a machine that would be capable of thinking and reasoning like a human. He believed that this scientific breakthrough would unquestionably happen within 5-5000 years.
The first chatbot – Eliza (1964)
Eliza – the first-ever chatbot – was created in the mid-1960s by Joseph Weizenbaum at the Artificial Intelligence Laboratory at MIT. Eliza simulated a psychotherapist by matching user input against scripted patterns and returning pre-written responses, so users felt they were talking to someone who understood their problems.
Deep Blue (1997)
IBM's Deep Blue chess computer defeated reigning world champion Garry Kasparov, heralding the potential for AI to outperform humans in complex strategic games.
Voice recognition on iPhone (2008) and Siri (2011)
This advancement gave users the power to quite literally *voice* their queries and concerns.
AlphaGo (2016)
Developed by DeepMind, AlphaGo became the first AI program to defeat a world champion Go player, Lee Sedol. This achievement was considered a significant milestone in AI, as Go was long regarded as a game too complex for computers to master.
Generative AI (2020’s)
OpenAI launched ChatGPT on 30 November 2022 – an advanced AI chatbot capable of conversing with users in natural, human-like language.
Generative AI
Creative Systems & Intelligent Workflows
Overview & Requirements
AI Landscape
Image Generation | Audio | AI Video | LLMs
ComfyUI
LARGE LANGUAGE MODELS : Part 1
Course Overview and Structure
Key modules and learning outcomes
Learning Objectives
Key Modules
Attention Is All You Need!
GPT | Generative Pre-Trained Transformer
Understanding Large Language Models
Large Language Models (LLMs) generate text by predicting patterns over tokens, the basic units of language representation. By analyzing context, they interpret input and produce coherent responses, enabling advanced interaction.
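The idea of next-token pattern prediction can be sketched with a toy bigram model. This is a hedged illustration, not how a real LLM works: real models learn neural weights over trillions of tokens, but the prediction objective is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real LLM trains on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which (a bigram model --
# the simplest possible "pattern predictor").
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent next token seen in the corpus."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it followed "the" most often
```

A real transformer replaces the frequency table with learned attention over the full context window, but the output is still a probability distribution over the next token.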
The Mechanics of Transformers Explained
1. Attention Mechanism
Enables models to focus on relevant input data.
2. Layer Normalization
Stabilizes training and improves model performance.
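Both mechanisms can be sketched in a few lines of NumPy. This is a minimal, self-contained illustration with toy dimensions, not a production implementation: scaled dot-product attention as softmax(QKᵀ/√d)·V, followed by per-row layer normalization.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))  # each row sums to 1
    return weights @ V

def layer_norm(x, eps=1e-5):
    """Normalize each row to zero mean and unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

# 3 tokens with 4-dimensional embeddings (toy sizes)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
out = layer_norm(attention(x, x, x))  # self-attention + layer norm
print(out.shape)  # (3, 4)
```

Using the same matrix for Q, K, and V is what makes this *self*-attention: every token attends over every other token in the sequence.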
Training Stages
LLM = 1TB Lossy, probabilistic “zip file of the internet”
Parameters store world knowledge, though usually out of date by a few months
Pre-training: $10M, 3 months of training on internet documents
Post-training: Cheaper finetuning with RLHF, RL on Conversations
LLM Key Concepts
LLM Limitations: The 3 Critical Failures
Transformers
"The cat chased the mouse, but it ran away."
Prompt Engineering
Prompting is the process of giving a Gen AI tool specific instructions to obtain new information or achieve a desired outcome on a task. (text | images | videos | sound | code)
The 4-Step prompting cycle
1. Be Specific with Context
2. Provide Examples
3. Request Reasoning
4. Iterate & Refine
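The first three steps of the cycle can be folded into a simple prompt-builder sketch (a hypothetical helper of our own, assuming a plain-text prompt is sent to any Gen AI tool; step 4, iteration, happens by re-running with a refined version):

```python
def build_prompt(task, context, examples, ask_reasoning=True):
    """Assemble a prompt: specific context, examples,
    and an explicit request for reasoning."""
    parts = [f"Context: {context}", f"Task: {task}"]
    for inp, out in examples:
        parts.append(f"Example input: {inp}\nExample output: {out}")
    if ask_reasoning:
        parts.append("Explain your reasoning step by step before answering.")
    return "\n\n".join(parts)

prompt = build_prompt(
    task="Classify the sentiment of the review.",
    context="Reviews are for a food-delivery app; label Positive or Negative.",
    examples=[("The courier was fast and friendly.", "Positive")],
)
print(prompt)
```

The structure matters more than the helper: context narrows the task, examples anchor the output format, and the reasoning request tends to improve accuracy on multi-step tasks.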
Prompting Techniques: 2025
5 STEP FRAMEWORK [Google]
4 ITERATION METHODS
Example: Good vs Poor Prompts
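As a hedged illustration (these example prompts are ours, not taken from the slides), the difference usually comes down to specificity of audience, format, and tone:

```python
# A vague prompt leaves audience, length, format, and tone to chance.
poor = "Write about dogs."

# A specific prompt pins down all four.
good = (
    "Write a 150-word blog introduction about adopting senior dogs, "
    "aimed at first-time owners, in a warm, encouraging tone. "
    "End with one practical next step."
)

print(f"poor: {len(poor)} chars | good: {len(good)} chars")
```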
Developer Use Cases in LLMs
Code Intelligence & Engineering Support
Software Design, Architecture, and Documentation
🚀 TL;DR: LLMs help developers:
Write, reason, document, debug, test, integrate, research, automate, and design faster!
But the skill lies in knowing when to hand off, when to verify, and when to override.
DEMO TIME