20 of 30

Evolution of Intelligence

Life survives by learning to predict its future

Collective Intelligence

‣ Language ‣ Sharing ‣ Craft

Artificial Intelligence

‣ Logic ‣ Simulation ‣ Learning

Stone Age · Communicate

Renaissance · Reason

Digital Age · Compute

Adapted from Prof. Yi Ma (Slides 20-28)

21 of 30

The Decade of Origin of Machine Learning

1940s — Laying the Groundwork for Machine Intelligence

  • 1943 · Artificial Neural Networks — McCulloch & Pitts
    • First mathematical model of a neuron (see the sketch after this list)
  • 1944 · Game Theory — John von Neumann
    • Framework for rational decision-making among agents
  • 1948 · Cybernetics — Norbert Wiener
    • Introduced feedback and control loops
  • 1948 · Information Theory — Claude Shannon
    • Quantified information, noise, and entropy
  • 1936 & 1950 · Turing Machine & Turing Test — Alan Turing
    • The 1936 Turing machine established the limits of universal computation; the 1950 Turing Test proposed a practical benchmark for machine “intelligence”
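
A minimal sketch of the McCulloch & Pitts threshold unit referenced above; the weights, threshold, and the AND/OR demonstrations are illustrative choices, not taken from the slide:

```python
# Minimal sketch of a McCulloch-Pitts style threshold unit.
# Weights, threshold, and the AND/OR demonstrations are illustrative.

def mp_neuron(inputs, weights, threshold):
    """Fire (output 1) iff the weighted sum of binary inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Hand-chosen parameters realize Boolean gates, echoing the 1943 construction
# of logical functions from networks of such units.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2,
              "AND:", mp_neuron((x1, x2), (1, 1), threshold=2),
              "OR:",  mp_neuron((x1, x2), (1, 1), threshold=1))
```

McCulloch and Pitts showed that networks of such units can compute any Boolean function, which is why this model is regarded as the starting point of artificial neural networks.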

22 of 30

Learning from Nature: Neurons & Nets

  • 1888 · Golgi & Cajal
    • Stained and mapped the structure of neurons and their dendrites
  • 1943 · McCulloch & Pitts
    • First mathematical model of the neuron (Rosenblatt’s perceptron followed in 1958)
  • 1959 · Hubel & Wiesel
    • Characterized receptive fields in the visual cortex
  • 1980 · Fukushima / 1989 · LeCun
    • Added convolution + pooling (Neocognitron, then convolutional networks; see the sketch below)
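
To make “convolution + pooling” concrete, here is a minimal NumPy sketch (assumptions: single channel, stride 1, no padding, non-overlapping max pooling; the image and kernel are illustrative):

```python
# Minimal convolution + max-pooling sketch in plain NumPy (illustrative, not Neocognitron/LeNet code).
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a small kernel."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling; any remainder rows/columns are cropped."""
    H, W = feature_map.shape
    H, W = H - H % size, W - W % size
    fm = feature_map[:H, :W]
    return fm.reshape(H // size, size, W // size, size).max(axis=(1, 3))

image = np.random.rand(8, 8)                   # toy "retina"
kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = max_pool(conv2d(image, kernel))
print(features.shape)                          # (3, 3)
```

Stacking such convolution and pooling stages, with the kernels learned from data, is the structure Fukushima’s Neocognitron introduced and LeCun’s convolutional networks made trainable end to end.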

23 of 30

History of Machine Learning

Figure credit: Prof. René Vidal

24 of 30

Modern Evolution of Deep Neural Networks

Credit: Prof. Yi Ma

25 of 30

The Future: Black-Box to White-Box AI

Modern AI systems are still heuristic black boxes, which blocks explanation, safety guarantees, and rapid iteration. This is particularly concerning as such systems become widespread in high-stakes applications that demand interpretability.

What to Learn?

  • Learn only what is predictable
  • Reveal low-dimensional structure
  • Maximize true information gain

How to Learn?

  • Unroll iterative optimization (see the sketch after this list)
  • Compress to expose structure
  • Each layer = one interpretable operation

Why Correct?

  • Encode ↔ Decode
  • Continual self-validation loops and learning
  • Closed-loop feedback for error correction
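
As one concrete reading of “unroll iterative optimization,” here is a minimal sketch in which each “layer” performs a single ISTA step of a sparse-coding objective; the dictionary, step size, threshold, and dimensions are illustrative assumptions, not taken from the slide:

```python
# Minimal unrolled-optimization sketch: each "layer" = one ISTA step for
#   min_z 0.5 * ||x - D z||^2 + lam * ||z||_1
# The dictionary D, step size, and threshold are illustrative.
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unrolled_ista(x, D, n_layers=10, lam=0.1):
    """Run a fixed number of ISTA steps; every step is one interpretable 'layer'."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2        # 1 / Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(n_layers):
        z = soft_threshold(z + step * D.T @ (x - D @ z), step * lam)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))                 # overcomplete dictionary (illustrative)
z_true = rng.random(32) * (rng.random(32) < 0.1)  # sparse ground truth
x = D @ z_true
z_hat = unrolled_ista(x, D)
print(np.count_nonzero(np.abs(z_hat) > 1e-6), "nonzero coefficients recovered")
```

In learned variants of this idea (LISTA-style networks), the matrices and thresholds of each unrolled step become trainable parameters, so every layer remains one explicit optimization update rather than an opaque heuristic.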

26 of 30

What is Intelligence?

Definition: An intelligent system is one that has the mechanisms for self-correcting and self-improving its existing knowledge (or information).

If your “speed of learning” (intelligence) stays high for a long time, your “distance traveled in learning” (knowledge) piles up. Conversely, at any moment the slope of your knowledge-versus-time curve, i.e. how steeply it is climbing, is your intelligence.
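
Stated as a worked relation (the notation is ours, not from the slide), with K(t) the accumulated knowledge and I(t) the intelligence at time t:

```latex
% Our notation, not from the slide: K(t) = accumulated knowledge, I(t) = intelligence.
\[
  I(t) \;=\; \frac{dK(t)}{dt},
  \qquad
  K(T) \;=\; K(0) + \int_{0}^{T} I(t)\,dt .
\]
% Intelligence is the instantaneous rate of learning; knowledge is its accumulation over time.
```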

A system without self-correction or self-improvement—no matter how large—lacks intelligence.

VS

Who has intelligence? Who has knowledge?

27 of 30

Winner

28 of 30

The Era of Heroes

1940s: “Animal” Intelligence

  • Signal processing & information representation
  • Prediction and error-correction feedback loops
  • Optimal control of dynamical systems
  • Game-theoretic reasoning for strategic behavior

1956: Unique to Humans

  • Abstraction of concepts and symbols
  • Formal logic and deductive inference
  • Causal reasoning about events
  • Hypothesis generation and experimental testing
  • General problem-solving frameworks

Today’s AI: Animal ∨ Human?

  • Denoising and representation learning
  • Data compression
  • Object recognition
  • Generative models for images, audio, and text
  • Text generation (LLMs)
  • Reinforcement learning for sequential decision-making
  • Many, many more…
