1 of 31

CC2: Computer System Architecture (B.Sc. Honours) First Semester

Amit Ghosh Roy

Assistant Professor

Department of Computer Science

Sukanta Mahavidyalaya

Date: 15/03/2022

2 of 31

Course Topics (Tentative)

  • Instruction set architecture
    • MIPS
  • Arithmetic operations & data
  • System performance
  • Processor
  • Pipelining to improve performance
  • Memory

3 of 31

Focus of the Discussion

  • How computers work
    • MIPS instruction set architecture
    • The implementation of MIPS instruction set architecture – MIPS processor design
  • Issues affecting modern processors
    • Pipelining – processor performance improvement
    • Cache – memory system, I/O systems

4 of 31

Why Learn Computer Architecture?

  • You want to call yourself a “computer scientist”
    • Computer architecture impacts every other aspect of computer science
  • You need to make a purchasing decision or offer “expert” advice
  • You want to build software people use – sell many, many copies (and that needs performance)
    • Both hardware and software affect performance
      • Algorithm determines number of source-level statements
      • Language/compiler/architecture determine machine instructions (Chapter 2 and 3)
      • Processor/memory determine how fast instructions are executed (Chapter 5, 6, and 7)
      • Assessing and understanding performance (Chapter 4)

5 of 31

Computer Systems

  • Software
    • Application software – Word Processors, Email, Internet Browsers, Games
    • Systems software – Compilers, Operating Systems
  • Hardware
    • CPU
    • Memory
    • I/O devices (mouse, keyboard, display, disks, networks,……..)

6 of 31

Software

[Figure: layers of software. Applications software (e.g. LaTeX) sits on top of systems software; systems software includes operating systems (virtual memory, file system, I/O device drivers), assemblers (as), and compilers (gcc).]

7 of 31

How Do the Pieces Fit Together?

  • Coordination of many levels of abstraction
  • Under a rapidly changing set of forces
  • Design, measurement, and evaluation

[Figure: levels of abstraction – application, compiler, and operating system above the instruction set architecture; firmware, datapath & control, digital design, and circuit design below it; with the memory system and I/O system alongside the instruction set processor.]

8 of 31

Instruction Set Architecture

  • One of the most important abstractions is ISA
    • A critical interface between HW and SW
    • Example: MIPS
    • Desired properties
      • Convenience (from software side)
      • Efficiency (from hardware side)

SMV_AGR_CC2_CO

[Figure: the instruction set as the interface layer between software and hardware.]

D.Barbará

9 of 31

What is Computer Architecture?

  • Programmer’s view: a pleasant environment
  • Operating system’s view: a set of resources (hw & sw)
  • System architecture view: a set of components
  • Compiler’s view: an instruction set architecture with OS help
  • Microprocessor architecture view: a set of functional units
  • VLSI designer’s view: a set of transistors implementing logic
  • Mechanical engineer’s view: a heater!


10 of 31

What is Computer Architecture?

  • Patterson & Hennessy: Computer architecture = Instruction set architecture + Machine organization + Hardware
  • For this course, computer architecture mainly refers to ISA (Instruction Set Architecture)
    • Programmer-visible, serves as the boundary between the software and hardware
    • Modern ISA examples: MIPS, SPARC, PowerPC, DEC Alpha


11 of 31

Organization and Hardware

  • Organization: high-level aspects of a computer’s design
    • Principal components: memory, CPU, I/O, …
    • How components are interconnected
    • How information flows between components
    • E.g. AMD Opteron 64 and Intel Pentium 4: same ISA but different organizations
  • Hardware: detailed logic design and the packaging technology of a computer
    • E.g. Pentium 4 and Mobile Pentium 4: nearly identical organizations but different hardware details


12 of 31

Types of computers and their applications

  • Desktop
    • Run third-party software
    • Office to home applications
    • 30 years old
  • Servers
    • Modern version of what used to be called mainframes, minicomputers and supercomputers
    • Large workloads
    • Built using the same technology as desktops, but with higher capacity
      • Expandable
      • Scalable
      • Reliable
    • Large spectrum: from low-end (file storage, small businesses) to supercomputers (high end scientific and engineering applications)
      • Gigabytes to Terabytes to Petabytes of storage
    • Examples: file servers, web servers, database servers

13 of 31

Types of computers…

  • Embedded
    • Microprocessors everywhere! (washing machines, cell phones, automobiles, video games)
    • Run one or a few applications
    • Specialized hardware integrated with the application (not your common processor)
    • Usually stringent limitations (battery power)
    • Low tolerance for failure (you don’t want your airplane’s avionics to fail!)
    • Becoming ubiquitous
    • Engineered using processor cores
      • The core allows the engineer to integrate other functions into the processor for fabrication on the same chip
      • Using hardware description languages: Verilog, VHDL

14 of 31

Where is the Market?

[Figure: bar chart of annual computer sales, in millions of computers.]

15 of 31

In this class you will learn

  • How programs written in a high-level language (e.g., Java) translate into the language of the hardware and how the hardware executes them.
  • The interface between software and hardware and how software instructs hardware to perform the needed functions.
  • The factors that determine the performance of a program
  • The techniques that hardware designers employ to improve performance.

As a consequence, you will understand what features may make one computer design better than another for a particular application.

16 of 31

High-level to Machine Language

High-level language program

(in C)

Assembly language program

(for MIPS)

Binary machine language program

(for MIPS)

Compiler

Assembler

17 of 31

Evolution…

  • In the beginning there were only bits… and people spent countless hours trying to program in machine language

01100011001 011001110100

  • Finally, before everybody went insane, the assembler was invented: write in mnemonics called assembly language and let the assembler translate (a one-to-one translation)

Add A,B

  • This wasn’t for everybody, obviously… (imagine writing a modern application entirely in assembly), so high-level languages were born, and with them compilers to translate to assembly (a one-to-many translation: each high-level statement becomes several machine instructions)

C= A*(SQRT(B)+3.0)

18 of 31

THE BIG IDEA

  • Levels of abstraction: each layer provides its own (simplified) view and hides the details of the next.

19 of 31

Instruction Set Architecture (ISA)

  • ISA: An abstract interface between the hardware and the lowest level software of a machine that encompasses all the information necessary to write a machine language program that will run correctly, including instructions, registers, memory access, I/O, and so on.

“... the attributes of a [computing] system as seen by the programmer, i.e., the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation.” – Amdahl, Blaauw, and Brooks, 1964

    • Enables implementations of varying cost and performance to run identical software
  • ABI (application binary interface): The user portion of the instruction set plus the operating system interfaces used by application programmers. Defines a standard for binary portability across computers.

20 of 31

ISA Type Sales

[Figure: bar chart of processor sales by ISA type, in millions of processors – approximate values; see text for correct values.]

21 of 31

Organization of a computer

22 of 31

Anatomy of Computer

[Figure: a desktop personal computer opened up into the 5 classic components: input, output, memory, datapath, and control.]

  • Datapath (“brawn”): performs the arithmetic operations
  • Control (“brain”): guides the operation of the other components based on the user’s instructions
  • Memory: where programs and data live when running
  • Input devices (keyboard, mouse) and output devices (display, printer)
  • Disk: where programs and data live when not running

23 of 31

PC Motherboard Closeup

24 of 31

Inside the Pentium 4

25 of 31

Moore’s Law

  • In 1965, Gordon Moore predicted that the number of transistors that can be integrated on a die would double every 18 to 24 months (i.e., grow exponentially with time).

  • Amazingly visionary – the million-transistor/chip barrier was crossed in the 1980s.
    • 2,300 transistors, 1 MHz clock (Intel 4004) – 1971
    • 16 million transistors (UltraSPARC III)
    • 42 million transistors, 2 GHz clock (Intel Xeon) – 2001
    • 55 million transistors, 3 GHz, 130 nm technology, 250 mm² die (Intel Pentium 4) – 2004
    • 140 million transistors (HP PA-8500)

26 of 31

Processor Performance Increase

[Figure: processor performance over time, charting the SUN-4/260, MIPS M/120, MIPS M2000, IBM RS6000, HP 9000/750, DEC AXP/500, IBM POWER 100, DEC Alpha 4/266, DEC Alpha 5/300, DEC Alpha 5/500, DEC Alpha 21264/600, DEC Alpha 21264A/667, Intel Xeon/2000, and Intel Pentium 4/3000.]

27 of 31

Trend: Microprocessor Capacity

CMOS improvements:

  • Die size: 2X every 3 years
  • Line width: halved every 7 years

[Figure: transistor counts per chip over time, tracking Moore’s Law – Sparc Ultra: 5.2 million; Pentium Pro: 5.5 million; PowerPC 620: 6.9 million; Alpha 21164: 9.3 million; Alpha 21264: 15 million; Pentium 4: 55 million; Itanium II: 241 million.]

28 of 31

Moore’s Law

  • “Cramming More Components onto Integrated Circuits”
    • Gordon Moore, Electronics, 1965
  • # of transistors per cost-effective integrated circuit doubles every 18 months

“Transistor capacity doubles every 18-24 months”

Speed 2x / 1.5 years (since ’85); 100X performance in the last decade

29 of 31

Trend: Microprocessor Performance

30 of 31

Memory

  • Dynamic Random Access Memory (DRAM)
    • The choice for main memory
    • Volatile (contents go away when power is lost)
    • Fast
    • Relatively small
    • DRAM capacity: 2x / 2 years (since ’96); 64x size improvement in the last decade
  • Static Random Access Memory (SRAM)
    • The choice for cache
    • Much faster than DRAM, but less dense and more costly
  • Magnetic disks
    • The choice for secondary memory
    • Non-volatile
    • Slower
    • Relatively large
    • Capacity: 2x / 1 year (since ’97); 250X size in the last decade
  • Solid state (Flash) memory
    • The choice for embedded computers
    • Non-volatile

31 of 31

Memory

  • Optical disks
    • Removable, therefore effectively unlimited capacity
    • Slower than disks
  • Magnetic tape
    • Even slower
    • Sequential (non-random) access
    • The choice for archival storage