SRL01:
Performance Factors
There are many factors that affect the speed a computer can run at. The main ones to remember are:
Clock speed
The control unit (CU) uses a clock to time each cycle of fetching, decoding and executing an instruction. Clock speed is measured in hertz (Hz), with 1 GHz being 1 billion cycles per second.
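As a quick worked example (the 3.2 GHz figure is just for illustration), this short Python sketch turns a clock speed in GHz into cycles per second and the time one cycle takes:

```python
# A minimal sketch: converting a clock speed in GHz to cycles per second
# and the time taken by a single cycle. The 3.2 GHz value is made up.

clock_speed_ghz = 3.2                             # hypothetical processor
cycles_per_second = clock_speed_ghz * 1_000_000_000
time_per_cycle_ns = 1 / clock_speed_ghz           # 1 GHz -> 1 ns per cycle

print(f"{cycles_per_second:,.0f} cycles per second")
print(f"{time_per_cycle_ns:.3f} ns per cycle")
```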
We can use a method called overclocking to push the clock past its standard speed and therefore complete more cycles per second. The downside of overclocking is that it produces more heat, so a good cooling system is needed or it can have a negative effect.
Image comparing standard and overclocked clock speeds
Number of cores
Most modern computers have dual-core or quad-core processors, meaning 2 or 4 cores. Sadly this does not mean 2x or 4x the speed.
You only get the extra speed out of the processor if the program is written to make use of the extra cores efficiently. Also, different cores might be waiting on the same instruction, which can also reduce the efficiency.
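As a rough illustration of why the program has to be written for multiple cores, this Python sketch (job sizes and core count are made up) runs the same CPU-bound work on one core and then across a pool of worker processes:

```python
# A minimal sketch: the same work run on one core, then spread across
# several cores. The speed-up only appears because the program was
# written to hand the jobs to a pool of worker processes.
from multiprocessing import Pool

def count_up(n):
    """A deliberately CPU-bound task."""
    total = 0
    for i in range(n):
        total += i
    return total

if __name__ == "__main__":
    jobs = [10_000_000] * 4                 # four identical jobs

    # Single core: the jobs run one after another.
    single = [count_up(n) for n in jobs]

    # Multiple cores: the jobs run in parallel across 4 worker processes.
    with Pool(processes=4) as pool:
        multi = pool.map(count_up, jobs)

    print(single == multi)  # same results, potentially less wall-clock time
```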
Image of a quad-core processor
Increasing RAM
With more RAM, more of the program's instructions can be loaded, and there is less need to keep swapping data in and out of the swap file on the hard disk drive (virtual memory).
How much extra RAM we can add to a device depends on the operating system, the motherboard and the processor.
On a standard device, the practical limit is often only doubling the RAM it already has.
Cache
Cache is a very small amount of memory stored in the processor. The memory data register can hold only one value, so if a value that was used a couple of cycles previously is needed again, it must be fetched again. By introducing cache we can store previous values; the cache is then checked first before fetching from RAM. The larger the cache, the more data can be stored.
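The "check the cache before RAM" idea can be sketched in a few lines of Python; the cache size, addresses and eviction rule here are simplified assumptions, not how real hardware behaves:

```python
# A minimal sketch: check a small cache before fetching a value from
# slower main memory (RAM). Sizes and values are illustrative only.

ram = {addr: addr * 2 for addr in range(1000)}    # stand-in for main memory
cache = {}                                        # small, fast store in the processor
CACHE_SIZE = 4

def read(address):
    if address in cache:                          # cache hit: no RAM access needed
        return cache[address]
    value = ram[address]                          # cache miss: fetch from RAM
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))              # evict the oldest entry
    cache[address] = value                        # remember it for next time
    return value

for addr in [5, 6, 5, 5, 7]:                      # repeated addresses hit the cache
    read(addr)
```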
Bus size
Buses on the processor (address, data, control) carry information between the CPU and RAM. A processor can have a small number of parallel wires (4 wires = 4 bits) or a very large number of parallel wires (64 wires = 64 bits).
A wider bus means we can send larger addresses and read larger amounts of data between the CPU and RAM.
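As a rough worked example (bus widths chosen for illustration), this sketch shows how the number of address wires sets how many memory locations can be addressed, and how the data bus width sets how many bits move in one transfer:

```python
# A minimal sketch: with n parallel address wires, 2 ** n different
# locations can be addressed. The widths below are just examples.

for address_bus_width in (4, 16, 32, 64):
    locations = 2 ** address_bus_width
    print(f"{address_bus_width} address wires -> {locations:,} addressable locations")

# The data bus works the same way: more wires, more bits per transfer.
data_bus_width = 64
print(f"{data_bus_width} data wires -> {data_bus_width} bits read or written at once")
```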
Pipelining
Pipelining is the process of fetching the next instruction while the first instruction is being decoded. This means that once the first instruction is executed, the second has already been decoded and the third has already been fetched.
Step | Fetch | Decode | Execute |
Step 1 | Instruction 1 | | |
Step 2 | Instruction 2 | Instruction 1 | |
Step 3 | Instruction 3 | Instruction 2 | Instruction 1 |
Step 4 | Instruction 4 | Instruction 3 | Instruction 2 |
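The same stepping can be written as a small Python simulation (a simplification that ignores timing and hazards), which reproduces the table above:

```python
# A minimal sketch: each step the pipeline fetches a new instruction
# while the earlier ones move on to decode and execute.

instructions = ["Instruction 1", "Instruction 2", "Instruction 3", "Instruction 4"]
fetch = decode = execute = None

print(f"{'Step':8} {'Fetch':15} {'Decode':15} {'Execute':15}")
for step, instruction in enumerate(instructions, start=1):
    execute = decode          # last step's decoded instruction is executed
    decode = fetch            # last step's fetched instruction is decoded
    fetch = instruction       # a new instruction is fetched
    print(f"Step {step:<3} {fetch:15} {decode or '':15} {execute or '':15}")
```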
Pipelining (drawbacks)
The processor only knows exactly which instruction to fetch next at the end of a cycle (after executing). This cannot be done in pipelining, so the next instruction to be fetched is instead predicted. If the wrong instruction is fetched, it has to be thrown away and the correct one fetched instead (this is called flushing the pipe). If this happens too often, the benefits of pipelining are lost.
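A rough sketch of flushing (the mini-program and its addresses are made up): the fetch stage guesses that execution carries straight on to the next instruction, so a taken jump forces a flush:

```python
# A minimal sketch of flushing the pipe: the fetch stage assumes the next
# instruction follows straight on, so a taken jump means the work already
# fetched must be thrown away and the correct instruction fetched instead.

program = {0: "LOAD", 1: "ADD", 2: "JUMP 5", 3: "STORE", 4: "HALT",
           5: "SUBTRACT", 6: "HALT"}

pc = 0
flushes = 0
while program[pc] != "HALT":
    predicted_next = pc + 1                      # guess: carry straight on
    if program[pc].startswith("JUMP"):
        actual_next = int(program[pc].split()[1])
    else:
        actual_next = pc + 1
    if actual_next != predicted_next:
        flushes += 1                             # wrong guess: flush the pipe
    pc = actual_next

print(f"Pipeline flushed {flushes} time(s)")
```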