1 of 21

Database Performance Metrics

Prof. Manoj Kumar Padhi

2 of 21

Database Performance Metrics

  • Database Performance Metrics are the measurements used to evaluate how well a database system is performing. They help determine the speed, efficiency, and reliability of database operations such as querying, updating, or retrieving data.
  • These metrics allow database administrators to identify bottlenecks, improve performance, and ensure the system can handle user workload smoothly.
  • Database performance metrics tell us how fast, how efficiently, and how reliably a database is working.

3 of 21

Common Performance Metrics

1. Throughput

  • Throughput is a performance metric that measures how many operations a database can process in a given amount of time.
  • It tells us the capacity of the database to handle work.
  • Throughput is the number of transactions, queries, or requests processed per second by the database system.
  • Common units:
    • TPS → Transactions Per Second
    • QPS → Queries Per Second
    • OPS → Operations Per Second
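A minimal sketch of how throughput can be measured in practice: run many small queries against an in-memory SQLite database and divide the count by the elapsed wall-clock time. The table and row counts are made up for illustration.

```python
import sqlite3
import time

# Hypothetical micro-benchmark: small lookups against an in-memory table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1000)])

n_queries = 5000
start = time.perf_counter()
for i in range(n_queries):
    conn.execute("SELECT name FROM items WHERE id = ?",
                 (i % 1000 + 1,)).fetchone()
elapsed = time.perf_counter() - start

qps = n_queries / elapsed   # throughput in queries per second (QPS)
print(f"{qps:.0f} QPS")
```

The same pattern gives TPS if each loop iteration is a whole transaction instead of a single query.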

4 of 21

Why Throughput is Important

Throughput helps us understand:

    • How much workload the database can handle
    • Whether the system can support many users
    • If the database performs well under heavy load
    • When we need to scale (increase resources)

Higher throughput = better performance.

5 of 21

2. Latency / Response Time

  • Latency (or Response Time) is the total time taken by the database to execute a query or complete a transaction and return the result to the user.
  • It tells us how fast the database responds.
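Latency is usually reported as an average and a high percentile rather than a single number. A small sketch, timing each query individually (table name and query counts are invented for the example):

```python
import sqlite3
import statistics
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

# Each query's own elapsed time is its latency (response time).
latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    conn.execute("SELECT COUNT(*) FROM t").fetchone()
    latencies_ms.append((time.perf_counter() - start) * 1000)

print(f"avg latency: {statistics.mean(latencies_ms):.3f} ms")
print(f"p95 latency: {sorted(latencies_ms)[int(0.95 * len(latencies_ms))]:.3f} ms")
```

The p95 figure matters because a few slow queries can hide behind a good average.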

6 of 21

3. Query Execution Time

  • Query Execution Time is the amount of time the database takes to run a single SQL query from start to finish.
  • This includes:
    • Parsing the query
    • Planning and optimizing
    • Fetching data from memory or disk
    • Returning the result
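Measured from the client side, execution time covers the whole sequence above: parse, plan, fetch, and return. A sketch with a hypothetical `orders` table:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(10_000)])

# Time one query from submission to fully fetched result.
start = time.perf_counter()
rows = conn.execute("SELECT SUM(amount) FROM orders WHERE amount > 100").fetchall()
exec_time_ms = (time.perf_counter() - start) * 1000
print(f"query returned {len(rows)} row(s) in {exec_time_ms:.3f} ms")
```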

7 of 21

4. CPU Usage

  • CPU Usage refers to the percentage of the system’s CPU (processor) being used by the database to execute queries, process transactions, and handle background tasks.
  • It shows how much “processing power” the database is consuming.
  • Measured in:
    • Percentage (%) of total CPU capacity. Example: 40%, 75%, 95%
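A rough sketch of how such a percentage can be derived: compare CPU time consumed against wall-clock time elapsed. The arithmetic loop below merely stands in for query processing; real databases expose CPU usage through OS or monitoring tools.

```python
import time

wall_start = time.perf_counter()   # wall-clock time
cpu_start = time.process_time()    # CPU time used by this process

total = sum(i * i for i in range(2_000_000))   # stand-in for query work

cpu_used = time.process_time() - cpu_start
wall = time.perf_counter() - wall_start

# CPU usage ~= CPU time consumed / wall time elapsed (for one core)
print(f"CPU usage ≈ {100 * cpu_used / wall:.0f}% of one core")
```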

8 of 21

Why CPU Usage is Important

  • High CPU usage can indicate performance problems such as:
    • Too many complex queries running at the same time
    • Missing indexes causing full table scans
    • Inefficient SQL queries
    • High concurrency (many users accessing at once)

9 of 21

5. Memory Usage

  • Memory Usage refers to the amount of RAM (Random Access Memory) used by the database to store and process data temporarily. This includes:
    • Buffer pool / cache
    • Query execution memory
    • Sorting and join memory
    • Temporary tables
  • Measured in:
    • Megabytes (MB) or Gigabytes (GB)
    • Also shown as percentage (%) of total memory
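A small sketch of measuring working memory: Python's `tracemalloc` tracks allocations, which stands in here for the sort/join memory a query consumes (a real database would report this via its own monitoring views).

```python
import tracemalloc

tracemalloc.start()

# Stand-in for query execution memory: build and sort a large result set,
# roughly what an ORDER BY over many rows does.
rows = [("row", i) for i in range(100_000)]
rows.sort(key=lambda r: -r[1])

current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current: {current / 1e6:.1f} MB, peak: {peak / 1e6:.1f} MB")
```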

10 of 21

Why Memory Usage is Important

  • Memory is crucial for database performance because:
  • ✅ High memory availability = faster performance. If enough RAM is available, the database can:
    • Cache frequently used data
    • Avoid disk reads
    • Execute queries faster
  • ❌ Low memory = slow queries

11 of 21

6. Disk I/O (Input/Output Operations)

  • Disk I/O refers to the number of read and write operations performed on the storage disk by the database.
  • It shows how often the database accesses the disk to:
    • Read data (SELECT queries)
    • Write data (INSERT, UPDATE, DELETE)
    • Store logs and temporary files
  • Measured in:
    • IOPS → Input/Output operations per second
    • MB/s or GB/s → Data transfer rate
    • Disk read/write latency (ms)
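The IOPS figure is simple arithmetic over counters sampled from the system. A minimal sketch (the counts and interval are invented example numbers):

```python
def iops(read_ops: int, write_ops: int, interval_s: float) -> float:
    """IOPS = total read + write operations divided by the sampling interval."""
    return (read_ops + write_ops) / interval_s

# e.g. 12,000 reads and 3,000 writes observed over a 10-second window
print(iops(12_000, 3_000, 10.0))  # 1500.0
```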

12 of 21

Why Disk I/O is Important

  • Disk I/O is one of the main reasons a database becomes slow, because:
  • ❌ Disk is much slower than RAM. If the database cannot find data in memory (cache), it reads from disk, which takes more time.
  • ❌ High Disk I/O = slow performance:
    • Longer query execution time
    • Increased latency
    • System bottlenecks under heavy load

13 of 21

7. Cache Hit Ratio

  • Cache Hit Ratio is the percentage of database requests that are served directly from memory (cache) instead of reading from the disk.
  • It shows how effectively the database is using its memory.

  • What is a Cache Hit?
  • Cache Hit: Data is found in memory → very fast
  • Cache Miss: Data is NOT in memory, so it must be read from disk → slower
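The ratio itself is hits divided by total requests. A minimal sketch (the hit and miss counts are invented example numbers):

```python
def cache_hit_ratio(hits: int, misses: int) -> float:
    """Cache hit ratio = hits / (hits + misses), as a percentage."""
    total = hits + misses
    return 100.0 * hits / total if total else 0.0

# e.g. 950 requests served from memory, 50 forced to read from disk
print(cache_hit_ratio(950, 50))   # 95.0
```

A ratio near 100% means the working set fits in memory; a falling ratio often signals that the cache is too small for the workload.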

14 of 21

8. Locking & Concurrency Metrics

  • Locking & Concurrency Metrics measure how well a database handles multiple users or processes accessing data at the same time. They show how often locks occur, how long queries wait, and whether conflicts (deadlocks) happen.

15 of 21

Key Terms

1. Lock

A lock is placed on a row/table to prevent conflicting operations. Example: When one transaction updates a row, other users may have to wait.

2. Lock Wait Time

The amount of time a query waits because another query is holding a lock.

3. Deadlock

A situation where two or more transactions wait for each other forever, and neither can continue.
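A minimal sketch of a lock conflict and lock wait time, using two SQLite connections to the same file (file and table names are made up; SQLite uses database-level write locks, so the second writer waits up to its timeout and then fails):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
writer.execute("INSERT INTO accounts VALUES (1, 100.0)")
writer.commit()

writer.execute("BEGIN IMMEDIATE")             # writer takes the write lock
writer.execute("UPDATE accounts SET balance = 90 WHERE id = 1")

blocked = sqlite3.connect(path, timeout=0.5)  # lock wait time: at most 0.5 s
conflict = False
start = time.perf_counter()
try:
    blocked.execute("BEGIN IMMEDIATE")        # conflicts with the held lock
except sqlite3.OperationalError as exc:
    conflict = True
    print(f"lock conflict after {time.perf_counter() - start:.2f} s: {exc}")
writer.commit()                               # commit releases the lock
```

Server databases with row-level locking behave less drastically, but the metric is the same: how long queries spend waiting on locks held by others.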

16 of 21

9. Network Latency

  • Network Latency is the time it takes for data to travel between the client (user/application) and the database server over a network.
  • It measures how fast the database can send and receive data across the network.

  • Measured in:
    • Milliseconds (ms). Lower network latency = faster communication.
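A minimal sketch of the measurement idea: time a request/response round trip. A local socket pair stands in for the client-to-database-server connection (a real deployment would measure across the actual network, where milliseconds dominate):

```python
import socket
import time

client, server = socket.socketpair()

start = time.perf_counter()
client.sendall(b"ping")     # "client" sends a request
server.recv(4)              # "server" receives it
server.sendall(b"pong")     # server replies
client.recv(4)              # client receives the response
rtt_ms = (time.perf_counter() - start) * 1000

print(f"round-trip latency: {rtt_ms:.3f} ms")
client.close()
server.close()
```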

17 of 21

Why Network Latency Matters

When the database server and application are far apart or when the network is slow:

    • Queries take longer
    • Transactions become slower
    • User experience becomes poor
    • Overall application speed decreases

Network latency especially affects:

    • Cloud databases
    • Distributed databases
    • Remote connections

18 of 21

10. Index Usage Metrics

  • Index Usage Metrics measure how effectively a database is using its indexes while executing queries. They show whether queries are using indexes or performing slow full table scans.
  • Indexes improve performance by allowing the database to find data faster—similar to an index in a book.
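One way to check index usage is to ask the database for its query plan. A sketch using SQLite's EXPLAIN QUERY PLAN (table and index names are invented; other databases use EXPLAIN with their own output format):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"u{i}@example.com",) for i in range(1000)])

query = "SELECT id FROM users WHERE email = ?"

# Without an index on email: the plan reports a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query,
                      ("u500@example.com",)).fetchall()[0][3]
print(before)   # e.g. a "SCAN" over the users table

conn.execute("CREATE INDEX idx_users_email ON users(email)")

# With the index: the plan switches to an index search.
after = conn.execute("EXPLAIN QUERY PLAN " + query,
                     ("u500@example.com",)).fetchall()[0][3]
print(after)    # e.g. a "SEARCH ... USING ... INDEX idx_users_email"
```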

19 of 21

Why Index Usage is Important

Good index usage:

    • Speeds up SELECT queries
    • Reduces CPU and disk I/O
    • Improves throughput
    • Lowers query execution time

Poor index usage:

    • Causes slow queries
    • Increases disk reads
    • Leads to full table scans
    • Wastes system resources

20 of 21

11. Error Rate

  • Error Rate is the measurement of how often database operations fail. It shows the percentage or number of:
    • Failed queries
    • Failed transactions
    • Connection errors
    • Timeouts
    • Rollbacks
  • A high error rate indicates problems in the database or application.
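As a percentage, the error rate is failed operations over total operations. A minimal sketch (the counts are invented example numbers):

```python
def error_rate(failed: int, total: int) -> float:
    """Error rate = failed operations / total operations, as a percentage."""
    return 100.0 * failed / total if total else 0.0

# e.g. 12 failed queries out of 4,000 executed
print(f"{error_rate(12, 4000):.2f}%")   # 0.30%
```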

21 of 21

Why Error Rate is Important

Monitoring error rate helps to:

    • Identify system instability
    • Detect faulty queries
    • Find connectivity problems
    • Catch timeouts and crashes
    • Ensure reliability and availability

A system with high error rate:

    • Slows down
    • Loses data or transactions
    • Gives poor user experience
    • May require immediate maintenance