1 of 32

SQLT

Rudaiba Adnin

2 of 32

Contents

  • Introduction to SQLT
  • SQLT Installation
  • SQLT Report
  • AWR Report

3 of 32

SQLT

  • Tool to diagnose SQL statements that are performing poorly
  • Collects SQL tuning diagnostics for one SQL statement
  • Runs over a hundred health-checks (HC) around the SQL statement
  • Invokes the SQL Tuning Advisor (STA)
  • Includes Active Session History (ASH) and SQL Monitor reports
  • Automatic SQL tuning test case (TC) extraction
  • Helps expedite the SQL tuning process

4 of 32

SQLT

  • SQLT is a set of packages and scripts that produces HTML-formatted reports, some SQL scripts, and some text files.
  • The entire collection of information is packaged in a zip file.
  • There are just over a dozen packages and procedures (called “methods”) in SQLT.
  • Oracle support engineers handle a huge number of tuning problems on a daily basis; the problem is that the old methods of linear analysis are just too slow.
  • We need to see the big picture fast so we can zoom in on the detail and tell the customer what’s wrong.
  • As a result, Carlos Sierra, a support engineer at the time, created SQLT.

5 of 32

SQLT Installation

  • Download the SQLT zip file appropriate for the specific environment.
  • Unzip the zip file to a suitable location.
  • Navigate to the “install” directory under the unzipped area (in my case it is /home/oracle/sqlt/install, but locations can differ).
  • Connect as SYS, for example: sqlplus / as sysdba.
  • Make sure the database is running.
  • Run the sqcreate.sql script.
  • Select the default for the first option.
  • Define the connection identifier (e.g., @orcl): @<hostname>:<port>/<sid> (required for a remote installation).
  • Enter and confirm the password for SQLTXPLAIN.
  • Select the tablespace where SQLTXPLAIN will keep its packages and data (in my case, USERS).

6 of 32

SQLT Installation

  • Select the temporary tablespace for the SQLTXPLAIN user (in my case, TEMP).
  • Then enter the username of the database user who will use the SQLT packages to fix tuning problems. Typically this is the schema that runs the problematic SQL (in my case, HR).
  • Then enter “T”, “D”, or “N”. This reflects your license level for the Tuning and Diagnostics packs.
  • The last message you see is “SQCREATE completed. Installation completed successfully.”
  • Parameters can be changed anytime after installation:

SQL> EXEC sqltxadmin.sqlt$a.set_param('<param_name>', '<param_value>');

7 of 32

SQLTXTRACT vs SQLTXECUTE

SQLTXTRACT is the easiest SQLT report creation method because it does not require executing the SQL statement at report generation time.

SQLTXECUTE, on the other hand, executes the SQL statement and thus has better runtime information and access to the actual rows returned.

SQLTXECUTE gathers more information, but it is not always possible to use this method, perhaps because we are in a production environment, or perhaps because the SQL statement currently takes three days to run, which is why we are investigating it in the first place. Another reason not to run SQLTXECUTE is that DML statements (insert, update, delete, or merge) will change data when executed.

8 of 32

SQLT-Report Generation

  • Now change directory to /home/oracle/sqlt/run. From here, log in to SQL*Plus as the target user.
  • Then enter the following SQL (this is the statement we will tune):
  • SQL> select report_name from report_def;
  • Then get the SQL_ID value with the following SQL:
  • SQL> select sql_id from v$sqlarea where sql_text like 'select report_name from report_def%';
  • In my case the SQL_ID was fmnx9mnrgdy2x.
  • Now we execute our first SQLT method, SQLTXTRACT, from the target schema (in this case HR) with the following command:
  • SQL> @sqltxtract fmnx9mnrgdy2x

9 of 32

SQLT-Report Generation

  • Enter the password for SQLTXPLAIN (which you entered during the installation).
  • The last message we will see, if all goes well, is “SQLTXTRACT completed”.
  • Now create a zip directory under the run directory, copy the generated zip file into it, and unzip it.
  • Finally, from a browser, navigate to and open the file named sqlt_s<nnnnn>_main.html.

10 of 32

SQLT-Report Generation

11 of 32

SQLT Main Report

From the header page of SQLT main report we get an idea of what is available and how SQLT works, in terms of its navigation. The main report covers all sorts of aspects of the system. There’s a lot of information in the various sections. Related hyperlinks are grouped together.

SELECT DISTINCT
    l.first_name || ' ' || l.last_name "Full Name",
    p.name "Profile Name",
    a.name "Super Role",
    c.name,
    cl.first_name || ' ' || cl.last_name "Client Name"
FROM prod7.profile p
JOIN prod7.acl_role ar ON p.acl_id = ar.acl_id
JOIN prod7.acl a ON ar.acl_id = a.id
JOIN prod7.login l ON l.id = p.login_id
JOIN prod7.profile_caseloads pc ON pc.profile_id = p.id
JOIN prod7.caseload c ON c.id = pc.caseload_id
JOIN prod7.caseload_clients cc ON c.id = cc.caseload_id
JOIN prod7.client cl ON cc.client_id = cl.id
WHERE l.prov_id = 1
ORDER BY l.first_name || ' ' || l.last_name

12 of 32

SQLT Main Report

So where do we go from the main page?

If we are absolutely convinced that the execution plan is wrong, we might go straight to “Execution Plans” and look at the history of the execution plans.

Suppose we think there is a general slowdown on the system. Then we might want to look at the “Observations” section of the report.

Maybe something happened to the statistics, so we will certainly need to look at the “Statistics” section of the report under “Tables.”

The idea is to build up a picture of the SQL statement, understand the statistics related to the query, understand the cost-based optimizer (CBO) environment, and try to get into its “head.” Why did it do what it did? Why does its plan not match what we think it ought to do? The SQLT report is the optimizer’s explanation of why it decided to do what it did.

13 of 32

SQLT Main Report

14 of 32

SQLT Main Report

For example, after clicking on Indexes (the last link under the Tables heading), we would see the page shown in the figure. SQLT collected all the information it could find and cross-referenced it.

Under the Indexes column of the Indexes heading, the numbers are hyperlinked. We click on the 6 in the CLIENT$ row.

15 of 32

SQLT Main Report

Now we can see the details of the columns in that table. As an example, here we see that the index CLIENT_ID_NAMES_I was used in the execution of our query (the In Plan column value is set to TRUE).

16 of 32

SQLT Main Report

Now, in the Index Meta column, clicking on the Meta hyperlink for the CLIENT_ID_NAMES_I index displays the index metadata.

17 of 32

SQLT Main Report

We click on the “Execution Plans” hyperlink at the top of the SQLT report to display the execution plans. Just by hovering the mouse over an object, we get its owner, the query block name, when it was last analyzed, and how big the object is.

Now let’s look at the “Go To” column. We click on the “+” under that column. So right from the execution plan we can go to the “Col Statistics,” the “Histograms,” or many other things.

Clicking on the “+” under the “More” column for step 3, we see the filter predicates and projection.

18 of 32

SQLT Main Report

Now, if we go to Table Columns and click on the Column Stats value (here, 2) for ACL_ROLE, we see that the selectivity is 0.000023 and the cardinality is 34.

This translates to “I expect to get 34 rows back for this equality predicate, which is equivalent to a 0.000023 chance (1 is certainty, 0 is impossible), or in percentage terms I’ll get 0.0023 percent of the entire table.”

Notice that as the cardinality increases, the selectivity also increases. Selectivity should only vary between 0 and 1 (or, if you prefer, 0 percent and 100 percent), and cardinality should only vary between 0 and the total number of rows in the table (excluding nulls). I say “should” because these values are based on statistics.
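The arithmetic behind that sentence is worth making explicit: the optimizer's estimated cardinality is just the selectivity multiplied by the table's row count (nulls excluded). A minimal sketch in Python; the row count below is hypothetical, back-derived from the 0.000023 selectivity and cardinality of 34 shown in the report:

```python
# Optimizer estimate for an equality predicate:
#   cardinality = selectivity * num_rows   (rows with NULLs excluded)

def expected_cardinality(selectivity: float, num_rows: int) -> int:
    """Rows the optimizer expects back for a predicate of this selectivity."""
    return round(selectivity * num_rows)

selectivity = 0.000023      # from the Column Stats page
num_rows = 1_478_261        # hypothetical table row count

print(expected_cardinality(selectivity, num_rows))  # 34
print(f"{selectivity:.4%} of the table")            # 0.0023% of the table
```

This also makes clear why selectivity stays between 0 and 1 while cardinality is bounded by the row count: one is a probability, the other is that probability scaled by the table size.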

19 of 32

AWR (Automatic Workload Repository)

AWR and SQLT are different blades on the same utility toolkit: one for tuning SQL and one for tuning the system.

The Golden Rule of Tuning

Get the AWR and review that first

As a tuner, looking at overall system performance is the first step before trying to tune an individual SQL.

The problem with the SQL can be caused by the environment (too little memory, insufficient CPU resources, slow disks, other competing SQLs, etc.).

20 of 32

AWR - Report Generation

  • Go to the directory containing the following SQL scripts (in most cases, $ORACLE_HOME/rdbms/admin):

awrrpt.sql – General Report

awrrpti.sql – General Report for an instance

awrsqrpt.sql – General Report for a SQL ID

awrsqrpi.sql – General Report for a SQL ID on an Instance

awrddrpt.sql – Difference Report

awrddrpi.sql – Difference Report for a specific instance

  • Run awrrpt.sql to get a general AWR report.

SQL> @awrrpt.sql

  • Enter start and end snap IDs from the snap ID list.
  • An HTML file is generated. Open the file.

21 of 32

AWR - Report Generation

22 of 32

AWR (Automatic Workload Repository)

We see the name of the database, the instance, the version, whether this is RAC or non-RAC, where the database is hosted, the amount of physical RAM on the system, and the time period of the snapshot.

The most important thing to look for is the start and end time of the report.

If the period is too long, you may not be able to see the signal of whatever you are looking for. If it is too short, you may have missed it.

23 of 32

AWR (Automatic Workload Repository)

Here, we are looking at transactions per second. Is the system busy?

0.4 transactions per second translates to an average of 4 transactions per 10 seconds.

Redo generation is also small.

24 of 32

AWR (Automatic Workload Repository)

Look at the Buffer Hit %; is it near 100%? If it is, buffer access for queries is probably not a problem.

Is Execute to Parse near 100%? If not, there might be a Shared Pool problem or a parsing problem.

In the example, we have 64% Execute to Parse; this is a sign of an application that could be using too many literals in its SQL, or of a Shared Pool that might be undersized, or of some other Shared Pool problem such as fragmentation.
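The Execute to Parse ratio is computed as 100 × (1 − parses/executes). A small sketch; the parse and execute counts below are hypothetical, chosen only to reproduce the 64% figure from the slide:

```python
# AWR "Execute to Parse %": how often statements are re-executed
# without being re-parsed.  Near 100% is good; low values hint at
# literal-heavy SQL or Shared Pool trouble.

def execute_to_parse_pct(parses: int, executes: int) -> float:
    return 100.0 * (1.0 - parses / executes)

parses, executes = 3_600, 10_000    # hypothetical counts
print(f"{execute_to_parse_pct(parses, executes):.0f}%")  # 64%
```

So a 64% ratio means roughly a third of executions involved a parse, which is what points us toward literals or Shared Pool sizing.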

25 of 32

AWR (Automatic Workload Repository)

We see another AWR report, and again we see the “Instance Efficiency” section. The “Parse CPU to Parse Elapsed %” metric is far worse at 3.22% instead of 41.97%.

This report is indicative of something much worse going on in the Shared Pool: perhaps a higher rate of parsing, or perhaps a complex SQL that is popular on the system and takes a long time to parse. These kinds of metrics are not directly indicative of problems, but they point to something that might be one. We should be on the lookout for SQL filled with literals, or for an application firing system-generated SQL at a high rate. These are warning signs, and they give us a clue for what to look for next in the Top 10 waits.

26 of 32

AWR (Automatic Workload Repository)

Focus on the top 3 waits in this section (the other waits are less than 5%, so they can probably be ignored). CPU is 44.1% – is this good? Parsing is a CPU-intensive activity, especially hard parsing, so we should probably expect high CPU usage anyway. Should the fact that we see high CPU usage cause us to start tuning? That depends on whether CPU is a scarce resource. If this system had 98% spare CPU and we wanted to tune it, we could look at ways to reduce the parsing overhead, either by reducing the number of individual SQLs or by reducing the number of literals; if neither of those can be done, then leave the CPU to do its job.

27 of 32

AWR (Automatic Workload Repository)

Take a quick look at the Average Wait column (in milliseconds) – 3,337 ms and 2,363 ms. These waits are over 3 seconds and 2 seconds respectively (to the nearest second). A “library cache lock” wait of 3 seconds is way too long. This is something that needs to be fixed, especially since there are 2,807 of them. Sometimes we will see a long wait with a count of 1 or 2, in which case we might conclude that it was a one-off and can be ignored.
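To judge whether a wait event matters, multiply the average wait by the wait count to get the total time lost. A quick check using the library cache lock figures quoted above:

```python
# Total time lost to a wait event ~= wait count * average wait.
# 2,807 "library cache lock" waits averaging 3,337 ms each.

def total_wait_seconds(count: int, avg_wait_ms: float) -> float:
    return count * avg_wait_ms / 1000.0

print(f"{total_wait_seconds(2807, 3337):.0f} s lost")  # 9367 s lost
```

Over 9,000 seconds lost to a single wait event in one snapshot period is why this event, and not the rarer one-off waits, deserves attention.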

28 of 32

AWR (Automatic Workload Repository)

If CPU is high in the top waits yet you have 97% idle CPU, the CPU is not overloaded: there is more available if needed, and the system is handling the workload with CPU, which is a good situation to be in. It means that if we want to improve the system, the options are to improve system resources by increasing the clock speed or adding more CPUs (if there are many processes). It also means that if we want the elapsed time of a particular SQL to improve, we need to look at the SQL itself, which would of course mean getting a SQLTXPLAIN report for it.

29 of 32

AWR (Automatic Workload Repository)

Here, SQL execution is taking above 90 percent of the resources. This is what we want: a high percentage of resources being used by actual work. If this figure were low, say 70%, we might wonder where the other 30% of the system is going.

30 of 32

AWR (Automatic Workload Repository)

IDLE_TIME is moderate here.

If we see VM_OUT_BYTES with a non-zero value, it means the system is spending time (CPU resource) moving pages on and off disk to make up for a memory shortfall. In this case, CPU is trying to compensate for paging.

31 of 32

AWR (Automatic Workload Repository)

One of the biggest skills in collecting a SQLT report is knowing which SQL to investigate. Often, as the DBA of a system, we will have a good idea of what to look at.

The AWR report should be used to confirm our suspicions and to identify in what way the SQL is bad.

The first question we should ask is, “What metric should I judge the SQL by?” After all, there are so many.

32 of 32

AWR (Automatic Workload Repository)

Look at the section of the AWR report shown in the figure. Notice the SQL ordered by Elapsed Time. A zero in the Executions column means that the SQL is still running; that is, it has not completed one execution yet.

It means that this one SQL is dominating the system and is still running at the end of the AWR snapshot. If we could eliminate or tune this SQL, we would hugely reduce the resource usage on the system. This is the time to call on SQLT.