SQLT
Rudaiba Adnin
SQLT Installation
SQL> EXEC sqltxadmin.sqlt$a.set_param('<param_name>', '<param_value>');
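For context, SQLT is installed by running its install script as SYSDBA, and its behavior can then be tuned with set_param. A minimal sketch; the connection details and parameter value are illustrative, and count_star_threshold is one of SQLT's documented parameters:

```sql
-- Install SQLT from the unzipped distribution (script name as shipped
-- with the SQLT download; connection details are illustrative).
SQL> CONNECT / AS SYSDBA
SQL> START sqlt/install/sqcreate.sql

-- Example: cap COUNT(*) checks on large tables during report generation.
-- The value 10000 is illustrative.
SQL> EXEC sqltxadmin.sqlt$a.set_param('count_star_threshold', '10000');
```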
SQLTXTRACT vs SQLTXECUTE
SQLTXTRACT is the easiest SQLT report creation method because it does not require executing the SQL statement at the time of report generation.
SQLTXECUTE, on the other hand, executes the SQL statement and thus has better runtime information and access to the actual rows returned.
SQLTXECUTE will get more information, but it is not always possible to use this method, perhaps because we are in a production environment, or perhaps because the SQL statement currently takes three days to run, which is why we are investigating it in the first place. Another reason for not running SQLTXECUTE for some SQL statements is that if they are DML (insert, update, delete, or merge), they will change data.
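As a sketch, both methods are started from the SQLT run directory; the SQL ID and script path below are illustrative, and both scripts prompt for the SQLTXPLAIN user's password:

```sql
-- SQLTXTRACT: needs only the SQL_ID of a statement already in memory
-- (or in AWR); it does not re-run the statement.
SQL> START sqlt/run/sqltxtract.sql 0w2qpuc6u2zsp

-- SQLTXECUTE: takes a script containing the statement and executes it,
-- so it captures actual runtime information.
SQL> START sqlt/run/sqltxecute.sql input/sample/script1.sql
```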
SQLT-Report Generation
SQLT Main Report
From the header page of the SQLT main report we get an idea of what is available and how SQLT works in terms of navigation. The main report covers many aspects of the system, and there is a lot of information in the various sections. Related hyperlinks are grouped together.
SELECT DISTINCT
    l.first_name || ' ' || l.last_name   "Full Name",
    p.name                               "Profile Name",
    a.name                               "Super Role",
    c.name,
    cl.first_name || ' ' || cl.last_name "Client Name"
FROM prod7.profile p
    JOIN prod7.acl_role ar          ON p.acl_id = ar.acl_id
    JOIN prod7.acl a                ON ar.acl_id = a.id
    JOIN prod7.login l              ON l.id = p.login_id
    JOIN prod7.profile_caseloads pc ON pc.profile_id = p.id
    JOIN prod7.caseload c           ON c.id = pc.caseload_id
    JOIN prod7.caseload_clients cc  ON c.id = cc.caseload_id
    JOIN prod7.client cl            ON cc.client_id = cl.id
WHERE l.prov_id = 1
ORDER BY l.first_name || ' ' || l.last_name
So where do we go from the main page?
If we are absolutely convinced that the execution plan is wrong, we might go straight to “Execution Plans” and look at the history of the execution plans.
Suppose we think there is a general slowdown on the system; then we might want to look at the “Observations” section of the report.
Maybe something happened to the statistics, in which case we will certainly need to look at the “Statistics” section of the report under “Tables.”
The idea is to build up a picture of your SQL statement, understand the statistics related to the query, understand the cost-based optimizer (CBO) environment, and try and get into its “head.” Why did it do what it did? Why does it not relate to what we think it ought to do? The SQLT report is the explanation from the optimizer telling us why it decided to do what it did.
For example, after clicking on Indexes (the last link under the Tables heading), we would see the page shown in the figure. SQLT collected all the information it could find and cross-referenced it all.
Under the Indexes column of the Indexes heading, the numbers are hyperlinked. We click on the 6 in the CLIENT$ row.
Now we can see the details of the indexes on that table. As an example, here we see that the index CLIENT_ID_NAMES_I was used in the execution of our query (the In Plan column value is set to TRUE).
Now, in the Index Meta column, clicking on the Meta hyperlink for the CLIENT_ID_NAMES_I index displays the index metadata.
We click on the “Execution Plans” hyperlink at the top of the SQLT report to display the execution plan. Just by hovering the mouse over an object, we get its owner, the query block name, when it was last analyzed, and how big the object is.
Now let’s look at the “Go To” column. We click on the “+” under that column; right from the execution plan we can go to “Col Statistics,” “Histograms,” and many other things.
Clicking on the “+” under the “More” column for step 3 shows the filter predicates and projection.
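Outside SQLT, the same predicate and projection details can be displayed for a cached cursor with DBMS_XPLAN; the SQL ID below is made up for illustration:

```sql
-- Show the plan for a cached cursor, including the Predicate Information
-- and Column Projection Information sections ('0w2qpuc6u2zsp' is an
-- illustrative SQL ID).
SELECT *
FROM   table(DBMS_XPLAN.DISPLAY_CURSOR('0w2qpuc6u2zsp', NULL,
                                       'TYPICAL +PROJECTION'));
```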
Now, if we go to Table Columns and click on the Column Stats value (here, 2) of ACL_ROLE, we see that the selectivity is 0.000023 and the cardinality is 34.
This translates to: “I expect to get 34 rows back for this equality predicate, which is equivalent to a 0.000023 chance (1 is certainty, 0 is impossible); in percentage terms, I’ll get 0.0023 percent of the entire table.”
Notice that as the cardinality increases, the selectivity also increases. The selectivity only varies between 0 and 1 (or, if you prefer, 0 percent and 100 percent), and the cardinality should only vary between 0 and the total number of rows in the table (excluding nulls). We say ‘should’ because these values are based on statistics.
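The underlying arithmetic is simply cardinality = selectivity × number of rows. A quick check on DUAL, with a row count chosen (as an assumption) to match the figures above:

```sql
-- cardinality = selectivity * num_rows; the row count 1,478,261 is an
-- illustrative value for which 0.000023 of the table is about 34 rows.
SELECT ROUND(0.000023 * 1478261) AS estimated_cardinality FROM dual;
-- returns 34
```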
AWR (Automatic Workload Repository)
AWR and SQLT are different blades on the same utility knife: one (AWR) for tuning the system and one (SQLT) for tuning individual SQL.
The Golden Rule of Tuning
Get the AWR and review that first
As a tuner, looking at overall system performance is the first step before trying to tune an individual SQL.
The problem with the SQL can be caused by the environment (too little memory, insufficient CPU, slow disks, other competing SQL, etc.).
AWR - Report Generation
awrrpt.sql – General Report
awrrpti.sql – General Report for an instance
awrsqrpt.sql – General Report for a SQL ID
awrsqrpi.sql – General Report for a SQL ID on an Instance
awrddrpt.sql – Difference Report
awrddrpi.sql – Difference Report for a specific instance
SQL> @awrrpt.sql
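The scripts live under $ORACLE_HOME/rdbms/admin and prompt for the report type and snapshot range; a typical session looks roughly like this (the prompt answers are illustrative):

```sql
SQL> @?/rdbms/admin/awrrpt.sql
-- Enter value for report_type: html
-- Enter value for num_days: 1
-- (a list of recent snapshots is displayed)
-- Enter value for begin_snap: 1234
-- Enter value for end_snap: 1235
-- Enter value for report_name: awrrpt_1_1234_1235.html
```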
We see the name of the database, the instance, the version, whether this is RAC or non-RAC, the host where the database runs, the amount of physical RAM on the system, and the time period of the snapshot.
The most important thing to look for is the time start and time end of the report.
If the period is too long, you may not be able to see the signal of whatever you are looking for; if it is too short, you may have missed it altogether.
Here, we are looking for Transactions per second. Is it busy?
0.4 transactions per second translates to an average of 4 transactions per 10 seconds.
Redo is small also.
Look at the Buffer Hit %; is it near 100%? If it is, buffer access for queries is probably not a problem.
Is Execute to Parse near 100%? If not, there might be a Shared Pool problem or a parsing problem.
In the example, we have 64% Execute to Parse; this is a sign of an application that could be using too many literals in its SQL, or of a Shared Pool that might be undersized, or of some other Shared Pool problem such as fragmentation.
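For reference, the Instance Efficiency section computes this metric as Execute to Parse % = 100 × (1 − parses/executions). A quick check on DUAL with illustrative counts that reproduce the 64% above:

```sql
-- Execute to Parse % = 100 * (1 - parse_count / execute_count);
-- 360 parses per 1,000 executions are illustrative numbers.
SELECT ROUND(100 * (1 - 360 / 1000)) AS execute_to_parse_pct FROM dual;
-- returns 64
```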
We see another AWR report, and again we see the “Instance Efficiency” section. The “Parse CPU to Parse Elapsed %” metric is far worse at 3.22% instead of 41.97%.
This report is indicative of something much worse going on in the Shared Pool: perhaps a higher rate of parsing, perhaps a complex SQL that is popular on the system is taking a long time to parse. These kinds of metrics are not directly indicative of problems, but they point to something that might be a problem. We should be on the lookout for SQL that is filled with literals, or for some application firing system-generated SQL at a high rate. These are warning signs, and they give us a clue for what to look for next in the Top 10 waits.
Look at the top 3 waits in this section (the other waits are less than 5%, so they can probably be ignored). CPU is 44.1% – is this good? Parsing is a CPU-intensive activity, especially hard parsing, so we should probably expect high CPU usage anyway. Should the fact that we see high CPU usage cause us to start tuning? That depends on whether CPU is a scarce resource. If this system had 98% spare CPU and we wanted to tune it, we should look at ways to reduce the parsing overhead, either by reducing the number of individual SQLs or by reducing the number of literals; if neither of those can be done, then leave the CPU to do its job.
Take a quick look at the Average Wait column (in milliseconds): 3,337 ms and 2,363 ms, i.e., roughly 3 seconds and 2 seconds. A “library cache lock” wait of 3 seconds is way too long; this is something that needs to be fixed, especially since there are 2,807 of them. Sometimes we will see a long wait with a count of 1 or 2, in which case we might conclude that it was a one-off and can be ignored.
If CPU is high in the top waits yet the system has 97% idle CPU, our CPU is not overloaded: we have more if needed, and the system is handling the workload with CPU, which is a good situation to be in. If we want to improve the system, our options are to improve the system resources by increasing the clock speed or adding more CPUs (if we have many processes). And if we want the elapsed time of a particular SQL to improve, we need to look at the SQL itself, which would of course require getting a SQLTXPLAIN report for it.
Here, SQL execution is taking above 90 percent of the resources. This is what we want: a high percentage of resources being used by actual work. If this were low, say 70%, we might wonder where the other 30% of the system is going.
IDLE_TIME is moderate here.
If we see VM_OUT_BYTES with a non-zero value, it means the system is spending time (CPU resource) moving pages on and off disk to make up for a memory shortage; in this case the CPU is trying to compensate for paging.
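A hedged way to check these OS statistics for a snapshot range directly is via the DBA_HIST_OSSTAT view (the snapshot IDs are illustrative):

```sql
-- OS statistics captured by AWR; IDLE_TIME and BUSY_TIME are in
-- centiseconds, VM_OUT_BYTES in bytes. Snapshot IDs are illustrative.
SELECT snap_id, stat_name, value
FROM   dba_hist_osstat
WHERE  snap_id BETWEEN 1234 AND 1235
AND    stat_name IN ('IDLE_TIME', 'BUSY_TIME', 'VM_OUT_BYTES')
ORDER  BY snap_id, stat_name;
```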
One of the biggest skills in collecting a SQLT report is knowing which SQL to investigate. Often, as the DBA of a system, we will have a good idea of what to look at.
The AWR report should be used to confirm our suspicions and to identify in what way the SQL is bad.
The first question we should ask is, “What metric should I judge the SQL by?” After all, there are so many.
Look at the section of the AWR report in the figure. Notice the SQL ordered by Elapsed Time. A zero in the Executions column means that the SQL is still running; that is, it has not completed one execution yet.
This means that this one SQL is dominating the system and was still running at the end of the AWR snapshot. If we could eliminate or tune this SQL, we would hugely reduce the resource usage on the system. This is the time to call on SQLT.
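To find candidate SQL for SQLT without the formatted report, a similar ranking can be sketched against DBA_HIST_SQLSTAT (the snapshot IDs are illustrative; FETCH FIRST needs 12c or later):

```sql
-- Top SQL by elapsed time between two AWR snapshots; zero executions_delta
-- with a large elapsed_time_delta suggests a statement that never finished.
SELECT sql_id,
       ROUND(SUM(elapsed_time_delta) / 1e6) AS elapsed_secs,
       SUM(executions_delta)                AS executions
FROM   dba_hist_sqlstat
WHERE  snap_id BETWEEN 1234 AND 1235
GROUP  BY sql_id
ORDER  BY 2 DESC
FETCH FIRST 5 ROWS ONLY;
```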