|Timestamp||What was your process in determining those metrics and why?||What metrics do you measure or track?||Are there any metrics that you would like to track, but are unable to?||Why are you unable to track the item(s) above?||What tools or methods do you use to collect data?||Of your tools and/or methods, which has the most success and why?||How do you store or share your data internally?||Do you share your metrics or analysis with your customers or users?||Name and/or Department (Optional)|
|3/1/2012 8:38:07||We look at gathering data as a means to help us tell a story that assists leaders at all levels in making better decisions on how to allocate resources.||The items we track depend on the processes we are trying to understand better. |
A few specific things we track -
1 - the types of tickets created in the Help Desk Ticketing system
2 - the software used on campus through Labstats
3 - average first response time to a ticket (We track this as we can control how fast we take first action, while the time it takes to close a ticket depends on too many other variables for us to use it as a reliable metric)
4 - Satisfaction with request resolution - at the completion of a help desk ticket, users are presented with a survey; this is a feature within our ticketing system (Web Help Desk) and our remote assistance tool (Bomgar).
5 - On the network side we are tracking more technical data on speeds, uptimes, etc.
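The first-response-time metric in item 3 could be computed with a short script. This is a minimal sketch under invented assumptions: the ticket export format and field layout here are hypothetical, not Web Help Desk's actual export.

```python
from datetime import datetime

# Hypothetical ticket export: (created, first_response) timestamp pairs.
tickets = [
    ("2012-03-01 08:00", "2012-03-01 08:12"),
    ("2012-03-01 09:30", "2012-03-01 09:35"),
    ("2012-03-01 11:00", "2012-03-01 11:43"),
]

def avg_first_response_minutes(rows):
    """Average minutes between ticket creation and first response."""
    fmt = "%Y-%m-%d %H:%M"
    deltas = [
        (datetime.strptime(resp, fmt) - datetime.strptime(created, fmt)).total_seconds() / 60
        for created, resp in rows
    ]
    return sum(deltas) / len(deltas)

print(avg_first_response_minutes(tickets))  # 20.0
```

Averaging only the time to first action, as the response notes, keeps the metric within the team's control.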
|Time chasing unicorns? |
I use this term to mean the energy it takes to clarify and respond to requests from individuals that are not within the realm of possibility given the current state of technology and available resources.
Chasing unicorns is a slippery slope, as one can rarely meet the request. Still, it is a great opportunity to build a relationship with, and understanding of, community members, which is a positive outcome. The trick is how to apply enough resources to get that outcome without sacrificing too much of our scarcest resource (human hours).
|Seems to be a subjective task - what I qualify as a unicorn another would qualify as a show horse in a party hat.||Tick marks on paper, Help Desk Ticketing system||Ease of use on the user's part and on ours - it typically provides the information we find most accurate, as it is most representative.||Email, Verbally, Collaboration Tool, Blog that emails a set of folks depending on the task||Yes||James Fadden - Allegheny College|
|3/2/2012 8:33:17||Varied. One was the creation of the Scorecard in answer to an Executive's Root Question:|
1. How are we doing? (what is the health of our services) and 2. How do you know? The main constant we attempt to use is root questions.
|A slew of them. From Service Scorecards (9 key services - Effectiveness measures of Delivery, Usage, and Customer Satisfaction), to Project Delivery, to Executive Dashboards, to departmental dashboards. We also track progress on specific goals (like removing sensitive data from all of our computers)||No. There are "measures" and "data" we don't have that we would have liked to have, but the cost was not worth the expected return. We have been able to find "suitable substitutes" for the ones we couldn't get, allowing us to produce metrics.||Too expensive - mostly in the person-hours needed to collect. Sometimes it would require a tool(set) we don't have and don't feel is worth the investment to purchase.||Automated (like automated call tracking), Databases, pen and paper, spreadsheets.||:-) The "best" is usually without human intervention - and has been tested to ensure accurate data.||Email, Verbally, Collaboration Tool, Dashboard, Shared Storage Space, Physical Report, too vague...should ask two questions||Yes||Marty Klubeck|
University of Notre Dame
|3/5/2012 16:09:45||Basic web analytics haven't changed much in the past 5 years, so even though the tools to collect and analyze have improved, the importance of specific areas has not. |
New tools and areas for measurement fall mostly in the realm of social media and new media. These tools are improving every day, especially those built in to Facebook, Twitter, YouTube and iTunes U.
|Browser versions, visitor locations, daily visitors, time on site, downloads, number of fans, amount of interaction, number of @replies||Twitter is lacking in built-in tools, so we use some external services for this, and those work fairly well. iTunes U provides us raw data without any analytics. So, we have the ability to track iTunes U, but not the resources to do so.||Not knowing/having a tool to analyze raw data, especially for iTunes U.||Google Analytics, built-in tools for YouTube, Facebook, TweetDeck, iTunes U, Klout||- Google Analytics - really gets us a great deal of information on a free platform|
- Facebook & YouTube analytics are wonderful
|Verbally, Physical Report||Yes||Web Communications, Marketing & New Media|
|3/5/2012 16:26:52||Curiosity, redesign notes, standards, identifying trends of our users||Site analytics, time on site, visits, unique visitors, demographics||Metrics regarding hashtags. Lots of paid services do this, but none are offered via Google Analytics or other free social media metrics solutions. We use hashtracking.com currently, but it doesn't seem to be extensive.||Have not found the best solution.||Facebook Insights, Twitalyzer, Twtrland, Klout, Google Analytics, Wildfire, hashtracking||Google Analytics and Klout, because they are easy to navigate and give all the necessary info.||Email, Verbally, Collaboration Tool, Dashboard, Physical Report||Yes||Cassandra Ketrick/OU Web Communications|
|3/5/2012 16:54:14||Industry standards, built-in tools||new vs. returning visitors, visits, pages per visit, average time on site, bounce rate, browser versions, mobile devices and browsers, top entrance sources, keywords, pageviews, unique pageviews, visitor location (country, state)||We can track just about everything we need.||n/a||Google Analytics, Facebook Insights||Google Analytics is very in depth and also easy to use.||Email, Verbally, Collaboration Tool||Yes||OU Web Communications|
|3/6/2012 8:10:58||We were once asked the simple 4 questions and thought - that makes sense. From that, we were able to use our internal tool, Webcheckout, and create some scripted reports to ensure we were getting up-to-date information. |
We of course then take this data and put it into an annual report to compare and see if any trends are emerging.
|At CMU, we collect a lot of data for the Academic Technology Services group. We also look at it by answering the following: Who? How many? How? Where?|
From a classroom use/technology use standpoint we collect:
Who? the number of courses and number of faculty that were in the room
How many? How many actual course sessions were held in the rooms
How? Top video and audio source technologies used in the room
Where? Specifics to the actual rooms that have technology controlled by us.
Who? How many faculty called/wrote with a problem that we had to address, and how many used our lending services
How many? How many were scheduled support calls to physically meet the instructor
How? Breakdown of demos and follow-ups to trouble calls
Where? The top rooms that had the most scheduled meetings with us
We do events as well (AV events and Media Production events) and videoconferencing, and we calculate the same way, answering the questions Who? How many? How? and Where?
|I think there are always metrics we would love to track but that are not as concrete. We'd love to know the same information for the departmental rooms we have collaborated on, to see if they find any similarities with our rooms - or, maybe bigger, what the differences are with support, etc.|
We'd also like to track our random requests for technology more closely, but those don't go anywhere except an email folder, so the process is very manual. If we could figure out a way to collect them in our current system without corrupting our current info, that would save a lot of time.
|As I stated, email is just the fallback because we can't put in requests that we've had to turn down without compromising our data. (The way the system itself works is the problem.)|
I also think that some departments don't track as closely as we do, so it's difficult for them to mimic our data (plus it's time-consuming on their part).
|As stated earlier, the best method is our resource management system, Webcheckout. That has the greatest amount of info, but emails, logs on servers, etc. are also used, of course.||If you have a main system like we do, it is most successful IF you have buy-in from all staff to use it. We tend to miss a small percentage because it isn't entered into the system properly, or perhaps wasn't entered at all; that does cause an issue, but we account for it.||Email, Verbally, Collaboration Tool, Physical Report, Server with realtime reporting||Yes||Carnegie Mellon University|
|3/9/2012 13:03:24||Some were included or built in to the tools used (Service Now for Incident Management). HDI (thinkhdi.com) is a third-party solution that provides our Customer Satisfaction Index in comparison to other Higher Education institutions. Others were added as requested.||Service and Support metrics - Number of Incidents; Types of Incidents; Customer Areas and Volume; Customer Satisfaction Index|
Financial Metrics - Monthly Reports; Spending; Budgets; Pinnacle charges
Infrastructure/Systems - Service Facts; usage and volume; System Health
|First call resolution, to see what types of issues, and how many, can be solved at the first response. Problem Management, to track all issues related to specific software or products supported on campus.||It is difficult to collect data (phone/email requests) and to verify accuracy in confirming that resolution occurred during first response. Currently it is difficult to track and verify the accuracy of incidents related to specific software. A process has not been instituted to allow for consistent categorization at case creation.||Service Now Ticketing System - HDI - Surveys - Interviews||There are challenges within each method; we try to use our best judgement to interpret the results.||Email, Verbally, Physical Report||Yes||Chris Jones, OU Health Sciences Center Campus|
|3/13/2012 8:20:38||Staff determined what we needed to know and why in order to start a process of collecting data and using it for informed decision-making||Help Desk - calls per day/week/month; resolution/closed; open tickets/duration; customer satisfaction|
Network - bandwidth utilization (inbound/outbound) percentage utilized; network uptime; server uptime
Enterprise Resource Planning - license utilization; problem resolution
Print Services - # of jobs run on specific machines; types of jobs run; revenue generated
|Network - traffic prioritization||Management tools for the network are lacking, and the costs to obtain and set them up are a barrier. Not good reasons, but we are rebuilding and have basic infrastructure needs to revitalize.||Mostly open source or inexpensive network tools from Juniper|
|Time-consuming, but manual logs have given us the most information with which to make informed decisions.||Shared Storage Space||Yes|
|3/13/2012 13:08:37||Most of our metrics and measures were chosen to help answer critical success factors (CSFs) around our core services which include Email, Network, Telephony, Paging and Service Desk. Analysis teams were set up to establish the CSFs and Key Performance Indicators. For our metrics, we have set targets based on industry standards or what can be achieved currently. The analysis teams meet monthly to review the outcomes and make recommendations for improvement. |
Measures were chosen to help in strategic planning, provide direction, evaluate current and new architectures and demonstrate value.
|Service Availability - Currently reporting on email (Exchange, BES, DukeMail, SunMail, Send/Receive), paging, VoIP, VoiceMail, and contact center and agents. We don't report on many of our services due to the manual effort involved in capturing and calculating this metric. We are in the process of setting up a Service Dashboard in Spectrum that will allow us to automate this process. |
Quality/Performance - Service Desk (% of calls answered in less than 30 seconds, % first call/contact resolution, average speed of answer, customer satisfaction), Communications Center (Abandoned Call Rate), Network (number of alerts per day by host - used to calculate the daily average and 95th percentile, so that we can report on and investigate daily those hosts that exceeded their 95th percentile).
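The per-host 95th-percentile check described above can be sketched in a few lines. This is a minimal illustration under invented assumptions - a plain list of daily alert counts for one host and a nearest-rank percentile; the response does not describe how the metric is actually computed or where the counts come from.

```python
import math

def percentile_95(values):
    """Nearest-rank 95th percentile of a list of daily alert counts."""
    ordered = sorted(values)
    rank = math.ceil(0.95 * len(ordered)) - 1  # nearest-rank index
    return ordered[rank]

# Hypothetical 30-day history of alert counts for one host.
daily_alerts = [3, 5, 2, 4, 6, 3, 2, 5, 4, 3, 7, 2, 3, 4, 5,
                3, 2, 6, 4, 3, 5, 2, 4, 3, 6, 2, 3, 5, 4, 12]

threshold = percentile_95(daily_alerts)
todays_count = 14
if todays_count > threshold:
    print("investigate: today's alert count exceeds this host's 95th percentile")
```

Flagging only days above a host's own 95th percentile keeps the daily investigation list short and tuned to each host's normal noise level.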
Additional Measures - # email accounts by application and affiliation, # spam email, # blocked email, # virus email, DHCP leases (wired and wireless), # DNS queries, internal and external bandwidth, # VPN connections, # guest wireless sessions, service desk calls handled, service desk created tickets, service desk resolved cases, # computer lab logins, # software downloads, postal services (amount sorted mail, metered mail, retail station revenue), # pages for our Paging service, # VoIP lines, # voicemail mailboxes, # post incident reviews by service, # change requests by service, HR turnover, # employees by job grade/ management vs. staff/ gender/ethnicity, security (# inbound and outbound blocked attacks, top vulnerabilities).
|Latency, SLA achievement, overall customer satisfaction for OIT (not just Service Desk)||We haven't established SLAs for many of our services. We currently do not have an OIT-wide survey. We only recently purchased Optnet which can provide insight into system performance.||website (service owner entered data)|
scripts that run on a monthly basis and send data via email
scripts that run on a daily basis, pull data from Nagios and Spectrum and update a database
some service owners complete their own reports and send them to be published on our metrics wiki.
manually retrieved from cacti and manually entered into Excel (we are working on automating the collection of data from cacti logs and the upload into a database).
manual email from service owners.
data sent from third party vendors.
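A daily collection job of the kind listed above might look like the following. This is a hedged sketch: the table layout, metric names, and the stand-in fetch function are all assumptions, since the response does not describe the actual Nagios/Spectrum extraction or the schema of the central metrics database.

```python
import sqlite3
from datetime import date

def fetch_daily_counts():
    """Stand-in for pulling counts from monitoring tools; a real job
    would query Nagios/Spectrum here. These values are invented."""
    return {"service_desk_tickets": 118, "network_alerts": 42}

def update_metrics_db(db_path, counts, day):
    """Append one day's metric values to a simple central table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS metrics (day TEXT, name TEXT, value INTEGER)"
    )
    conn.executemany(
        "INSERT INTO metrics (day, name, value) VALUES (?, ?, ?)",
        [(day, name, value) for name, value in counts.items()],
    )
    conn.commit()
    conn.close()

update_metrics_db("metrics.db", fetch_daily_counts(), date.today().isoformat())
```

Landing everything in one database, as the response notes, is what makes the data timely and easy to join against other metrics.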
|The most successful has been scripts that run on a daily/monthly basis to pull information from logs or tables and update a central metrics database. This is most successful because it is timely, accurate, effortless, and can easily be associated with other data in the database.||Verbally, Dashboard, Shared Storage Space, Physical Report||Yes||Susan Lynge|
Senior Metrics Analyst
|3/14/2012 6:54:45||Used the goal-question-metric paradigm as well as executive input||Application Software Delivery and Maintenance; User Services; Customer Satisfaction||Source-of-bugs metrics - at what stage was the bug caused (regardless of when it was discovered) and why||Not tracking root cause or root stage||Jira, MS Project, FootPrints||Jira - easy to use; techies like it better||Email, Collaboration Tool, Dashboard||Yes||Julienne VanDerZiel - ITS Director, UT Austin|