Index for my ServiceNow ITSM Blogs

Improving IT Operations 

The Future of IT, ITSM, Service Desk, and ITIL

Thinking Differently And The Need For IT Change

Enterprise Service Management

Custom Apps


Social Media


My Forrester Blog Index – The End Of A Blog Roll

As I have posted my last blog to my Forrester blogroll I thought I would update my index of last August …

To view the blogs in chronological order please go to:


Practical ITSM Advice: Defining Availability For An IT Service

People In IT Love Stats But They Probably Won’t Love These

The Capita ITIL JV Wasn’t “Big News,” So What IS Important To Real-World IT Service Delivery?

So Capita Gets ITIL But Will People Finally “Get” ITIL?

ITSM Goodness: How To Up Your IT Service Management Game In 7 Steps

ITSM And The itSMF In Norway – Different In So Many Ways?

IT Service Management In 2013 – How Far Have We Come Since 2009?

Man Alive, It’s COBIT 5: How Are You Governing And Managing Enterprise IT?

The Cult Of ITIL: It Has More Followers Than You Think

ITSM, ITIL, And Enabling Tools In The Middle East

It’s Time To Realize That “ITIL Is Not The Only Fruit”

ITIL Adoption: 5 Steps That Can Help With Success

“We Need To Talk About ITIL”

ITIL Global Adoption Rates, Well At Least A Good Indication Of Where It Is At

ITIL: What Constitutes Success?

Top 20 (OK, 50) ITIL Adoption Mistakes

The Applicability Of ITIL Outside Of IT

What Next For ITIL?

2011: An ITIL Versioning Odyssey

Getting Started With ITIL – The 30-Minute Version

ITSM – Tools and Vendors

ITSM Tools: Is What You Pay Linked To Value?

The Importance Of Customer “Choice” In ITSM Tool Selection – “Hybrid ITSM”?

12 Tips For Moving From An On-Premises To SaaS ITSM Tool (From A Customer)

The Forrester SaaS ITSM Tool Market Overview: Who Is Where With What

Automation: Is It The Only Way For IT To Really “Do More With Less”?

“BMC You Later” — BMC Pushes The ITSM Tool Envelope With MyIT

More ITSM Tool Bells And Whistles, And Where The Real Focus Of Vendor Attention Should Be

50 Shards Of ITIL – The Bane And Pain Of ITSM Tool Selection

SaaS for ITSM: Getting Past The Hype

ITSM Tool Verification: A Good Or Bad Thing?

ServiceNow Finally Goes Public: Which Way Now?

BMC To Acquire Numara Software: A Few Thoughts From Your Favorite ITSM Analyst

Why Is Buying An ITSM Tool Like Buying A Car?

How Do You Value ITSM Tool Verification Or Certification Schemes?

ServiceNow Knowledge11: ITSM And Social Learning For Us All

Newsflash For The ITSM Community: “SaaS” Is A Red Herring

Sharing The ITSM And ITAM Goodness Of CA World: 20+ Presentations To Download

Are You Happy With Your ITSM Tool?

ITSM – People

Squeezing The Value Out Of ITIL, Or Any Other IT, Training

How Gremlins And Vanilla Ice Can Help Us Deliver Better IT Services

How Not To Make Friends And Influence People: A Personal Story Of Customer Experience At Its Worst . . . And What IT Can Learn

Staffing For IT Service Delivery Success: Think Employee, Think Customer, Then Repeat

Prepare Your People For The Future Of IT Service Delivery

A Killer Disease? IT’s Unhealthy Obsession With Itself

ITSM And ITIL Thinking – Brawn, Brains, Or Heart?

The ABC Of ICT – The Top 10 People Success Factors For ITSM

The ABC Of ICT – The Top 10 People Issues

ITSM – Service Catalog

Getting A Service Catalog: So Much More Than Buying A Tool!

ITSM – Strategy & Futures (Cloud, BYOD, Mobility, Social, Automation)

IT? How about I&T?

ITSM in 2013 and Beyond: The Webinar Link And Audience Poll Results

The Top 10 IT Service Management Challenges For 2013 — But What Did You Achieve In 2012?

What’s Your ITSM Strategy (If You Actually Have One)?

ITSM In 2012: In The Words Of Marvin Gaye, “What’s Going On?”

ITSM AND Automation: Now That’s A Double Whammy Of Business-Enabling Goodness

Defining IT Service Management – Or Is That “Service Management”?

Enabling Customer Mobility: Why Current Mobile Device Management Thinking Is Flawed 

Social IT Support: Didn’t We Do This In The 1990s?

Are You Sleepwalking Through Twitter?

My 2011 Blog Of Blogs: Hopefully The “Important” ITSM, ITIL, People, ITAM, SAM, ITFM, Etc. Stuff

Top 10 ITSM Challenges For 2012: More Emphasis On The “Service” And The “Management”

Have You Considered BI for ITSM?

Social? Cloud? What About Mobile?

ITSM – Service Desk

Is Your IT Service Desk Customer Experience Up To Scratch?

What’s The Real Cost Of Poor IT Support And Shoddy Customer Service?

12 Pieces Of Advice For IT Service Desks – From A Customer!

Paging The IT Organization: You Need To Support The People Not The Technology

IT Support: IT Failure Impacts Business People and Business Performance. Comprendez?

How Not To Deal With IT Service Failure

What’s The Problem With Problem Management?

Benchmarking The IT Service Desk – Where Do You Stand?

Where Is All The Incident Classification Best Practice?

ITSM – Metrics

IT Service Management Benchmarks – For You By You

Is Customer Experience Important To Internal IT Organizations? With Free Statistics!

“We Do A Great Job In IT, Our Metrics Dashboard Is A Sea Of Green.” Really?

Where IT Metrics Go Wrong: 13 Issues To Avoid

Why Is IT Operations Like Pizza Delivery?

ITSM Metrics: Advice And 10 Top Tips


Giving Back To The ITSM Community: We Move, If Slowly, But With Purpose

From The Coal Face: Real World ITSM And ITIL Adoption Sound Bites

ITSM Practitioner Health Check: The ITSM Community Strikes Back

Giving Back To The IT Service Management Community

Support ITSM Tool Vendors That Support The ITSM Community


Software Asset Management in 2013: State Of SAM Survey Results

The Rise, Fall, And Rise Of Software Asset Management: It’s More Than Just A “Good Thing To Do”

Cover Your Assets; Use IT Asset Life-Cycle Management To Control IT Costs

Software Asset Management Part Deux – “Try Harder”


Warning: Your Journey To “Demonstrating IT-Delivered Value” Passes Through The Quaint Little Town Of “Understanding IT Costs”

Five Steps To Improve Your IT Financial Management Maturity

“Run IT As A Business?” Do You Really Know What This Means?

IT Value, Like Beauty, Is In The Eye Of The Beholder

DevOps

Will It Be “DevOps” Or “DevOid” For I&O Professionals?

Supplier Management

5 Tips For Getting Ready For Service Integration

A Late New Year’s Resolution: Be Nice To A Supplier And See What Happens

Understanding IT Costs

The following is an extract from the 2009 OVUM Butler Group Managing Costs in IT Strategy Report …


In-house IT functions are increasingly tasked to both justify their expenditure on ‘keeping the lights on’ activities and demonstrate the value they provide to the organisation.


Business-as-usual costs are not inconsiderable, usually some three times the value of an organisation’s project-based IT spend, and hence IT functions must ensure that they understand how IT costs are driven and how changes in IT utilisation can affect both unit and total IT costs. The corporate focus on IT value and costs is often driven by the enterprise-wide mandate to ‘do more with less’ and by growing demands for compliance and governance-led transparency. Consequently, IT organisations need to understand, and closely control, the activities that drive IT costs and factors of demand and supply. It is only through the implementation of formal cost management activities that IT functions can deliver cost-effective IT service provision and maximise visibility into related cost structures.

  • IT organisations should leverage enterprise finance expertise to adopt a corporately accepted IT cost measurement.
  • IT and the business must agree the key IT cost metrics for both ongoing performance management and the assessment of service improvement opportunities.
  • An IT function should utilise benchmarking techniques to establish relative cost efficiency and identify opportunities for cost improvement.
  • IT managers need to understand their IT Financial Management processes’ relationships with, and importance to, other IT Service Management processes.


IT organisations should leverage enterprise finance expertise to adopt a corporately accepted IT cost measurement.

With IT expenditure having increased along with the corporate utilisation of, and dependency on, technology, there is now a requirement for more formal management control, methodologies, and particularly measurement of costs within the IT function. When faced with questions such as ‘what percentage of the IT budget is spent on operations and maintenance?’, or ‘what percentage of IT initiatives contribute directly to organisational goals?’, IT management must now be able to provide accurate answers in which they, senior executives, and stakeholders can have confidence.

The primary barrier to IT cost awareness is a lack of tools and methods for measuring IT costs and value, and presenting them from an organisational perspective. An IT management environment usually has plenty of tools for managing technology assets, methods for managing IT delivery, and frameworks to address IT and enterprise architecture, but very few that are capable of uniting the technology, enterprise, and financial aspects of IT. As a consequence IT often continues to be treated as a cost centre, rather than as a business unit, within many organisations.

To identify the costs of IT service provision, an IT organisation needs to create a framework within which all known costs can be recorded and allocated to specific IT services, customers, locations, or other activities. Only in building their IT ‘cost model’ is an IT function truly able to understand the costs incurred and how the costs are driven, and provide a robust foundation for IT chargeback. In creating a corporate IT cost model, IT should use enterprise finance expertise to systematically work up a logical overview of the IT costs incurred and associate these costs with the business’s chosen basis for cost allocation, e.g., by customer.

The cost identification process starts with the categorisation of relevant costs into cost types such as hardware costs, software costs, people costs, accommodation costs, external service costs, and transfer costs. External service costs are expenditure such as the procurement of an outsourced service, while transfer costs represent goods and services provided by other parts of the organisation. It is important not to miss the latter type of cost, as they are both a ‘real’ cost to the organisation and part of the cost of providing the service. Cost types may also be broken down further, into cost elements, if more detail is required to apportion charges (particularly for service-based cost models).

The most common IT cost model, costs-by-customer, requires that these costs are then attributed to the customer that causes them (alternatively, a costs-by-service cost model attributes costs to the services that cause them). These will be direct costs, those clearly attributable to a single customer, and indirect costs, those incurred on behalf of two or more customers that need to be apportioned to multiple customers in an equitable manner. There is also a need to classify costs as either operational or capital expenditure – the distinction required to calculate the annual cost of a capital item as it depreciates over time.
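The cost-type categorisation, direct and indirect attribution, and capital depreciation described above can be sketched as a toy costs-by-customer model. This is purely illustrative, not taken from the report: the customers, figures, and the headcount-based apportionment rule are all invented.

```python
# Illustrative sketch of a minimal costs-by-customer model.
# All names and figures are hypothetical.
from collections import defaultdict

# Direct costs: each attributable to the single customer that causes it.
direct_costs = [
    ("hardware", "Sales", 20_000.0),
    ("software", "Sales", 12_000.0),
    ("people",   "Finance", 30_000.0),
]

# Indirect costs: incurred on behalf of several customers and apportioned
# equitably -- here, in proportion to headcount (one possible rule of many).
indirect_costs = [("accommodation", 18_000.0), ("external service", 9_000.0)]
headcount = {"Sales": 60, "Finance": 40}

def annual_capital_cost(purchase_price, residual_value, useful_life_years):
    """Straight-line depreciation: annual cost of a capital item."""
    return (purchase_price - residual_value) / useful_life_years

totals = defaultdict(float)
for _cost_type, customer, amount in direct_costs:
    totals[customer] += amount

total_heads = sum(headcount.values())
for _cost_type, amount in indirect_costs:
    for customer, heads in headcount.items():
        totals[customer] += amount * heads / total_heads

# Capital expenditure enters the model as an annual depreciation charge
# (a hypothetical server bought for Sales: 50k cost, 5k residual, 5 years).
totals["Sales"] += annual_capital_cost(50_000.0, 5_000.0, 5)

for customer, cost in sorted(totals.items()):
    print(f"{customer}: {cost:,.2f}")
```

The same skeleton supports a costs-by-service model: the attribution key simply becomes the service rather than the customer.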


IT and the business must agree the key IT cost metrics for both ongoing performance management and the assessment of service improvement opportunities.

As with any business function, there is a danger that a corporate entity that sets its own performance metrics in isolation will fail to deliver meaningful management information that can subsequently be used to add value to the business. Hence, IT functions need to create their cost-based metrics in conjunction with the business – ensuring that they are relevant and easily measurable; that there is a sufficient number, and type, to cover the breadth of IT service delivery; and that there is scope to improve on base-lined performance to deliver real cost and efficiency improvements.

There is no golden set of IT cost metrics – each organisation will have different strategic drivers, different goals, and different opinions on the metrics required for IT to demonstrate cost efficiency. There are, however, benefits to choosing metrics that can be compared across organisations and against industry standards. Such metrics, trended over time, will help identify both the achievement of efficient performance and opportunities for service (and cost) improvement.

Example cost- and efficiency-related metrics include:

  • IT costs as a percentage of total business operating costs.
  • The average IT cost per employee (or per user if numbers are radically different).
  • The cost of providing generic IT services, such as e-mail accounts, per user.
  • Per-seat application costs, by type.
  • PC and laptop Total Cost of Ownership (TCO).
  • Service Desk costs per user.
  • The average cost per incident handled by the Service Desk.
  • The percentage of incidents dealt with by first-line (Service Desk) operatives.
  • Service Level Management resource costs per IT service, by Service Level Agreement (SLA) type.
  • Percentage reduction in software costs through improved asset control.
  • Change Management resource costs per change, by change type.
  • The average time to diagnose and resolve (or provide a workaround for) problems.
  • Infrastructure utilisation percentages.
  • Percentage reduction in lost business productivity caused by capacity-related incidents.
  • Number, and consequential costs, of major incidents.
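Several of these metrics are simple ratios. As a hypothetical worked example (every figure below is invented for illustration):

```python
# Hypothetical annual figures for a mid-sized IT function.
it_operating_cost = 4_000_000.0
business_operating_cost = 80_000_000.0
employees = 2_000
service_desk_cost = 600_000.0
incidents_handled = 24_000
resolved_first_line = 16_800

# IT costs as a percentage of total business operating costs.
it_cost_pct = 100 * it_operating_cost / business_operating_cost      # 5.0%

# Average IT cost per employee.
it_cost_per_employee = it_operating_cost / employees                 # 2,000.0

# Average cost per incident handled by the Service Desk.
cost_per_incident = service_desk_cost / incidents_handled            # 25.0

# Percentage of incidents dealt with by first-line operatives.
first_line_resolution_pct = 100 * resolved_first_line / incidents_handled  # 70.0%
```

The value of such figures comes less from any single reading than from trending them over time and comparing them against peers or industry benchmarks, as the next section argues.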

An IT function should utilise benchmarking techniques to establish relative cost efficiency and identify opportunities for cost improvement.

Technology plays a pivotal role in the running and evolution of most organisations, with IT systems now an integral part of the business environment. In the past, IT functions were just tasked with providing the required volumes of particular technologies and technology-based services. However, as IT costs have increased, businesses have demanded that the IT services provided by in-house, or outsourced, IT organisations demonstrate both efficient delivery and value for money. Many organisations now use benchmarking as one of their preferred methods of ensuring that the best possible value is being achieved from IT expenditure.

Benchmarking is the comparison of an organisation’s performance against standards of performance set in the enterprise’s sector, by other divisions in the same organisation, or by accepted leaders in the particular area being benchmarked. This is achieved by using standard measurements to compare performance with that of other organisations or industry benchmarks. Benchmarking activity can identify problem areas within the IT function, discover gaps in performance, or find where performance is below that of an organisation’s peers. Whilst it is important to identify areas for improvement, it is also valuable to learn how the better performing enterprises achieve improved effectiveness.

Benchmarking can provide a number of benefits. Most notably it can be a catalyst for improved organisational performance and deliverable quality. By identifying gaps in operational effectiveness, as compared with peers or leaders, more innovative ways of working can be enabled. Benchmarking can also lead to a marked improvement in the organisation’s ability to collect and analyse IT performance data as, before comparisons can be made, a good understanding of an organisation’s internal operation is required. If employed correctly, benchmarking can also lead to better collaboration between both internal personnel and other stakeholders.

At the end of 2008, a survey on ‘Business Improvement and Benchmarking’, conducted on behalf of the Global Benchmarking Network, reported a continued rise in benchmarking adoption. The 450 respondents, from over 40 countries, chose Informal Benchmarking as their third most used improvement tool (after Customer Surveys and SWOT Analysis). Additionally, the tools most likely to increase in popularity over the next three years were cited as Performance Benchmarking, Informal Benchmarking, SWOT Analysis, and Best Practice Benchmarking – with over 60% of organisations not currently using these tools likely to use them.

From an IT Service Management (ITSM) best practice perspective, IT Infrastructure Library (ITIL) v3 espouses the benefits of benchmarking within its portfolio of Continual Service Improvement activities. ITIL’s Service Portfolio Management process also recognises that, by understanding the cost structures applied in the provisioning of a service, an organisation is able to benchmark that service cost against other providers. Alternatively, IT financial information, together with service demand and internal capability information, can be used to support decisions as to whether a certain service should be provisioned internally. Finally, for IT organisations wishing to benchmark their service management capabilities against a formal standard, ISO/IEC 20000 provides a framework for both audit and certification.

IT managers need to understand their IT Financial Management processes’ relationships with, and importance to, other IT Service Management processes.

IT Financial Management interacts with most ITSM processes but has particular dependencies upon, and responsibilities to: Service Level Management, Capacity Management, and Configuration Management.

Within Service Level Management, the Service Level Agreement (SLA) details both customer expectations and IT function obligations for the relevant IT Service(s). During the creation of the SLA, the potential costs incurred to deliver against customer requirements play a pivotal role in determining the eventual (agreed) parameters of service delivery. In an IT organisation with mature Financial Management processes, the Service Level Manager will liaise with IT Finance to understand the costs of meeting existing and new business requirements and how charging policies (if in place) can affect customer and user behaviour.

By utilising this information, the Service Level Manager is able to create an SLA that best fits both customer and corporate needs – matching service levels to affordability, and supply to demand with the corporately desired level of efficiency. It is worth noting, however, that whilst finance-enabled Service Level Management allows for greater customer variation to service levels (and the associated benefits) it also places greater demands on IT budgeting, accounting, and charging.

Capacity Management is charged with planning and controlling the IT capacity requirements of an organisation. Cost information is a vital input to this process and without it Capacity Managers are unable to accurately estimate the costs of desired capacity or availability for a given system or IT service – and changes in capacity requirements inevitably lead to changes in costs. Capacity information also influences costs. Unit costs may increase because capacity has to be increased for greater levels of resilience or unit costs may fall as a result of improved infrastructure utilisation, of purchasing newer (better value) technology, of economies of scale, or of increased purchasing power.

Configuration Management is the process of providing a logical model of the IT infrastructure by identifying, controlling, maintaining, and verifying the versions of all configuration items. The configuration information, stored within the Configuration Management Database (CMDB), is used in the majority of IT decision-making processes; this includes financial details derived from the budgeting, accounting, and charging processes. Conversely, given that the aim of IT Financial Management is the effective stewardship of IT assets and resources, it is imperative that information from Configuration Management, and in particular from the CMDB, is readily available to IT Finance.

Republished from

Effective Service Desk and Incident Management Metrics

Under the IT Infrastructure Library (ITIL) v3 Best Practice Framework, the service desk function and incident management process are closely intertwined within the service operations environment, with the efficient and effective operation of both the function and the process vital to the delivery of highly-available IT services. Consequently, their performance needs to be monitored and tightly managed to ensure that IT can deliver against critical success factors such as ‘Quickly Resolve Incidents’, ‘Maintain IT Service Quality’, and ‘Improve Business and IT Productivity’. Effective metrics are key to this and should be considered an IT organisation’s navigational compass on the proverbial journey to IT Service Management (ITSM).

Whilst nearly every organisation has, or has access to, a service desk for the reporting of incidents and logging of service requests, how many service desks are viewed as responsive and customer-focused (by the corporate users of IT services)? From an IT provider perspective, how do their service desk and incident management process perform against business requirements? Are incidents consistently resolved within SLA targets and with the required level of priority? Only a well-thought-out and flexible set of performance metrics can ensure that the service desk and incident management process are delivering value to the business.

As with many other corporate functions, IT management often espouses rhetoric such as “if you don’t measure it, you can’t manage it”, “if you don’t measure it, you can’t improve it”, or “a process is not truly implemented until measured”. Whilst the sentiment is right, implemented metrics often end up existing for the sake of having metrics rather than serving a practical purpose such as supporting process assurance, improvement, and informed decision making. Unfortunately, it is all too easy for IT functions to fall under the misconception that tracking metrics and beating targets is enough, and for the utilisation of inappropriate metrics to adversely affect process and individual performance, causing misalignment with IT and, ultimately, business objectives.

So why does it go so wrong? The list of potential metric pitfalls is long but common factors include taking the easy option – basing metrics on easily accessible data (“what can we measure?” rather than “what should we measure?”) or simply using performance metrics that the corporate ITSM tool(s) can readily deliver. Or the reverse, overcomplicating matters such that it costs more to derive metric information than the benefits realised from its utilisation; potentially compounded by not having the right tool(s) to collect, report, and analyse the metric information. It is also easy to focus on quantity over quality, with too many metrics in play – the average service desk tracks more than twenty metrics – possibly a symptom of the ‘what can we measure’ approach.

Metric suitability and effectiveness is further eroded by being parochial (looking at particular subsets of activities rather than the whole) and by being inwardly focused on IT, rather than business, needs; potentially neglecting the fact that an inappropriate mix of metrics can adversely influence employee behaviour. A good, and oft quoted, example of this is the tension between two common service desk metrics – Average Call Handling Time and First Contact Resolution. Scoring highly against one metric can adversely impact the other, and the utilisation of just one of them (without a balancing measure) for performance measurement can be at best worthless, and at worst dangerous.

With just an Average Call Handling Time focus, service desk agents are encouraged to adopt a ‘quantity rather than quality’ approach – taking as many calls as possible with little emphasis on incident resolution – passing the majority of calls on to level 2 support. With just a First Contact Resolution focus, service desk operatives can be reluctant to pass a call on to level 2 support and can spend an inappropriate amount of time trying to resolve an incident that is probably beyond their level of knowledge and expertise. In both instances, the metric has inappropriately driven employee behaviour at the expense of the user, Service Level Agreement (SLA) targets, and the continuity of business operations. So, when selecting a metric, an IT function should always consider the negative behaviours it might encourage.
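The tension between these two metrics can be sketched numerically. The agents, figures, and the "resolutions per hour" balancing measure below are all invented for illustration; they are not a standard industry formula.

```python
# Two hypothetical service desk agents: one fast but passing most calls on,
# one slower but resolving most calls at first contact.
agents = {
    "agent_a": {"aht_minutes": 2.5, "fcr": 0.20},  # fast, low resolution
    "agent_b": {"aht_minutes": 8.0, "fcr": 0.80},  # slower, high resolution
}

# Judged on either metric alone, each agent "wins" one ranking.
best_by_aht = min(agents, key=lambda a: agents[a]["aht_minutes"])  # agent_a
best_by_fcr = max(agents, key=lambda a: agents[a]["fcr"])          # agent_b

# One possible balancing measure: first-contact resolutions delivered
# per hour of handling time, combining speed and resolution quality.
def resolutions_per_hour(stats):
    calls_per_hour = 60 / stats["aht_minutes"]
    return calls_per_hour * stats["fcr"]

scores = {name: resolutions_per_hour(s) for name, s in agents.items()}
# agent_a: 24 calls/hour * 0.20 = 4.8; agent_b: 7.5 calls/hour * 0.80 = 6.0
```

The combined measure favours agent_b, whom neither single metric would have identified as the stronger performer once both speed and resolution are weighed together.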

Lack of understanding is another cause of metric misery. This can be a lack of understanding in one or more of the following areas: the need for metrics (at both an organisational and employee level), business needs and expectations, or what measured performance actually means in the context of business impact – potentially resulting in metrics that are not used, are inappropriate to the intended recipients, or are just not understood by the recipients. An IT function should not report metrics that do not contribute to management thinking and decision making.

With the above in mind, whilst there is no silver bullet in terms of a basket of service desk and incident management metrics that fits all IT functions, there are good practices that can be used to focus metric selection and utilisation for business benefit. The first is that metrics should be aligned with business requirements, dovetailing into Service Level Agreement (SLA) targets, with the ability to demonstrate both the value that IT adds to the business, and the business impact of improvements in IT delivery. Metrics should also be reported in context – a good example being that 99.9% availability looks great until the reader sees that the 0.1% nonavailability affected a business-critical process during a period of critical business activity.
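The availability example is worth making concrete. Assuming a 24x7 service over a 30-day month (an assumption, as availability windows vary by SLA), the arithmetic is:

```python
# What a headline 99.9% availability figure permits in raw downtime,
# assuming a 24x7 service and a 30-day month.
availability = 0.999
minutes_per_month = 30 * 24 * 60          # 43,200 minutes
downtime_minutes = minutes_per_month * (1 - availability)  # roughly 43 minutes
print(f"{downtime_minutes:.1f} minutes of downtime per month")
```

Forty-odd minutes is trivial if spread across quiet weekends, and very costly indeed if it lands on a business-critical process during period-end, which is exactly the context the raw percentage hides.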

Chosen metrics should not be viewed in isolation. IT functions should understand the correlation between metrics such as First Contact Resolution and Customer Satisfaction or First Contact Resolution and Service Desk Operative Utilisation. Metrics should also be viewed across time periods, with metric trends at least as important as static values, given that a persistently exceeded target when trended may show projected failure in the next six months as performance slowly declines. The core performance metrics can also be supplemented by more internal, trend-based ‘Top 10s’ that facilitate problem management activity such as ‘Top 10 used incident classifications’ or ‘Top 10 applications by incident volume’.

Metrics must provide a launch pad for improvement, with the ability to identify both IT and business opportunities such as improved service quality, reduced costs, increased customer satisfaction, enhanced people capability, or technical innovation. The adoption of industry-standard metrics allows for the benchmarking of internal performance against industry standards or the service desks of other organisations, e.g. cost per call. Finally, IT organisations should not underestimate the value of softer measures, and should appreciate that metrics are never a substitute for ongoing service-based conversations with customers.

Traditional service desk metrics include number of calls received (via all channels), number of calls handled per service desk operative, number of service requests and incidents, number of calls handled within and outside SLA targets, number of tickets resolved during first contact, number of tickets escalated (by cause), average caller waiting times, caller abandonment rates, and customer satisfaction. Traditional incident management metrics include number of incidents, number of incidents resolved within SLA targets (for each level of priority), number of incidents escalated (to each level of support), average time to resolve incidents by priority, number of incidents incorrectly recorded, and number of incidents assigned to the wrong resolution group.

Unfortunately, a metric such as the number of incidents received is not, on its own, a good indicator of service desk and incident management performance. For example, a service desk might be tasked with lowering the volume of incidents received; but in doing a better job of resolving incidents, it might actually increase volumes as more users choose to contact it. Conversely, incident volumes might drop as the service worsens, with users either struggling on or seeking resolution through alternative channels.

In Butler Group’s opinion, service desk and incident management metrics should focus on how the service desk and level 2 and 3 support add value to the business – through the minimisation of the impact of user (and business) productivity-affecting incidents, at an acceptable and ideally optimal cost. Metrics should also reflect the entire process, not just a subset of activity and, when it comes to the number of service desk and incident management metrics, less is definitely more.

So Butler Group recommends a small basket of weighted metrics that have been agreed with key business stakeholders. As stated earlier, there is no magical out-of-the-box set of metrics that applies to all IT functions. There is, however, a common core that can deliver greater insight, and consequently greater value, to both IT and the parent business. It should be no surprise that these relate to the traditional business goal of achieving the best possible quality at the lowest possible cost.

The first metric is Customer Satisfaction, which is still probably the best indicator of the quality of IT support – both on a transactional and a periodic review basis. The next metrics are key drivers of Customer Satisfaction – First Contact Resolution and Average Speed of Answer – which are also good indicators of IT’s ability to maximise employee and business productivity. In terms of Service Desk efficiency, there are two key metrics – Service Desk Operative Utilisation and Cost Per Call – with the former strongly influencing the latter. The final metric – Service Desk Operative Satisfaction – is easily forgotten or neglected by IT functions. But without it, the persistent drive to deliver higher levels of Customer Satisfaction and Service Desk efficiency may take a heavy toll on Service Desk staff, resulting in higher levels of sickness, absenteeism, and ultimately turnover. This has a knock-on effect: new, less experienced staff will probably adversely affect performance against other key metrics such as Customer Satisfaction, First Contact Resolution, and Cost Per Call.
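One way such a basket might be rolled up for reporting is as a single weighted index. The weights, targets, and actuals below are entirely hypothetical, and in practice would be agreed with key business stakeholders rather than chosen by IT alone.

```python
# A hypothetical weighted basket of the six metrics discussed above.
# Each entry: (weight, agreed target, measured actual); actuals are
# expressed so that higher is better, and the weights sum to 1.0.
basket = {
    "customer_satisfaction":    (0.30, 4.20, 4.40),  # 1-5 survey score
    "first_contact_resolution": (0.20, 0.70, 0.68),  # fraction resolved
    "average_speed_of_answer":  (0.15, 0.90, 0.93),  # fraction answered in 20s
    "operative_utilisation":    (0.10, 0.75, 0.80),  # fraction of time on calls
    "cost_per_call_index":      (0.15, 1.00, 1.05),  # target cost / actual cost
    "operative_satisfaction":   (0.10, 4.00, 3.60),  # 1-5 survey score
}

# Weighted performance index: 1.0 means every metric exactly on target;
# above 1.0 means targets exceeded overall, below 1.0 means underperformance.
index = sum(w * (actual / target) for w, target, actual in basket.values())
print(f"Weighted performance index: {index:.3f}")
```

Note how the roll-up still surfaces the warning in the text: the headline index sits above 1.0 even though Operative Satisfaction is below target, so the individual metrics must always be reviewed alongside the composite.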

To summarise, in implementing service desk and incident management metrics, an IT function should firstly identify the users of the metrics and their purpose, identify and agree the desired metrics, and set up an appropriate measurement system that allows performance to be easily monitored. The volume and type of metrics used will vary by organisation, but a focus on quality, rather than quantity, of metrics is recommended. It is then only through the continual review of such metrics that an IT organisation can demonstrate business alignment and value, and continue to improve its operation – tweaking processes, filling human capability gaps, and improving inter-team communications and co-operation. This ongoing review should also encompass the metrics themselves, as there will be occasions when an IT function should change not only its targets but also the metrics it uses.

Republished from