Benchmarking data reference
The Apptio Benchmarking product is a self-service SaaS application that you can use to perform peer benchmark cost comparisons. The Benchmarking application’s payload includes three types of metrics; this section describes their purpose. Benchmarking addresses the what, but not the why, how, or what’s next. You can take your organization’s costs, compare them with the benchmark metrics, and research and rationalize your actuals to determine the why and how. You can then use this understanding to create actionable next steps and outcomes and to deliver value. You can also work with third-party consultants to help interpret the comparisons and provide advice and recommendations.
Industry Benchmark metrics
Industry metrics enable you to compare overall IT Spend with industry peers.
| Type | Description |
|---|---|
| Target audience | IT Leadership |
| Purpose | Use high-level industry metrics to analyze IT spend and to help make strategic decisions that are informed by these metrics, but not driven solely by the benchmark comparison. IT investment levels vary year to year, and the benchmark provides a reference point. It is more important to understand the drivers of actual costs in comparison with the benchmark metric than it is to evaluate the metric as good or bad based on being above or below the benchmark. |
| Usage | Semi-annual reviews |
| Guidance | These metrics should be used together, since no single metric tells the complete story. Understanding and analyzing the relationships between these metrics and other key business measures helps drive decisions. For example, IT Spend per Employee can be impacted by several justifiable drivers, so an organization’s IT Spend per Employee may be higher than the benchmark for good reasons. A decision on lowering this metric needs to be made within that context. |
Table 1: Industry Benchmark Intent
There are five high-level metrics on organization financials that help you compare yourself with peers.
| Metric | How to interpret – outcomes and questions to consider |
|---|---|
| IT Spend as % of Org Revenue | |
| IT Spend as % of Org OpEx | |
| IT Operational Costs % of Spend | |
| Percent of FTEs in IT | |
| IT Spend per Employee | |

Table 2: Industry Metrics Outcomes
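These ratios are straightforward to compute once the underlying financials are known. The following is a minimal sketch of how they might be derived; the field names and figures are hypothetical and are not Apptio’s exact definitions.

```python
# Minimal sketch of the five industry ratios with hypothetical inputs.
# Field names and formulas are illustrative, not Apptio's exact definitions.

org = {
    "revenue": 2_000_000_000,   # total organization revenue
    "opex": 1_200_000_000,      # total organization operating expense
    "employees": 10_000,        # total organization headcount
    "it_spend": 80_000_000,     # total IT spend
    "it_opex": 60_000_000,      # IT operational (run) costs
    "it_ftes": 500,             # FTEs working in IT
}

metrics = {
    "IT Spend as % of Org Revenue": 100 * org["it_spend"] / org["revenue"],
    "IT Spend as % of Org OpEx": 100 * org["it_spend"] / org["opex"],
    "IT Operational Costs % of Spend": 100 * org["it_opex"] / org["it_spend"],
    "Percent of FTEs in IT": 100 * org["it_ftes"] / org["employees"],
    "IT Spend per Employee": org["it_spend"] / org["employees"],
}

for name, value in metrics.items():
    unit = "" if "per Employee" in name else "%"
    print(f"{name}: {value:,.1f}{unit}")
```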
Apptio uses Rubin Worldwide (Learn more), a leading provider of technology economics research and benchmarking data, to drive these benchmarks.
IT OpEx metrics
The IT OpEx benchmark metrics provide high-level directional guidance on the enterprise’s OpEx characteristics.
| Type | Description |
|---|---|
| Target audience | IT Leadership |
| Purpose | Use the IT OpEx metrics to understand the organization’s spend profile at both the financial and IT infrastructure levels. Companies have investment strategies that determine the extent of their spend on specific aspects of IT. These metrics call out those spend profiles and let you evaluate your perspective in comparison with the benchmark profiles, which helps drive strategic decisions on continuing or changing the profile. |
| Usage | Semi-annual reviews |
| Guidance | These metrics are the next level of detail on IT Spend (after the initial IT Spend metrics evaluation). The two spend distributions, cost pool and IT tower, are related and should be used in conjunction. In the initial use phase, use the IT OpEx metrics to help validate spend characteristics at both the financial and infrastructure levels. Doing so helps catch inconsistencies, such as a high percentage of spend on the Data Center tower when the Facilities and Power percentage of spend is low. Use the archetype spend characteristics to check your organization’s perspective of its own spend profile. For example, if a company thinks of itself as a heavily outsourced company, it can validate its spend profile against the typical outsourced benchmark metrics. Subsequent uses of these metrics help track progress and drive and track decisions on spend characteristics. For example, if a company decides it needs to invest more in-house and reduce outsourced spend, the IT OpEx metrics can help monitor these spend distributions. |
Table 3: IT OpEx Intent
The metrics provided include:
| Metric | How to interpret – outcomes and questions to consider |
|---|---|
| OpEx by Cost Pool Distribution | |
| OpEx by Resource Tower Distribution | |
| Spend Archetypes | |

Table 4: IT OpEx Outcomes
- OpEx by Cost Pool Distribution – These metrics provide characteristics at the financial layer by comparing the enterprise’s OpEx with the distribution of OpEx split by Cost Pool percentages, based on Apptio Community Data (ACD) samples. A small illustrative sketch of this kind of comparison follows the archetype list below.
- OpEx by Resource Tower Distribution – These metrics provide characteristics at the infrastructure layer by comparing the enterprise’s OpEx with the distribution of OpEx split by Resource Tower percentages, based on Apptio Community Data (ACD) samples.
- Spend Archetypes – There are four cost pool profiles driven by their dominant spend characteristics at the financial layer. An enterprise can compare its cost pool spend characteristics with these four archetypes to determine whether its spend pattern is aligned with one of them and whether it matches its perspective on enterprise spend. The archetypes are also described and used in the IT Economics report.

These archetypes are derived from the Apptio Community Data (ACD). ACD is anonymized, aggregated data from Apptio customers who have opted in. The four archetypes are:
- Vendor-centric – These companies tend to spend more on Outside Services (including consultants, managed service providers, and cloud service providers) instead of on Labor or Technology (Hardware or Software). Outside Services is higher; Internal Labor, External Labor, Hardware, and Software are lower.
- People-centric – These companies favor spending more on Internal Labor and less on External Labor and Software. Internal Labor is higher; External Labor and Software are lower.
- Technology-centric – In this category, we found two sub-archetypes of companies that favor spending on either Hardware or Software:
  - Hardware-centric – Higher Hardware spend is associated with lower Labor, Outside Services, and Software spend. Hardware is higher; Internal Labor, External Labor, Outside Services, and Software are lower.
  - Software-centric – Higher Software and Internal Labor spend displaces External Labor and Outside Services spend. Software and Internal Labor are higher; External Labor and Outside Services are lower.
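To make the distribution comparison concrete, here is a small illustrative sketch. The cost pool names follow the archetype descriptions above, but the dollar amounts and benchmark percentages are placeholders, not actual ACD figures.

```python
# Illustrative sketch: compare an enterprise's OpEx-by-cost-pool distribution
# with a benchmark distribution. All numbers below are placeholders, not
# actual Apptio Community Data (ACD) values.

enterprise_opex = {          # OpEx by cost pool, in dollars
    "Internal Labor": 22_000_000,
    "External Labor": 6_000_000,
    "Outside Services": 14_000_000,
    "Hardware": 8_000_000,
    "Software": 10_000_000,
}

benchmark_pct = {            # hypothetical peer distribution, in percent
    "Internal Labor": 35.0,
    "External Labor": 12.0,
    "Outside Services": 20.0,
    "Hardware": 15.0,
    "Software": 18.0,
}

total = sum(enterprise_opex.values())
for pool, cost in enterprise_opex.items():
    pct = 100 * cost / total                 # enterprise share of OpEx
    delta = pct - benchmark_pct[pool]        # gap versus the peer profile
    direction = "above" if delta > 0 else "below"
    print(f"{pool}: {pct:.1f}% of OpEx ({abs(delta):.1f} pts {direction} benchmark)")
```

The same approach applies to the Resource Tower distribution, and the resulting profile can be compared against the four archetype patterns described above.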
Infrastructure Benchmarks
This category includes peer benchmark metrics at the tower and sub-tower level. Customers can compare their tower and sub-tower unit costs and FTE efficiencies.
| Type | Description |
|---|---|
| Target audience | IT Leadership, I&O, IT Finance, TBMO |
| Purpose | |
| Usage | |
| Guidance | These metrics focus on spend characteristics per infrastructure tower. They incorporate infrastructure volumes to compute sub-tower unit costs from the total sub-tower costs in Cost Transparency, giving an apples-to-apples comparison with peer benchmark unit costs. The organization’s IT cost and volume need to be assigned and allocated to the sub-tower level. The FTE Efficiency metrics provide a view of the costs under IT management for the sub-tower. This is useful because labor is the dominant cost for infrastructure; it helps you understand and rationalize IT coverage of the tower. For example, a company might have low FTE Efficiency that is justifiable because the IT strategy is to provide high-quality support. The IT OpEx metrics provide across-the-board comparators, while the Infrastructure metrics get into tower- and sub-tower-specific analysis. Typically, the Infrastructure benchmarks are used after the IT OpEx analysis to review infrastructure-specific costs. |
Table 5: Infrastructure Intent
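As an informal illustration of the arithmetic described in the guidance above, a sub-tower unit cost is the total sub-tower cost divided by its volume. The figures, field names, and the simple cost-per-FTE view below are assumptions for illustration, not the product’s exact calculation; in the product, Cost Transparency supplies the actual costs and volumes.

```python
# Illustrative sketch of sub-tower unit cost and a naive FTE-coverage view.
# Figures and field names are hypothetical.

subtowers = [
    # (tower, sub-tower, total cost, unit of measure, volume, FTEs supporting it)
    ("Compute", "Windows", 4_800_000, "Logical/Physical Servers", 1_200, 12),
    ("Storage", "Tier 1",  2_100_000, "Installed TBs",              700,  5),
]

for tower, sub, cost, uom, volume, ftes in subtowers:
    unit_cost = cost / volume       # e.g. cost per server or per installed TB
    cost_per_fte = cost / ftes      # one rough way to look at FTE coverage
    print(f"{tower}/{sub}: {unit_cost:,.0f} per {uom}, "
          f"{cost_per_fte:,.0f} of cost under management per FTE")
```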
The peer benchmark data is from ISG, a leading technology insights, market intelligence, and advisory services organization (the storage tower metrics are the exception and come from Rubin Worldwide).
The metrics included are:
| IT Tower | IT Sub Tower | Unit of Measure | Unit Cost | Cost by Cost Pool | FTE Efficiency |
|---|---|---|---|---|---|
| Compute | Windows | Logical/Physical Servers | √ | √ | √ |
| | Linux | Logical/Physical Servers | √ | √ | √ |
| | Unix | Logical/Physical Servers | √ | √ | √ |
| | Mainframe | MIPS | √ | √ | √ |
| Storage (see NOTE below) | Tier 1 | Installed TBs | √ | √ | |
| | Tier 2 | Installed TBs | √ | √ | |
| | Tier 3 | Installed TBs | √ | √ | |
| | Tier 4 | Installed TBs | √ | √ | |
| Network | LAN | Active LAN Ports | √ | √ | √ |
| | WAN | End User Connected Devices | √ | √ | √ |
| | Voice | Handset | √ | √ | √ |
| End User | Workspace (PC) | Desktops, Laptops & Thin Clients | √ | √ | |
| | Service Desk | Contacts | √ | √ | √ |
| | Communications | IT Users | √ | √ | |
| | Output | Millions of Images | √ | √ | √ |
| IT Management | | IT Users | √ | √ | √ |
| | | IT Users | √ | √ | |

Table 6: Infrastructure Benchmark Metrics
NOTES
- For Storage, the Cost by Cost Pool distribution is provided (by the benchmark provider) at the overall storage level. It is then applied to each of the 4 tiers.
- The above list is for TBM Taxonomy v1.0.
Using these metrics you can undertake the following:
| Usage | How to interpret – outcomes and questions to consider |
|---|---|
| Validation – Align spend to the TBM Taxonomy | These metrics require aligning customer data to the right TBM Taxonomy buckets and checking the quality of allocations. This helps establish and maintain trust in IT actual costs (learn more). It is common for users to start with incomplete infrastructure data. Further, these metrics introduce and incorporate volume into the Cost Transparency model, which drives a deeper understanding of cost drivers. |
| Comparison – IT Spend to Infrastructure Benchmarks | Benchmarks address the what, but not the why, how, or what’s next. A typical variance (comparison) sequence is to compare your unit costs and FTE efficiencies with the peer benchmarks, then research and rationalize the variances to determine the why and how. |
| Tracking – Unit Cost and FTE Efficiency | Benchmark metrics are reference points; identify and set incremental, achievable targets that are informed by the benchmarks. The decisions made from the Comparison analysis can lead to incremental improvements and potentially larger initiatives (identified below, and typically in conjunction with leadership reviews). Set informed fiscal targets: based on actions identified using peer comparisons, set targets for tracking progress on unit cost and FTE efficiency. This tracking can be done over time with live, up-to-date actuals and peer benchmark data (benchmark data is refreshed every six months) to get up-to-date peer comparisons. |
| Review – Leadership review on progress | Use the metrics above and their tracking toward targets in periodic reviews with the leadership team. |
| Initiative – Identify and drive initiatives | The reviews drive strategic decisions on IT investments and trigger initiatives. The impact of these initiatives can be tracked by viewing the impact on cost using the guidelines above. |
Table 7: Infrastructure Outcomes
For each of these metrics, the peer benchmark data provider ISG provides a mean along with adjusters in three independent dimensions:
- Scale – Adjusters based on unit-of-measure volumes. This is the primary influencer on infrastructure metrics. The ISG scale model for Unit Cost is a step function of unit cost over the volume of units, applicable for a specified volume range; the steps run from a high (cost) threshold down to a low (cost) threshold.
- Region – Adjusters based on region. ISG provides adjusters for the regions NA, EMEA, and APAC; the region is defined as the location of the organization’s headquarters.
- Industry – Adjusters based on industry buckets. ISG provides adjusters for the following buckets: Banking, Financial Services and Insurance (BFSI), Energy, Manufacturing, Communications and Media and Entertainment (MCM), Tech, and Other.
In the Benchmarking product (refer to Apptio, Infrastructure Benchmarking, 2015), the infrastructure peer metrics can be calculated by viewing the distribution and selecting the adjusters. Typically, the Scale adjuster is used because it is the most significant contributor.
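How the published mean and the adjusters combine is not spelled out here. The sketch below assumes a simple multiplicative combination with placeholder adjuster values; it is an illustration only, not ISG’s published methodology.

```python
# Rough sketch of applying scale, region, and industry adjusters to a mean
# benchmark unit cost. The multiplicative combination and all values are
# assumptions for illustration, not ISG's published methodology.

mean_unit_cost = 4_000          # hypothetical peer mean, e.g. per server per year

scale_adjusters = [             # step function over volume ranges (placeholder)
    ((0, 500), 1.20),           # small estates pay a premium
    ((500, 2_000), 1.00),
    ((2_000, 10_000), 0.85),    # larger estates benefit from scale
]
region_adjusters = {"NA": 1.00, "EMEA": 0.95, "APAC": 0.90}
industry_adjusters = {"BFSI": 1.10, "Energy": 1.00, "Manufacturing": 0.95,
                      "MCM": 1.00, "Tech": 1.05, "Other": 1.00}

def scale_adjuster(volume: int) -> float:
    """Look up the step-function adjuster for a given unit volume."""
    for (low, high), factor in scale_adjusters:
        if low <= volume < high:
            return factor
    raise ValueError("volume outside the range the model covers")

volume, region, industry = 1_200, "EMEA", "BFSI"
adjusted = (mean_unit_cost * scale_adjuster(volume)
            * region_adjusters[region] * industry_adjusters[industry])
print(f"Adjusted benchmark unit cost: {adjusted:,.0f}")
```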
The Benchmarking product video presentation provides a description of the Benchmarking components and their value, along with the list of metrics and their data providers.