Technical Blog Post
You say metrics, I say OSLC
You need to keep your IT operations running smoothly, and you want to reduce the time it takes to resolve problems. To achieve that, you need access to detailed, real-time information about your IT resources from all possible sources (asset management, configuration management, monitoring, and so on). So great, your vendors and IT teams have implemented the scenario "View assets and CI details via UI preview". Life is good! But wait! You have data coming from all sorts of providers. Does your dashboard now have to process incoming traffic in ten million different formats? And what happens if you add another tool into the mix?
In addition to being able to retrieve data from multiple sources, you want a common way for that data to be represented, one that other vendors can easily adopt. This is the issue being worked on by the members of the OSLC Performance Monitoring work group. The goal of the work group is, to quote directly from the source, to "define a set of resources, formats and RESTful services that may be used by lifecycle tools such as operations dashboards, change management tools, asset management tools and others to obtain performance and availability metrics for resources".
Here's an example of how the Performance Monitoring specification describes a metric (using Turtle notation):
@prefix pm: <http://open-services.net/ns/perfmon#> .
@prefix oslc: <http://open-services.net/ns/core#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex: <http://example.org#> .
@prefix ems: <http://open-services.net/ns/ems#> .
@prefix crtv: <http://open-services.net/ns/crtv#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix bp: <http://open-services.net/ns/basicProfile#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix dbp: <http://dbpedia.org/resource/>.
@prefix qudt: <http://qudt.org/vocab/unit#>.
@base <http://perfmon-provider.example.org/> .
<rec001>
  a pm:PerformanceMonitoringRecord ;
  ems:observes <rec001#cpuutil10> ;
  ems:observes <rec001#avgloginfailsperminute2.4> ;
  dcterms:isPartOf <recCS001> ;
  ex:aLocalTime "2002-05-30T09:30:10.5"^^xsd:dateTime .

<rec001#cpuutil10>
  a ems:Measure ; # rdf:type
  dcterms:title "CPU Utilization" ;
  ems:metric pm:CpuUsed ;
  ems:unitOfMeasure dbp:Percentage ;
  ems:numericValue 10 .

<rec001#avgloginfailsperminute2.4>
  a ems:Measure ; # rdf:type
  dcterms:title "Average Login Request Failures Per Minute" ;
  ems:metric pm:AvgLoginRequestFailures ;
  ems:unitOfMeasure pm:PerMinute ;
  ems:numericValue 2.4 .
A reader can readily deduce that the example is showing that CPU utilization is at 10% of capacity and that we are experiencing 2.4 login failures per minute on average. But let's notice how easy it is to programmatically process this payload:
- ems:metric, ems:unitOfMeasure, and ems:numericValue are required properties, so we have a simple, standard way of describing the type and value of each metric.
- a pm:PerformanceMonitoringRecord and ems:observes: all metrics about an object are linked together in one resource.
- dcterms:isPartOf: all metrics are about a particular object (identified as <recCS001>), but you don't need to know any more than that about it. You can associate your metrics with any resource, even a resource that will be defined in the future.
And to show how easy it is to extend the specification:
- pm:PerMinute, dbp:Percentage: you can use resource definitions created by the Performance Monitoring work group, from another vocabulary (dbp:), or even from your own namespace.
I hope I've piqued your interest in how we are using Linked Data and OSLC specifications to solve your use cases. Join us in the effort at http://open-services.net.