Services
Instana traces and analyzes every request. Services and endpoints are automatically discovered, and the relationships between services, endpoints, and your infrastructure are automatically correlated and stored in our Dynamic Graph.
Based on the data that is collected from tracers and sensors, KPIs are calculated for calls, latency, and erroneous calls. KPIs help you discover the health of every individual service and then the health of your entire infrastructure.
Services are a part of application monitoring and provide a logical view of your system. Services are derived from infrastructure entities such as hosts, containers, and processes. Incoming calls are correlated to infrastructure entities and enriched with infrastructure data, for example, the Kubernetes pod label or the Spring Boot application name. After this infrastructure-linking step, a service mapping step maps the enriched calls and generates a service name per call based on a set of rules. Instana comes with an extensive set of predefined rules to automatically generate the best possible service name for you. To fine-tune the service mapping, you can create your own custom rules; see Customize service mapping.
- Predefined rules
- Customize service mapping
- Manual service configuration (Experimental)
- Unspecified services
- Service flow map
- FAQ
Predefined rules
The following rules are evaluated in order from top to bottom. When a rule matches, the corresponding service is created.
Rule | Tags |
---|---|
Custom service rule | Tags defined by the user (see custom service mapping) |
User defined service name from environment variable INSTANA_SERVICE_NAME on process | {instana.service.name} |
User defined service name via HTTP Header X-Instana-Service | {call.http.header.x-instana-service} |
User defined service name in sdk span data (set service tag) | {call.tag.service} |
OpenTelemetry defined service name | {otel_resource.service.name} |
OpenTelemetry service name of the remote service | {otel.peer.service} |
Jaeger defined service name | {jaeger.service.name} |
Zipkin defined service name | {zipkin.service.name} |
IBM MQ Queue Manager | {ibmmq.queuemanager.name} |
ACE Server Name | {ace.broker.name} :{ace.server.name} |
Consul cluster name | Consul@{consul.cluster.name} |
Cassandra cluster name | {cassandra.cluster.name} |
ElasticSearch cluster name | {elasticsearch.cluster.name} |
Couchbase cluster name | {couchbase.cluster.name} |
MongoDB replica set name | {mongo.replicaSetName} |
Kafka cluster name | {kafka.cluster.name} |
ClickHouse cluster name | Clickhouse@{clickhouse.cluster.name} |
Function as a Service span with functionname | {faas.functionname} |
AWS EC cluster id | {aws.ec.cluster.id} |
AWS RDS cluster name | {aws.rds.cluster.name} |
AWS service type DynamoDB | DynamoDB |
AWS service type S3 | AWS S3 |
Google Cloud service type Storage | Google Cloud Storage |
Azure App Service name | {azure.appservice.name} |
AWS ECS container name | {container.label.com.amazonaws.ecs.container-name} |
AWS ECS task family | {container.label.com.amazonaws.ecs.task-definition-family} |
Kubernetes container name | {kubernetes.container.name} |
Cloud Foundry Application name | {cloudfoundry.application.name} |
Docker Swarm service name | {docker.label.com.docker.swarm.service.name} |
Marathon application id (parsed) | {marathon.app.id-1} |
Nomad task name | {nomad.task.name} |
Rancher 1 project service name | {container.label.io.rancher.project_service.name} |
Container image name (parsed) | {container.image.name-1} |
Application Container (JBoss, Tomcat, WebLogic, WebSphere, MSIIS) deployment name with file extension (parsed) | {call.deployment.name-1} |
Application Container (JBoss, Tomcat, WebLogic, WebSphere, MSIIS) deployment name with no file extensions (parsed) | {call.deployment.name-1} |
Dropwizard name | {dropwizard.name} |
Spring Boot name | {springboot.name} |
JBoss name | {jboss.server.name} {jboss.node.name?} |
WebSphere name | {websphere.server.name} |
WebLogic name | {weblogic.server.name} |
Redis port | Redis@{redis.port} on {host.name} |
Aerospike port | Aerospike@{aerospike.port} on {aerospike.host} |
Neo4j port | Neo4j@{neo4j.port} on {host.name} |
Memcached port | Memcached@{memcached.port} on {host.name} |
Varnish port | Varnish@{varnish.port} on {host.name} |
Clickhouse port | Clickhouse@{clickhouse.httpPort} on {host.name} |
MongoDB database name | MongoDB@{mongo.port} on {host.name} |
Zookeeper port | Zookeeper@{zookeeper.clientPort} on {host.name} |
Solr version | Solr-{solr.version} on {host.name} |
Solr | Solr on {host.name} |
Solr Zookeeper | Solr Zookeeper-{solr.zk_host} |
PostgreSQL port | PostgreSQL@{postgresql.port} on {host.name} |
CockroachDB port | CockroachDB@{cockroachdb.port} on {host.name} |
MySQL port | MySQL@{mysql.port} on {host.name} |
OracleDB port | OracleDB@{oracledb.port} on {host.name} |
MSSQL database SID | MSSQL@{mssql.instance} |
MariaDB port | MariaDB@{mariadb.port} on {host.name} |
Db2 port | Db2@{db2.port} on {host.name} |
Kafka version | Kafka-{kafka.version} on {host.name} |
ActiveMQ broker name | {activemq.broker.name} |
RabbitMQ version | RabbitMQ-{rabbitmq.version} on {host.name} |
RocketMQ version | RocketMQ-{rocketmq.version} on {host.name} |
JVM name from jar file | {jvm.app.name-1} |
JVM name | {jvm.app.name} |
Node.js application with host environment | {nodejs.app.name} |
Python snapshot name | {python.app.name} |
Ruby name | {ruby.app.name} |
Go name | {go.app.name} |
PHP using host-header when available (parsed) | {call.http.host-1} |
PHP | PHP |
PHP for PHP-FPM worker-pool | PHP |
CLR name | {clr.app.name} |
.Net Core name | {netcore.app.name} |
Haskell name | {haskell.app.name} |
Crystal name | {crystal.app.name} |
GCP Cloud Run Service | {gcp.cloudrun.service.name} |
Cloud infrastructure identifier | {cloud.id} |
Shell span | Spawned processes |
RPC service span using object (parsed, with WSDL namespaces) | {call.rpc.object-1} |
RPC service span using object | {call.rpc.object} |
Database FTP span | {call.database.connection} |
Database cache-type span with connection (parsed) | {call.database.type} @ {call.database.connection-1} |
Database cache-type span | {call.database.type} |
MongoDB database span (parsed) | {call.database.schema-1} |
ElasticSearch database span (parsed) | {call.database.connection-1} |
Couchbase / Redis database span (parsed) | {call.database.connection-1} |
AWS S3 database span | AWS S3 |
DynamoDB database span | DynamoDB |
Google Cloud Storage span | Google Cloud Storage |
Azure Storage span | Azure Storage |
CosmosDB database span (parsed) | {call.database.schema-1} |
Generic database span | {call.database.schema} |
Generic database span, using schema from connection (parsed) | {call.database.connection-1} |
Generic database span, using host from connection (parsed) | {call.database.connection-1} |
Generic database span, fallback type | {call.database.type} |
Messaging span, for temporary queues | {call.messaging.type} |
IBM MQ span using address | {call.messaging.address-1} |
Apache RocketMQ span using address | {call.messaging.address-1} |
IBM ACE span using address | {call.messaging.address-1} |
JMS span using address (parsed) | {call.messaging.address-1} |
Messaging span using address (full) | {call.messaging.address} |
Messaging span, no address | {call.messaging.type} |
z/OS Connect span | {call.zcee.serviceName} |
HTTP span with host (parsed) | {call.http.host-1} |
HTTP span with URL (parsed) | {call.http.url-1} |
PHP script span (such as CLI) | {call.php.script.name} |
SDK span | SDK |
RPC (RMI) span, no object | RMI |
GraphQL Subscriber span | GraphQL Subscribers |
Event Trigger | {call.event.trigger} |
Cloud Metadata Service | Cloud Metadata Service |
A service can be seen as a logical component that provides a public API to the rest of the system, where the API is made up of the service's endpoints. A service is monitored, and it both makes and receives calls. A request to a service results in a single call to a particular endpoint.
Services can be considered in isolation or through the lens of an application perspective. Services often map to one 'unit of deployment', such as a package or container. If multiple instances of such a unit, for example, multiple containers, operate at the same time, they all map to the same logical service.
Service types are assigned automatically through inheritance from endpoints. For example, if a service has both HTTP and BATCH endpoints, then it is assigned both HTTP and BATCH types. KPIs (Calls, Errors, Latency) for services display an aggregate of all calls, regardless of type.
Customize service mapping
Four methods for customizing the default service mapping are available.
- Create a custom service rule. Even if services are configured, for example, with the INSTANA_SERVICE_NAME environment variable, this rule still takes precedence over all other rules.
- Apply the service.name tag.
- Specify the INSTANA_SERVICE_NAME environment variable. The limitations are documented in Dynamic Language Sensors.
- Specify the HTTP header X-Instana-Service.
Create a custom service rule
- From the sidebar, click Applications and select the Services tab.
- Click Configure Services.
- Click Add Custom Service Rule.
- From the drop-down list, select the tags.
One reason for using a custom service rule is to use existing meta information of your infrastructure components. For example, if you label your Docker containers with domain-specific information such as com.acme.service-name:myservice, you can map services from this label by selecting the docker.label and com.acme.service-name tags. All calls that pass through a container with the label com.acme.service-name are then associated with a service that is named by that value, for example, myservice. There are multiple tags available to create custom service mappings.
You can also add multiple keys for service mapping; multiple tags are concatenated with a dash, and all keys need to match. For example, if you want to separate your staging services based on the host zone, you can add two keys: host.zone + docker.label:com.acme.service-name. Your services are then named with the host zone value followed by the Docker label value, which separates out the two services, for example, prod-myservice and dev-myservice.
By using the special tag service.default_name, a custom service rule can also extend the default service rules with more tags. For example, to split the automatically created services by host zones, create a custom service rule by using the tags service.default_name and host.zone.
Setting the service name globally
You can set a service name for all the calls associated with a service. The service name can be set only for the processes that are monitored by an Instana tracer.
To set the service name for all the incoming and outgoing calls from a service, set the INSTANA_SERVICE_NAME environment variable on the process.
When you set the service name, the name of the process is not changed. To rename the process, use the INSTANA_PROCESS_NAME environment variable.
For some tracers, you can also set a global service name by using in-code configuration.
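As an illustration, the following sketch shows how such an in-code configuration might look for the Instana Node.js tracer. The serviceName option and the import of @instana/collector are assumptions; verify them against the documentation of the tracer for your language.

```typescript
// Hedged sketch: setting a global service name in code for the Instana
// Node.js tracer. The `serviceName` option is an assumption based on the
// @instana/collector configuration; other tracers use their own mechanisms.
// The collector must be initialized before any other modules are loaded.
import instana = require('@instana/collector');

instana({
  serviceName: 'checkout-service', // hypothetical service name
});
```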
For more information about the environment variables recognized by Instana, see Environment Variables.
Setting the service name per call
You can set a service name for every call. When you change the service name for a call, the source or destination service of the target entry or exit span is changed.
To change the service name for a call, add a custom tag (custom.tags.service) to the target span by using the tracer SDK for your language.
Setting the service name in OpenTelemetry or OpenTracing
To set a service name in OpenTelemetry or OpenTracing, use the setTag function.
Example: span.setTag('service', 'my-service')
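As a minimal sketch, the following TypeScript example expands the setTag call above by using the OpenTracing JavaScript API. It assumes that an Instana-compatible tracer is registered as the global tracer; the operation name and service name are placeholders.

```typescript
// Minimal sketch: setting the service name on a single span with OpenTracing,
// as described above. Assumes an Instana-compatible tracer was registered
// with initGlobalTracer() during application startup.
import { globalTracer } from 'opentracing';

const span = globalTracer().startSpan('process-order'); // placeholder operation name
span.setTag('service', 'my-service'); // service name picked up by Instana
// traced work happens here
span.finish();
```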
OpenTracing is archived; use OpenTelemetry instead.
Specify the X-Instana-Service HTTP header
By setting the X-Instana-Service HTTP header on a call, the destination (the service that receives the HTTP request) is tagged with the value that is provided in the header.
Note: The X-Instana-Service HTTP header is not automatically collected. For it to work, the Instana agent must be configured; see "Capturing custom HTTP headers".
Manual service configuration (Experimental)
In some cases, the automatic service mapping does not work out of the box for some calls. With manual service configuration, you can identify such calls with a tag filter expression that uses call tags (for example, call.http.host or call.database.connection), and then:
- either map them to an unmonitored service with a custom service name,
- or link them to an existing database or messaging service that is created from a monitored infrastructure entity.
Manual service configurations can be added, updated, or deleted through the API.
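As an illustration, the following sketch posts such a configuration to the Instana REST API. The endpoint path, tenant URL, and token handling are assumptions and must be checked against the Instana API reference for manual service configuration; the config argument is one of the JSON configuration objects shown in the sections below.

```typescript
// Hedged sketch: creating a manual service configuration through the Instana
// REST API. The endpoint path below is an assumption; verify it in the API
// reference. `config` is one of the JSON configuration objects shown below.
const INSTANA_BASE_URL = 'https://your-unit.instana.io'; // placeholder tenant URL
const API_TOKEN = process.env.INSTANA_API_TOKEN ?? '';   // placeholder API token

async function addManualServiceConfig(config: object): Promise<void> {
  const response = await fetch(
    `${INSTANA_BASE_URL}/api/application-monitoring/settings/service-mapping`, // assumed path
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `apiToken ${API_TOKEN}`,
      },
      body: JSON.stringify(config),
    },
  );
  if (!response.ok) {
    throw new Error(`Manual service configuration request failed: ${response.status}`);
  }
}
```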
Map to unmonitored service with custom name
Calls to a service not monitored by Instana are mapped to a service by using call tags.
For example, HTTP calls to a third-party API such as www.google.com are mapped to the service www.google.com based on the HTTP host header (the call.http.host tag). Calls to www.google.fr or www.google.de are mapped to different services because the HTTP host headers are different.
You can map all the calls to different domains to a service named "Google" by adding the following manual service configuration:
{
"tagFilterExpression": {
"type": "TAG_FILTER",
"name": "call.http.host",
"value": "www.google.",
"operator": "STARTS_WITH",
"entity": "NOT_APPLICABLE"
},
"unmonitoredServiceName": "Google",
"description": "Map calls to different google domains to Google service",
"enabled": true
}
Link calls to an existing database or messaging service that is created from a monitored infrastructure entity
If a database or a messaging system is monitored by Instana, calls to that database or messaging system are mapped to a service based on the infrastructure tag (for example, MySQL@3306 on demo-host) thanks to infrastructure correlation.
However, due to the presence of a proxy or a load balancer, the infrastructure correlation sometimes does not work because Instana does not monitor or instrument that component and does not know how requests are routed. As a consequence, calls are mapped to a service based on the call tags (for example, call.database.schema, call.database.connection, or call.messaging.address) as if the destination service were not monitored.
In this case, you find another service, created from the infrastructure entity, that has no calls. You can use the following manual service configuration to map calls to that existing service:
{
"tagFilterExpression": {
"type": "TAG_FILTER",
"name": "call.database.connection",
"value": "jdbc:mysql://10.128.0.1:3306",
"operator": "CONTAINS",
"entity": "NOT_APPLICABLE"
},
"existingServiceId": "a880d1875389b78b49b19387ef7ab815b313764f", // service id of "MySQL@3306 on demo-host" service, can be found in the "serviceId" url parameter of the service dashboard
"description": "Link calls to MySQL@3306 on demo-host service",
"enabled": true
}
Unspecified services
The Unspecified service is a special service that acts as a fallback for calls that cannot be matched to any of the custom or predefined service mapping rules.
From the service dashboard, you can use the list of endpoints or, better, jump to Analyze Calls and group by call.name to figure out what those calls are about.
Usually, the Unspecified service surfaces a temporary issue in the Instana backend where, for a short amount of time, the destination of the calls could not be linked to the underlying process (for example, because the process just started and the Instana agent has not discovered it yet, while the Instana instrumentation is already capturing calls); therefore, the tags that are needed by the service mapping rules could not be extracted. See here for more information about the role of infrastructure correlation and service mapping.
However, it sometimes surfaces a gap in our set of predefined rules; in this case, we would like to hear from you so that we can improve them.
Note that HTTP calls whose destination could not be linked to a process and whose Host header is an IP address are mapped to the Unspecified service. In that case, you can add the X-Instana-Service header to the requests to set a meaningful service name.
Service flow map
The service flow map displays the upstream and downstream services of a service.
Service flow map limitation
Currently, the service flow map shows accurate data only for short-running synchronous traces that finish within 20 seconds. In all other cases, the service flow map might show inaccurate data.
FAQ
Why am I not seeing any services listed?
No services are listed when the agent is not running in APM mode, when no agent is installed and therefore no traces were collected, or when an agent is installed but no traces were collected during your selected timeframe.
To check whether an agent is installed and running, click Infrastructure to view any hosts on the infrastructure map that may have an agent, or click More > Agents to see a list of installed agents. If there are no agents listed, see the documentation on how to install an agent or create an application perspective.
Why does the service name sometimes change?
The service mapping relies on infrastructure data. However, in some scenarios, the trace data cannot be enriched with infrastructure data. For example, trace data might not be enriched shortly after an Instana agent, or a monitored process was started. It takes some time until the monitored application gets fully instrumented and until tracers and sensors start collecting data. If the first few traces are collected before all sensors become active, these traces might be missing some or even all infrastructure data. In such cases, the service name used is less specific; for example, HTTP host rather than SpringBoot application name.
A mechanism known as resilient mapping is used to avoid service-mapping issues that are related to restarts of Instana agents or monitored processes. Resilient mapping takes effect if the infrastructure data for mapping calls to the correct service is unavailable. The resilient mapping can assign a call to the right service by using a cache of previous mapping results. However, this can lead to an unexpected service name, for example, if the cache contains infrastructure data that is outdated. The expected service comes back automatically when the resilient cache is updated.