Wellbore API (Open Data for Industries)

Expand your data platform to multiple domains by separating the domain data flows from the domain storage and access concerns. That way, you can access and manipulate the various data types that are acquired and interpreted in wellbores.

The Open Data for Industries Wellbore DDMS API for data lifecycle management provides several sets of API endpoints to manage the following domains:
Well
An actual hole drilled in the ground. It facilitates the exchange of fluids between a subsurface reservoir and the surface (or another reservoir). It can also enable the detection and measurement of rock properties.
Wellbore
A vertical or horizontal path that is drilled from the surface, from the Well Origin to a terminating point.
Well log
A set of measurements that are recorded discretely or continuously within a wellbore.
Trajectory
The directions, spanning vertical and horizontal space, in which the well is drilled.
Marker
Important zones that are delineated on the wellbore. Markers are used to correlate between boreholes and to support subsurface geologic mapping.
Log
Recorded information and data about the various geological components.
Log set
Grouped data that is collected at various levels or at different times for a set of geological components.
Dips
The magnitude of the inclination of a plane from the horizontal orientation.
Dip set
A group of measurements of the inclination of a plane from the horizontal orientation.

For more information, see Wellbore DDMS API for data lifecycle management.

Open Data for Industries provides you with operations that supplement the management of the geological data throughout the data lifecycle:
Search Geological data
A set of APIs that search the Open Data for Industries storage layer data, which supports common usage patterns. For more information, see Wellbore DDMS API for data contextualization.
Fast-Search Geological data
A set of bulk-search APIs for fast retrieval of related data components. For more information, see Wellbore DDMS API for data contextualization.
Log recognition
Used to automatically tag and catalog log data with the help of catalog definitions. For more information, see Wellbore DDMS API for data enrichment.

To use the Wellbore API, see the Wellbore API reference.

Using the Wellbore DDMS API

To use the Wellbore DDMS API, follow these steps:
  1. Choose a partition.

    Open Data for Industries publishes schema definitions for different tenants, depending on the accounts that are available on the system. A user might belong to many accounts; for example, a user might belong to both their own account and a customer's account. When you log in to the industry applications, you choose which account is active. To use the Wellbore DDMS API, specify the active account by using the data-partition-id parameter as part of the request header, as shown in the sketch after these steps.

  2. Create data groups.

    Create data groups and assign users to these groups. For data access authorization purposes, the groups in the following example must already exist. Create the groups by using the Entitlements API and assign users to them. For more information, see Entitlements API.

    • users.datalake.viewers
    • users.datalake.editors
    • users.datalake.admins
  3. Use the Wellbore DDMS API methods to manage well data on Open Data for Industries.

    To use the Wellbore API, see the Wellbore API reference.
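
The following minimal Python sketch ties these steps together, assuming the requests library is available. The host, base path, token, partition ID, and well ID are placeholders for your deployment; this is a sketch, not a definitive implementation.

import requests

# Placeholders (assumptions): substitute the values for your deployment.
BASE_URL = "https://<odi-host>/<wellbore-ddms-base-path>"
TOKEN = "<access-token>"

headers = {
    "Authorization": f"Bearer {TOKEN}",
    # Step 1: specify the active account with the data-partition-id header.
    "data-partition-id": "<partition-id>",
}

# Step 3: manage well data; here, fetch a Well record by its ID.
response = requests.get(f"{BASE_URL}/ddms/v3/wells/<wellid>", headers=headers)
response.raise_for_status()
print(response.json())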

Wellbore DDMS API for data lifecycle management

The Wellbore DDMS API is segregated to support the lifecycle management of various well-related geological components.

Table 1. Endpoint sets for geological data types
Geological component data type API endpoint Description API reference
Well V2 GET /ddms/v2/wells/{wellid} Manage the well data lifecycle. Get the Well object by using its ID.
DELETE /ddms/v2/wells/{wellid} Delete the Well.
GET /ddms/v2/wells/{wellid}/versions Get all versions of the Well.
GET /ddms/v2/wells/{wellid}/versions/{version} Get the given version of the Well.
POST /ddms/v2/wells Create or update the Wells by using the provided schema.
Well V3 GET /ddms/v3/wells/{wellid} Manage the well data lifecycle. Get the Well object by using the OSDU schema.
DELETE /ddms/v3/wells/{wellid} Delete the Well.
GET /ddms/v3/wells/{wellid}/versions Get all versions of the Well.
GET /ddms/v3/wells/{wellid}/versions/{version} Get the given version of the Well by using the OSDU well schema.
POST /ddms/v3/wells Create or update the Wells by using the OSDU schema.
Wellbore V2 GET /ddms/v2/wellbores/{wellboreid} Manage the wellbore data lifecycle. Get the Wellbore object by using its ID.
DELETE /ddms/v2/wellbores/{wellboreid} Delete the Wellbore.
GET /ddms/v2/wellbores/{wellboreid}/versions Get all versions of the Wellbore.
GET /ddms/v2/wellbores/{wellboreid}/versions/{version} Get the given version of the Wellbore by using the defined schema.
POST /ddms/v2/wellbores Create or update a Wellbore.
Wellbore V3 GET /ddms/v3/wellbores/{wellboreid} Manage the wellbore data lifecycle. Get the Wellbore object by using the OSDU wellbore schema.
DELETE /ddms/v3/wellbores/{wellboreid} Delete the Wellbore.
GET /ddms/v3/wellbores/{wellboreid}/versions Get all versions of the Wellbore.
GET /ddms/v3/wellbores/{wellboreid}/versions/{version} Get the given version of the Wellbore by using the OSDU wellbore schema.
POST /ddms/v3/wellbores Create or update the Wellbores by using the OSDU wellbore schema.
Well log GET /ddms/v3/welllogs/{welllogid} Manage the well log data lifecycle. Get the Well Log by using the OSDU schema.
DELETE /ddms/v3/welllogs/{welllogid} Delete the Well Log.
GET /ddms/v3/welllogs/{welllogid}/versions Get all versions of the Well Log.
GET /ddms/v3/welllogs/{welllogid}/versions/{version} Get the given version of the Well Log by using the OSDU well log schema.
POST /ddms/v3/welllogs Create or update the Well Logs by using the OSDU schema.
Trajectory GET /ddms/v2/trajectories/{trajectoryid} Manage the trajectory data lifecycle. Get the Trajectory object by using its ID.
DELETE /ddms/v2/trajectories/{trajectoryid} Delete the Trajectory.
GET /ddms/v2/trajectories/{trajectoryid}/versions Get all versions of the Trajectory.
GET /ddms/v2/trajectories/{trajectoryid}/versions/{version} Get the given version of the Trajectory.
POST /ddms/v2/trajectories Create or update the trajectories.
GET /ddms/v2/trajectories/{trajectoryid}/data Return full bulk data within the specified filters.
POST /ddms/v2/trajectories/{trajectoryid}/data Overwrite the trajectory with the specified data.
Marker GET /ddms/v2/markers/{markerid} Manage the marker data lifecycle. Get the Marker object by using its ID.
DELETE /ddms/v2/markers/{markerid} Delete the Marker.
GET /ddms/v2/markers/{markerid}/versions Get all versions of the Marker.
GET /ddms/v2/markers/{markerid}/versions/{version} Get the given version of the Marker.
POST /ddms/v2/markers Create or update the Markers.
Log GET /ddms/v2/logs/{logid} Manage the log data lifecycle. Get the Log object by using its data ecosystem ID.
DELETE /ddms/v2/logs/{logid} Delete the Log.
POST /ddms/v2/logs Create or update the Logs.
GET /ddms/v2/logs/{logid}/versions Get all versions of the Log.
GET /ddms/v2/logs/{logid}/versions/{version} Get the given version of the Log.
GET /ddms/v2/logs/{logid}/data Return full bulk data within the specified filters.
POST /ddms/v2/logs/{logid}/data Write the specified data to the log, or overwrite it if it already exists.
POST /ddms/v2/logs/{logid}/upload_data Write the specified data to the log. Supports JSON files.
GET /ddms/v2/logs/{logid}/statistics Return data statistics: the count, mean, std, min, max, and percentiles of each column.
GET /ddms/v2/logs/{logid}/versions/{version}/data Return all data within the specified filters.
GET /ddms/v2/logs/{logid}/decimated Return a decimated version of all data within the specified filters.
Log set GET /ddms/v2/logsets/{logsetid} Manage the log set data lifecycle. Get the Log set object by using its ID.
DELETE /ddms/v2/logsets/{logsetid} Delete the Log set object.
GET /ddms/v2/logsets/{logsetid}/versions Get all versions of the Log set.
GET /ddms/v2/logsets/{logsetid}/versions/{version} Get the given version of the Log set.
POST /ddms/v2/logsets Create or update the Log sets.
Dips GET /ddms/v2/dipsets/{dipsetid}/dips Manage the dips data lifecycle. Get dips. Return the dips from the dip set, starting at the index that you provide and limited to the number of dips that is given in the query parameters. If nothing is specified, all dips from the dip set are returned.
POST /ddms/v2/dipsets/{dipsetid}/dips Define the dips of the dip set. Replaces the previous dips with the provided dips and sorts them by reference and azimuth.
POST /ddms/v2/dipsets/{dipsetid}/dips/insert Insert dips into a dip set.
GET /ddms/v2/dipsets/{dipsetid}/dips/query Search for a dip within a reference interval and specific classification.
GET /ddms/v2/dipsets/{dipsetid}/dips/{index} Get a dip at an index. Returns the dip from the dip set at the given index.
DELETE /ddms/v2/dipsets/{dipsetid}/dips/{index} Delete a dip.
PATCH /ddms/v2/dipsets/{dipsetid}/dips/{index} Update a dip.
Dip set POST /ddms/v2/dipsets Manage the dip set data lifecycle. Create or update the Dip sets.
GET /ddms/v2/dipsets/{dipsetid}/versions/{version} Get the given version of the Dip set by using its ID.
GET /ddms/v2/dipsets/{dipsetid}/versions Get all versions of the Dip set.
GET /ddms/v2/dipsets/{dipsetid} Get the Dip set object by using its ID.
DELETE /ddms/v2/dipsets/{dipsetid} Delete the Dip set.
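
For illustration, a minimal Python sketch of the Well V3 lifecycle follows: it creates or updates a Well and then lists its versions. The host, token, partition, group, and legal-tag values are placeholders, and the record body only outlines the general OSDU record shape (kind, acl, legal, data); the kind value, the FacilityName property, and the recordIds response field are assumptions, so consult the Wellbore API reference for the exact schema.

import requests

BASE_URL = "https://<odi-host>/<wellbore-ddms-base-path>"  # placeholder
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",
}

# Outline of the general OSDU record shape; field values are assumptions.
well_record = {
    "kind": "<authority>:wks:master-data--Well:1.0.0",
    "acl": {"viewers": ["<viewers-group>"], "owners": ["<owners-group>"]},
    "legal": {"legaltags": ["<legal-tag>"], "otherRelevantDataCountries": ["US"]},
    "data": {"FacilityName": "<well-name>"},
}

# POST /ddms/v3/wells creates or updates Wells by using the OSDU schema.
created = requests.post(f"{BASE_URL}/ddms/v3/wells", headers=headers, json=[well_record])
created.raise_for_status()
well_id = created.json()["recordIds"][0]  # response field name is an assumption

# GET /ddms/v3/wells/{wellid}/versions lists all versions of the Well.
versions = requests.get(f"{BASE_URL}/ddms/v3/wells/{well_id}/versions", headers=headers)
versions.raise_for_status()
print(versions.json())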

Wellbore DDMS API for data contextualization

Use data contextualization to discover data in the right domain context. The Wellbore DDMS API has a rich set of endpoints to create and run a contextual search. The following table details the simple search and fast search endpoints.

Table 2. Endpoint sets for data contextualization
Context Setting Through API endpoint Description API reference
Search POST /ddms/query Do a simple search for data in the domain context. Do a query.
POST /ddms/query_with_cursor Do a query with a cursor.
POST /ddms/query/wellbores Do a query with a cursor that gets all Wellbore objects.
POST /ddms/query/wellbores/bydistance Get all Wellbore objects in a specific area. The specific area is defined by a circle based on its center coordinates (latitude, longitude) and radius (meters).
POST /ddms/query/wellbores/byboundingbox Get all Wellbore objects in a specific area. The specific area is defined by a square based on its upper left coordinates (latitude, longitude) and its lower right coordinates (longitude, latitude).
POST /ddms/query/wellbores/bygeopolygon Get all Wellbore objects in a specific area. The specific area is defined by a polygon based on each of its coordinates (latitude, longitude), with a minimum of three.
POST /ddms/query/wellbore/{wellboreId}/logsets Query with a cursor; search logSets by wellbore ID.
POST /ddms/query/wellbores/{wellboreAttribute}/logsets Query with a cursor; search logSets by wellbore attribute.
POST /ddms/query/logs Get all Log objects.
POST /ddms/query/wellbore/{wellboreId}/logs Query with a cursor; search logs by wellbore ID.
POST /ddms/query/wellbores/{wellboreAttribute}/logs Query with a cursor; search logs by wellbore attribute.
POST /ddms/query/logset/{logsetId}/logs Query with a cursor; search logs by logSet ID.
POST /ddms/query/logsets/{logsetAttribute}/logs Query with a cursor; search logs by logSet attribute.
POST /ddms/query/wellbore/{wellboreId}/markers Query with a cursor; search markers by wellbore ID.
Fast Search POST /ddms/fastquery/wellbores Do a fast bulk search in a wider or aggregated domain context. Get the IDs of all Wellbore objects.
POST /ddms/fastquery/wellbores/bydistance Get the IDs of all Wellbore objects in a specific area. The specific area is defined by a circle based on its center coordinates (latitude, longitude) and radius (meters).
POST /ddms/fastquery/wellbores/byboundingbox Get the IDs of all Wellbore objects in a specific area. The specific area is defined by a square based on its upper left coordinates (latitude, longitude) and its lower right coordinates (longitude, latitude).
POST /ddms/fastquery/wellbores/bygeopolygon Get the IDs of all Wellbore objects in a specific area. The specific area is defined by a polygon based on each of its coordinates (latitude, longitude), with a minimum of three.
POST /ddms/fastquery/wellbore/{wellbore_id}/logsets Get the IDs of all LogSet objects by using the related Wellbore ID.
POST /ddms/fastquery/wellbores/{wellbore_attribute}/logsets Get the IDs of all LogSet objects by using a specific attribute of the Wellbores.
POST /ddms/fastquery/logs Get the IDs of all Log objects.
POST /ddms/fastquery/wellbore/{wellbore_id}/logs Get the IDs of all Log objects by using their relationship with the Wellbore ID.
POST /ddms/fastquery/wellbores/{wellbore_attribute}/logs Get the IDs of all Log objects by using a specific attribute of the Wellbores.
POST /ddms/fastquery/logset/{logset_id}/logs Get the IDs of all Log objects by using their relationship with the LogSet ID.
POST /ddms/fastquery/logsets/{logset_attribute}/logs Get the IDs of all Log objects by using a specific attribute of the LogSets.
POST /ddms/fastquery/wellbore/{wellbore_id}/markers Get the IDs of all Marker objects by using their relationship with the Wellbore ID.
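
For example, a hedged Python sketch of a search by distance follows. The request body field names (latitude, longitude, distance) are assumptions inferred from the endpoint description; verify them against the Wellbore API reference.

import requests

BASE_URL = "https://<odi-host>/<wellbore-ddms-base-path>"  # placeholder
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",
}

# Get all Wellbore objects within 10 km of a point; the body field
# names are assumptions based on the endpoint description.
search = requests.post(
    f"{BASE_URL}/ddms/query/wellbores/bydistance",
    headers=headers,
    json={"latitude": 29.7, "longitude": -95.4, "distance": 10000},
)
search.raise_for_status()
print(search.json())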

Wellbore DDMS API for data enrichment

Your data is acquired from different sources that follow different conventions. Therefore, you need a mechanism to classify and tag the data in the system and to identify the different log mnemonics. To help you search and identify log data, logs are classified into groups or families based on their unit of measurement.

The Log Recognition API endpoint uses a default catalog of assignment rules. The default catalog identifies the log family by using the log name or mnemonics, the log unit, and the description that is provided.

Important: The default catalog is immutable and cannot be modified.

If you need a new set of assignment rules for a data partition, create a custom catalog.

Important: The custom catalog always has priority over the default catalog.

The Log Recognition service also provides you with a way to assign the family attribute to logs. This assignment is done by using the log name (or its mnemonics), the description of the log, and the log unit of measurement.

The following table illustrates examples of how the Log Recognition service automatically tags and catalogs log data based on catalog definitions.

For example, if the curve name is GRD and the unit is GAPI, then the family can be identified as Gamma Ray.

Log recognition is achieved through the POST /api/log-recognition/recognize API endpoint. For more information, see Recognize family and unit endpoint.

Table 3. Examples of automatic tagging and cataloging of the data in logs.
Curve Name Unit Description Family
GRD GAPI LDTD Gamma Ray Gamma Ray
HD01 g/cc SMOOTHED AZIMUTHAL DENSITY - BIN 01 Bulk Density
DFHF0_FSI   Filtered Water Holdup Electrical Probe 0 Water Holdup
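
For instance, the first row of Table 3 could be reproduced with a call like the following Python sketch. The request field names (label, log_unit, description) are assumptions; see the Recognize family and unit endpoint for the exact schema.

import requests

BASE_URL = "https://<odi-host>"  # placeholder
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",
}

# Recognize the family of the GRD curve; field names are assumptions.
recognized = requests.post(
    f"{BASE_URL}/api/log-recognition/recognize",
    headers=headers,
    json={"label": "GRD", "log_unit": "GAPI", "description": "LDTD Gamma Ray"},
)
recognized.raise_for_status()
print(recognized.json())  # expected to identify the Gamma Ray family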

How to create a custom catalog

Table 4. Log recognition endpoints
Recognition Category API Endpoint Description API Reference
Recognize POST /log-recognition/family Classify the logs over various catalogs. Recognize family and unit. Find the most probable family and unit by using catalogs that are based on family assignment rules. The user-defined catalog has priority.
Catalog PUT /log-recognition/upload-catalog Create or update custom catalogs. Upload a user-defined catalog with family assignment rules for a specific partition ID. If a catalog exists, it is replaced. Replacing an existing catalog takes up to 5 minutes; therefore, make any call to retrieve a family at least 5 minutes after you upload the catalog.

The custom catalog consists of family catalog items and main family catalog items. The following attributes must be defined:

  • unit: The unit of the log.
  • family: The family of the log.
  • rule: The mnemonic or family assignment rule.
  • MainFamily: The main family that the family belongs to.
  • Unit: The unit of the family. The Unit attribute defines which unit the Family has.

To create a custom catalog, use the PUT /log-recognition/upload-catalog API endpoint. For more information, see Upload user-defined catalog with family assignment rules.

You can add a rule or override an existing family assignment rule by changing the attribute’s values in the following sample catalog.
Note: You need to save the catalog in JSON format.
{
  "data": {
    "family_catalog": [
      {
        "unit": "Kg/m3",
        "family": "Bulk Density",
        "rule": "BL1M"
      },
      {
        "unit": "OHMM",
        "family": "Deep Resistivity",
        "rule": "RDPL"
      },
      {
        "unit": "mm",
        "family": "Caliper",
        "rule": "PFC2"
      },
      {
        "unit": "OHMM",
        "family": "Micro Spherically Focused Resistivity",
        "rule": "MICR"
      },
      {
        "unit": "mS/ft",
        "family": "Conductivity - Deep Induction",
        "rule": "ALCD1"
      },
      {
        "unit": "PU",
        "family": "Thermal Neutron Porosity",
        "rule": "CNCQH"
      },
      {
        "unit": "PU",
        "family": "Thermal Neutron Porosity",
        "rule": "CNCQH2"
      }
    ],
    "main_family_catalog": [
      {
        "MainFamily": "Density",
        "Family": "Bulk Density",
        "Unit": "g/cm3"
      },
      {
        "MainFamily": "Resistivity",
        "Family": "Deep Resistivity",
        "Unit": "ohm.m"
      },
      {
        "MainFamily": "Borehole Properties",
        "Family": "Caliper",
        "Unit": "in"
      },
      {
        "MainFamily": "Resistivity",
        "Family": "Micro Spherically Focused Resistivity",
        "Unit": "ohm.m"
      },
      {
        "MainFamily": "Conductivity",
        "Family": "Conductivity - Deep Induction",
        "Unit": "mS/m"
      },
      {
        "MainFamily": "Porosity",
        "Family": "Thermal Neutron Porosity",
        "Unit": "v/v"
      }
    ]
  }
}
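
Assuming that the sample above is saved as catalog.json, a minimal Python sketch of the upload follows; the host and base path are placeholders for your deployment.

import json
import requests

BASE_URL = "https://<odi-host>"  # placeholder
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",  # the catalog applies to this partition
}

# Load the custom catalog that was saved in JSON format.
with open("catalog.json") as f:
    catalog = json.load(f)

# PUT /log-recognition/upload-catalog replaces any existing custom catalog;
# allow up to 5 minutes before family retrieval calls reflect the change.
response = requests.put(
    f"{BASE_URL}/log-recognition/upload-catalog", headers=headers, json=catalog
)
response.raise_for_status()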

Wellbore DDMS API for chunks and data change

Open Data for Industries supports a time-driven data upload for the well logs and well trajectory domains.

Data in these domains is measured in chunks over a period of time and is uploaded by using sessions.

The Wellbore API supports ingestion of such chunks with the help of the following API endpoints:

Domain API Endpoint Description API Reference
WellLog POST /ddms/v3/welllogs/{record_id}/sessions Create a session identifier for the WellLog record. Create a new session on the given record for writing bulk data.
GET /ddms/v3/welllogs/{record_id}/sessions List sessions for the WellLog record. List sessions for a specific record.
GET /ddms/v3/welllogs/{record_id}/sessions/{session_id} Get a specific session for a specific WellLog record. Get specific session for specific record.
PATCH /ddms/v3/welllogs/{record_id}/sessions/{session_id} Update the session with a "commit" or "abandon" tag. Update a session, either commit or abandon.
POST /ddms/v3/welllogs/{record_id}/sessions/{session_id}/data Send a data chunk.

A session must be set to a complete or commit state after all the chunks are sent. This creates a new single version, which aggregates the data from all the existing sessions and all the previous bulk data.

This endpoint supports the JSON and Parquet formats, but the Content-Type header must be set accordingly.

If you use the JSON format, the orient parameter must also be set.

This endpoint supports HTTP chunked encoding.

Wellbore Trajectory V3 POST /ddms/v3/wellboretrajectories/{record_id}/sessions Create a session identifier for the Wellbore trajectory record. Create a new session for the record for writing bulk data.
GET /ddms/v3/wellboretrajectories/{record_id}/sessions List sessions for the Wellbore trajectory record. List sessions of the given record.
GET /ddms/v3/wellboretrajectories/{record_id}/sessions/{session_id} Get a specific session for a specific Wellbore trajectory record. Get specific session for specific record.
PATCH /ddms/v3/wellboretrajectories/{record_id}/sessions/{session_id} Update the session with a "commit" or "abandon" tag. Update a session, either commit or abandon.
POST /ddms/v3/wellboretrajectories/{record_id}/sessions/{session_id}/data Send a data chunk.

A session must be set to a complete or commit state after all the chunks are sent. This creates a new single version, which aggregates the data from all the existing sessions and all the previous bulk data.

This endpoint supports the JSON and Parquet formats, but the Content-Type header must be set accordingly.

If you use the JSON format, the orient parameter must also be set.

This endpoint supports HTTP chunked encoding.

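The session workflow is the same for both domains: create a session, send one or more chunks, and then commit the session. A minimal Python sketch for a WellLog record follows; the session mode field, the commit body, and the Parquet content type are assumptions based on typical usage, so verify them against the Wellbore API reference.

import requests

BASE_URL = "https://<odi-host>/<wellbore-ddms-base-path>"  # placeholder
RECORD_ID = "<welllog-record-id>"                          # placeholder
headers = {
    "Authorization": "Bearer <access-token>",
    "data-partition-id": "<partition-id>",
}

# 1. Create a session on the record (the "mode" field is an assumption).
session = requests.post(
    f"{BASE_URL}/ddms/v3/welllogs/{RECORD_ID}/sessions",
    headers=headers,
    json={"mode": "update"},
)
session.raise_for_status()
session_id = session.json()["id"]  # response field name is an assumption

# 2. Send each data chunk; the Content-Type header must be set (Parquet
#    here; for JSON, the orient parameter must also be set).
with open("chunk-0.parquet", "rb") as f:
    requests.post(
        f"{BASE_URL}/ddms/v3/welllogs/{RECORD_ID}/sessions/{session_id}/data",
        headers={**headers, "Content-Type": "application/x-parquet"},
        data=f.read(),
    ).raise_for_status()

# 3. Commit the session so that a new single version aggregates all the
#    chunks (the PATCH body is an assumption).
requests.patch(
    f"{BASE_URL}/ddms/v3/welllogs/{RECORD_ID}/sessions/{session_id}",
    headers=headers,
    json={"state": "commit"},
).raise_for_status()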

Wellbore DDMS API endpoint permissions

The Wellbore DDMS API is divided into logical blocks.

  • Endpoints for data lifecycle management.
  • Endpoints that supplement the consumption or processing of the data, such as the search and recognition endpoints.

Each category has a different set of minimum permissions that are required for access. Check the following table to identify the permissions that you need.

Table 5. Endpoint permissions
Domain component API endpoint Minimum permissions
Well GET /ddms/v2/wells/{wellid} users.datalake.viewers
DELETE /ddms/v2/wells/{wellid} users.datalake.editors
GET /ddms/v2/wells/{wellid}/versions users.datalake.viewers
GET /ddms/v2/wells/{wellid}/versions/{version} users.datalake.viewers
POST /ddms/v2/wells users.datalake.editors
GET /ddms/v3/wells/{wellid} users.datalake.viewers
DELETE /ddms/v3/wells/{wellid} users.datalake.editors
GET /ddms/v3/wells/{wellid}/versions users.datalake.viewers
GET /ddms/v3/wells/{wellid}/versions/{version} users.datalake.viewers
POST /ddms/v3/wells users.datalake.editors
Wellbore GET /ddms/v2/wellbores/{wellboreid} users.datalake.viewers
DELETE /ddms/v2/wellbores/{wellboreid} users.datalake.editors
GET /ddms/v2/wellbores/{wellboreid}/versions users.datalake.viewers
GET /ddms/v2/wellbores/{wellboreid}/versions/{version} users.datalake.viewers
POST /ddms/v2/wellbores users.datalake.editors
GET /ddms/v3/wellbores/{wellboreid} users.datalake.viewers
DELETE /ddms/v3/wellbores/{wellboreid} users.datalake.editors
GET /ddms/v3/wellbores/{wellboreid}/versions users.datalake.viewers
GET /ddms/v3/wellbores/{wellboreid}/versions/{version} users.datalake.viewers
POST /ddms/v3/wellbores users.datalake.editors
Well log GET /ddms/v3/welllogs/{welllogid} users.datalake.viewers
DELETE /ddms/v3/welllogs/{welllogid} users.datalake.editors
GET /ddms/v3/welllogs/{welllogid}/versions users.datalake.viewers
GET /ddms/v3/welllogs/{welllogid}/versions/{version} users.datalake.viewers
POST /ddms/v3/welllogs users.datalake.editors
Trajectory GET /ddms/v2/trajectories/{trajectoryid} users.datalake.viewers
DELETE /ddms/v2/trajectories/{trajectoryid} users.datalake.editors
GET /ddms/v2/trajectories/{trajectoryid}/versions users.datalake.viewers
GET /ddms/v2/trajectories/{trajectoryid}/versions/{version} users.datalake.viewers
POST /ddms/v2/trajectories users.datalake.editors
GET /ddms/v2/trajectories/{trajectoryid}/data users.datalake.viewers
POST /ddms/v2/trajectories/{trajectoryid}/data users.datalake.editors
Marker GET /ddms/v2/markers/{markerid} users.datalake.viewers
DELETE /ddms/v2/markers/{markerid} users.datalake.editors
GET /ddms/v2/markers/{markerid}/versions users.datalake.viewers
GET /ddms/v2/markers/{markerid}/versions/{version} users.datalake.viewers
POST /ddms/v2/markers users.datalake.editors
Log GET /ddms/v2/logs/{logid} users.datalake.viewers
DELETE /ddms/v2/logs/{logid} users.datalake.editors
POST /ddms/v2/logs users.datalake.editors
GET /ddms/v2/logs/{logid}/versions users.datalake.viewers
GET /ddms/v2/logs/{logid}/versions/{version} users.datalake.viewers
GET /ddms/v2/logs/{logid}/data users.datalake.viewers
POST /ddms/v2/logs/{logid}/data users.datalake.editors
POST /ddms/v2/logs/{logid}/upload_data users.datalake.editors
GET /ddms/v2/logs/{logid}/statistics users.datalake.viewers
GET /ddms/v2/logs/{logid}/versions/{version}/data users.datalake.viewers
GET /ddms/v2/logs/{logid}/decimated users.datalake.viewers
Log set GET /ddms/v2/logsets/{logsetid} users.datalake.viewers
DELETE /ddms/v2/logsets/{logsetid} users.datalake.editors
GET /ddms/v2/logsets/{logsetid}/versions users.datalake.viewers
GET /ddms/v2/logsets/{logsetid}/versions/{version} users.datalake.viewers
POST /ddms/v2/logsets users.datalake.editors
Dips GET /ddms/v2/dipsets/{dipsetid}/dips users.datalake.viewers
POST /ddms/v2/dipsets/{dipsetid}/dips users.datalake.editors
POST /ddms/v2/dipsets/{dipsetid}/dips/insert users.datalake.editors
GET /ddms/v2/dipsets/{dipsetid}/dips/query users.datalake.viewers
GET /ddms/v2/dipsets/{dipsetid}/dips/{index} users.datalake.viewers
DELETE /ddms/v2/dipsets/{dipsetid}/dips/{index} users.datalake.editors
PATCH /ddms/v2/dipsets/{dipsetid}/dips/{index} users.datalake.editors
Dip set POST /ddms/v2/dipsets users.datalake.editors
GET /ddms/v2/dipsets/{dipsetid}/versions/{version} users.datalake.viewers
GET /ddms/v2/dipsets/{dipsetid}/versions users.datalake.viewers
GET /ddms/v2/dipsets/{dipsetid} users.datalake.viewers
DELETE /ddms/v2/dipsets/{dipsetid} users.datalake.editors