Editing the document

When creating an AsyncAPI document for the API of your Kafka event source, clicking Edit API in the final step opens the AsyncAPI editor where you can add more details to the document. The editor splits the document into two sections - Async API for the description of your Kafka event source, and Gateway for the configuration provided to an Event Gateway Service when the API is published for application developers to use.

The following sections describe items relevant to Kafka event sources that you might want to add to your document. These additional details help application developers learn more about your API in the Developer Portal, and also cover the Event Gateway Service configuration options that are available.

If you have previously documented an API, you can view and make changes as follows:

  1. Log in to the Event Endpoint Management instance.

  2. On the navigation pane, click Home, and click the Develop APIs and products tile.

  3. On the APIs tab, click the title of the API document you want to change.

  4. Make changes as required and click Save.

Making changes to your AsyncAPI document

After creating an AsyncAPI document that describes your event source, you can subsequently edit it as required.

The Async API tab in the editor provides a form view and a raw source code view of an AsyncAPI document. You can use both views to edit any field. For information about the full set of AsyncAPI fields and their usage, see the specification.

The following reference highlights how and where to modify any previously provided values, as well as detailing how to define additional useful values within the editor.

Note: AsyncAPI documents allow referencing and reuse of common values by making use of JSON Schema $ref values. Where applicable, the editor will offer a Reference option for objects and values that can be defined through a reference. In addition, you can also add references manually in the editor's source view.
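For example, a message defined once under components can be reused from a channel by reference. A minimal sketch in the source view (the channel and message names are illustrative):

    channels:
      orders.created:
        subscribe:
          message:
            $ref: '#/components/messages/orderEvent'
    components:
      messages:
        orderEvent:
          payload:
            type: string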

API metadata

Use the General > Info section in the editor to add metadata about the API. In particular, use the Contact section to provide information for application developers to contact the API owner.

Use the General > External Documentation section to provide a URL containing additional documentation for the API.
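In the source view, this metadata appears under the top-level info and externalDocs objects. A minimal sketch (all values illustrative):

    info:
      title: Orders event source
      version: 1.0.0
      contact:
        name: API team
        email: api-team@example.com
    externalDocs:
      description: Additional documentation for this API
      url: https://example.com/docs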

Kafka bootstrap servers

The addresses for the Kafka brokers that the event source uses as bootstrap servers are represented as servers in an AsyncAPI document. Broker details are set in the General > Servers section in the editor. You can add or remove brokers by clicking the plus icon or bin icon, respectively, in the Servers section.

Note: Each individual broker provided in the bootstrap servers list will be represented by its own server entry.

Important: If this API is enforced by an Event Gateway Service at runtime, server and protocol values are superseded when published to the Developer Portal as described in APIs and enforcement. In addition, ensure you update the Kafka bootstrap server settings in the invoke policy fields if you change server values here.

A server is made up of multiple fields. The following are the key fields for describing a Kafka cluster:

  • URL: The hostname and port of a Kafka broker that is used as a bootstrap server.

  • Protocol: Indicates whether transport security (TLS) should be used to connect to the cluster. Expected values for Kafka clusters are as follows:

    • kafka for a cluster that requires no transport security for a client to connect.
    • kafka-secure for a cluster that requires clients to use transport security when connecting to the cluster.

  • Protocol Version: The Kafka protocol version used by the cluster.

  • Security Requirements: If a cluster requires authentication, a Security Scheme is referenced in this field. The Security Scheme describes the authentication type. For more information, see authentication.

Important: Ensure that the Protocol and Security Requirements values are the same for all server entries that describe the Kafka cluster.
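As a sketch, a two-broker bootstrap list for a TLS-secured cluster might appear in the source view as follows (hostnames and the clusterAuth scheme name are illustrative). Note that the protocol, protocolVersion, and security values match across both entries:

    servers:
      broker-0:
        url: broker-0.example.com:9092
        protocol: kafka-secure
        protocolVersion: '2.6.0'
        security:
          - clusterAuth: []
      broker-1:
        url: broker-1.example.com:9092
        protocol: kafka-secure
        protocolVersion: '2.6.0'
        security:
          - clusterAuth: []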


Cluster security

Client authentication with a Kafka cluster uses either SSL or SASL mechanisms. If your Kafka cluster requires authentication when connecting, describe the mechanism used for authentication, and ensure it is referenced by all brokers (servers) in your AsyncAPI document.

Authentication settings in AsyncAPI are defined in a security scheme. Each security scheme has a unique name, and defines the type of security used. This might include supporting configuration details depending on the type of security that is used. For information about the full set of supported security scheme types, including required configuration values, see the AsyncAPI specification.

To create security schemes, go to Components > Security Schemes in the form view, or see the securitySchemes section in the source view.

For Kafka use cases, use userPassword to represent a SASL authentication mechanism, or use X509 for an SSL authentication mechanism. It is also helpful to add a description to any defined security scheme to provide additional details and context to end users to enable them to connect to your cluster.

Important: When enforced by the Event Gateway Service, any security scheme defined and associated with your servers must be of type userPassword.
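For example, a SASL authentication mechanism represented as a userPassword scheme might be defined as follows (the scheme name and description are illustrative):

    components:
      securitySchemes:
        clusterAuth:
          type: userPassword
          description: SASL PLAIN credentials used to connect to the cluster.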

After a security scheme is created, ensure you add the scheme to all servers (brokers) described in the AsyncAPI document. You can add the details by going to General > Servers, opening each broker setting, and selecting the security scheme you created previously under Security Requirements.

You can also add the scheme to a server in the source view as follows:

    url: myBroker.com:9092
    protocol: kafka-secure
    security:
      - clusterAuth: []

Where clusterAuth is the name of the security scheme to use, and the value is an empty array.


Kafka topics

The Kafka topics that your event source produces events to are represented as channels in an AsyncAPI document. Topic details are defined in the Channels section in the editor.

In an AsyncAPI document a channel contains two types of operations that can be performed by a client:

  • publish: send a message to the channel and the application will process the event

  • subscribe: subscribe to the channel to see events produced by the application

When documenting the API for an event source, you only need to write a subscribe operation, as an event source does not consume events. Application developers can use the details of the subscribe operation to configure their applications to receive events from the channel.

Important: If your API is enforced by an Event Gateway Service at runtime, only subscribe operations are permitted by the gateway.

Operations contain one or more messages. A message contains details about the structure of an event.

By clicking the plus icon next to Channels, you can add additional topics that are produced to by your event source. You can remove topics by clicking the bin icon next to their name under Channel.

Important: An AsyncAPI document must contain at least one channel.
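A channel with a subscribe operation might look like the following in the source view (the topic name, summary, and message reference are illustrative):

    channels:
      orders.created:
        subscribe:
          summary: Receive events for newly created orders.
          message:
            $ref: '#/components/messages/orderEvent'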

You can also describe client metadata such as groupId and clientId by adding an AsyncAPI Kafka binding to an operation object. You can define a reusable Operation Binding in Bindings > Operation Binding and reference it with $ref, or define the binding as a part of the operation itself. For more information about the Kafka Operation bindings and expected values, see the AsyncAPI GitHub page.

Important: If this API is enforced by an Event Gateway Service at runtime, any defined clientId Kafka Operation Binding is superseded when published to the Developer Portal.
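As a sketch, a Kafka operation binding on a subscribe operation that describes groupId and clientId constraints might look like the following (topic name and values illustrative); see the AsyncAPI Kafka bindings documentation for the full set of fields:

    channels:
      orders.created:
        subscribe:
          bindings:
            kafka:
              groupId:
                type: string
                enum: ['myGroupId']
              clientId:
                type: string
              bindingVersion: '0.1.0'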

Event content

Records produced to a Kafka topic by your event source are described in an AsyncAPI document as messages.

You can define messages either as a part of a channel in Channels > ChannelName > OperationType > Message, or as a component in Components > Messages.

If a message is reused across multiple topics, it is recommended to define the message as a component, and to reference the message where required with $ref.
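For example, a message defined once under components can be referenced from several channels (all names illustrative; the payload here uses the Apache Avro schema format):

    channels:
      orders.created:
        subscribe:
          message:
            $ref: '#/components/messages/orderEvent'
      orders.updated:
        subscribe:
          message:
            $ref: '#/components/messages/orderEvent'
    components:
      messages:
        orderEvent:
          schemaFormat: application/vnd.apache.avro;version=1.9.0
          payload:
            type: record
            name: OrderEvent
            fields:
              - name: orderId
                type: string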

The following are the key fields for describing a message:

  • payload: The schema describing the message structure. For Kafka use cases, we recommend using the Apache Avro schema format. The content is a JSON or YAML object. Note: This is where any uploaded schema is saved if you added one when creating the AsyncAPI document.

  • schemaFormat: The format of the schema stored in this document. See the AsyncAPI schemaFormat documentation for a complete list of accepted values.

You can also describe a record's key by adding an AsyncAPI Kafka binding to a message object. You can define a reusable Message Binding in Bindings > Message Binding and reference it with $ref, or define the binding as a part of the message itself. For example:

    schemaFormat: ...
    payload: ...
    bindings:
      kafka:
        key:
          type: string
          enum: ['keyOne', 'keyTwo']
        bindingVersion: '0.1.0'

For more information about the Kafka Message bindings and expected values, see the AsyncAPI GitHub page.

Note: You can only edit the payload value on a message and any AsyncAPI binding object in the source view.

Configuring the Event Gateway Service

When an enforced API is published, associated Event Gateway Services are automatically configured to provide enforced self-service access to your Kafka event source through the Developer Portal. This configuration is defined in a Policy, which is defined as part of your AsyncAPI document.

You can edit these Policies in the Gateway tab, where you can also configure additional gateway service settings, enforcement details, and Developer Portal behavior. You can use both form and raw source code views to edit any field.

The following sections describe the available configuration options.

Invoke Policy

The Invoke Policy is used by the Event Gateway Service to forward traffic to your Kafka event source when a client connects. When a client authenticates with the Event Gateway Service, the gateway service uses the following information to connect to and authenticate with your Kafka event source. Depending on the Security Protocol you select, you might need to provide additional details.

You can edit these settings in the Invoke section of the editor:

  • Bootstrap servers: The bootstrap server address of your cluster. This is filled in when the AsyncAPI document is created, but will need to be manually maintained afterward.

  • Security protocol: Select how the gateway authenticates and connects to your Kafka cluster. This is the same setting as the security.protocol value set for clients that connect to the cluster. Depending on the selection, you are asked for different details.

    • PLAINTEXT: No authentication and no transport security. No further details are required. Traffic is not encrypted between the Kafka cluster and the gateway.

    • SASL_PLAINTEXT: Enter the SASL username and password for authentication. No transport security details are required. Traffic is not encrypted between the Kafka cluster and the gateway.

    • SASL_SSL: Enter the SASL username and password for authentication. Transport security is used. All traffic is encrypted between the Kafka cluster and the gateway.

    • SSL: No authentication is required. Transport security is used. All traffic is encrypted between the Kafka cluster and the gateway.

      Note: Consider the level of permissions you provide to the gateway for accessing your Kafka cluster. Set credentials for the gateway that provide the minimum access to your Kafka cluster, for example, read access only.

  • SASL mechanism: If using SASL_PLAINTEXT or SASL_SSL as your Security Protocol, set this to PLAIN.

  • Username: The SASL username to be used to authenticate the Event Gateway Service with your Kafka cluster.

  • Password: The SASL password to be used to authenticate the Event Gateway Service with your Kafka cluster.

  • Transport CA certificate: If using SSL or SASL_SSL as your Security Protocol, this certificate is used to provide transport security between the Event Gateway Service and your Kafka cluster.

Note: When creating an AsyncAPI document, you can provide the credentials required to connect an Event Gateway Service to your Kafka event source. These values are stored in the Invoke policy.

Gateway, portal, and publishing configuration

The following settings are set in the Gateway and portal settings section of the editor:

  • Enforced - Sets whether an Event Gateway Service enforces this API's policies. If set to false, ensure you remove the gateway key/value pair in the source view to allow this API to be published to a Developer Portal.

  • Gateway - Sets the gateway service type that enforces this API. For enforced, the valid value is event-gateway, and for unenforced, do not include a gateway key/value pair in the source view.

  • Phase - Sets a tag to show the lifecycle stage of the API. You can display this information in the Developer Portal to indicate the API's maturity. The following are possible values:

    • Realized (default) - The API is in the implementation phase.

    • Identified - The API is in the early conceptual phase and is neither fully designed nor implemented.

    • Specified - The API has been fully designed and passed an internal milestone, but has not yet been implemented.

  • Testable - Setting to enable the Developer Portal test tool. Note: This feature is not available for AsyncAPIs in the Developer Portal, and is ignored.

  • Properties - You can define reusable values across your API definition with API properties. Provide a name, value (which can be base64 encoded), and an optional description. The value can then be referenced in your document as follows (where name is 'my-description-text'): description: $(my-description-text)

  • Catalog properties - Similar to API properties, you can also define reusable values which are specific to the Catalog. Define the Catalog you want your property to apply to, and then define the name and value for the property. The value can then be referenced in your document as follows (where name is 'my-catalog-description'): description: $(my-catalog-description) Note: If an API and Catalog property have the same name, the Catalog property value takes precedence.
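In the source view, these settings are typically grouped under an x-ibm-configuration object (the exact layout can vary by product version, so treat this as an assumption). A sketch of an enforced API with one reusable property:

    x-ibm-configuration:
      enforced: true
      phase: realized
      testable: false
      gateway: event-gateway
      properties:
        my-description-text:
          value: A stream of order events from the orders topic.
          description: Reusable description text.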

API document and enforcement

An API can be either enforced or unenforced. An enforced API means a gateway service is used to enforce a set of defined Policies on any client using that API at runtime.

For these Policies to take effect, and to allow self-service access to the API in the Developer Portal, enforced APIs update and supersede some values defined in your API when published. This is to ensure that the gateway services are referenced in place of your Kafka event source, and ensures the gateway service can service API requests successfully.

The following aspects of your AsyncAPI document are updated on publish if enforced:

  • Servers and Security Schemes:

    • The url values provided in the server objects are updated with entries referencing available Event Gateway Services for the Catalog where the API is being published to.

    • The protocol is updated to kafka-secure as the Event Gateway Service requires clients to connect and communicate through TLS.

  • Kafka ClientId values: Any clientId value defined in a subscribe operation binding will have its value updated to a unique identifier relating to this API.

Note: These changes are generated when the API is published and apply only to the version published to the Developer Portal; they are not made to your AsyncAPI document.

Important: If an API is unenforced and published to a Developer Portal, your AsyncAPI document is published as defined, without any Policy configuration details.

Additional conditions required for Event Gateway Service enforcement

For an AsyncAPI to be enforceable on an Event Gateway Service, the API must satisfy the following semantic conditions in addition to the AsyncAPI 2 specification:

  • There must be at least one server object.

  • The only server protocol values allowed are kafka and kafka-secure.

  • The server objects must reference a security scheme of type userPassword. Clients connecting to the gateway service must provide credentials generated for them by the Developer Portal. At this time, only SASL PLAIN, represented by type userPassword credentials, is supported.

  • All server objects must have the same protocol and reference the same security scheme configuration.
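Putting these conditions together, a minimal sketch of an enforceable document (all names illustrative) includes at least one kafka-secure server that references a shared userPassword scheme, and at least one channel:

    asyncapi: '2.1.0'
    info:
      title: Orders event source
      version: 1.0.0
    servers:
      broker-0:
        url: broker-0.example.com:9092
        protocol: kafka-secure
        security:
          - clusterAuth: []
    channels:
      orders.created:
        subscribe:
          message:
            payload:
              type: string
    components:
      securitySchemes:
        clusterAuth:
          type: userPassword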