Using the AI Gateway to support Azure OpenAI APIs

API Connect provides a UI wizard for creating AI-aware APIs and products, plus integration with Azure OpenAI to forward requests and manage responses.

The AI Gateway helps enterprises manage access to the API endpoints that AI applications use. It simplifies integrating AI into new and existing OpenAPI 3.0 APIs in API Connect, giving those APIs access to a set of operations exposed by Azure OpenAI.

API Connect provides a policy that enables your API to send requests to Azure OpenAI. The following Azure OpenAI operations are supported:
  • POST /chat/completions
  • POST /embeddings
  • GET /models
  • GET /models/[model-id]
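
For orientation, the following Python sketch outlines illustrative request payloads for the two POST operations. The field values shown (system prompt, input text, token limit) are placeholders rather than requirements, and the exact fields accepted depend on the model you deploy.

    # Illustrative request payloads for the supported POST operations.
    # All values shown here are placeholders.

    chat_completions_body = {
        # POST /chat/completions -- conversational text generation
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize the benefits of an AI gateway."},
        ],
        "max_tokens": 256,
    }

    embeddings_body = {
        # POST /embeddings -- vector representation of the input text
        "input": "The AI Gateway manages access to AI endpoints.",
    }

    # GET /models and GET /models/[model-id] take no request body; they list
    # the models available to your resource or return details for one model.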

Prerequisites for using the AI Gateway with Azure OpenAI

Before attempting to use the AI Gateway with Azure OpenAI, complete the following prerequisites:

  • Register with Azure AI Foundry.
  • Create a project in Azure AI Foundry. When the project is created, you receive the following credentials:
    • API key
    • Resource ID
  • Deploy a model that supports your desired operation; for example, gpt-4o for chat completions.

Required user inputs (the sketch after this table shows how these inputs are used):

Table 1. Required user inputs
  Parameter       Description
  api-key         Your API key, sent as a request header
  resource_id     Your resource ID, which forms part of the endpoint URL that points to your provisioned resources
  deployment_id   Name of the project (deployment) where the model is deployed
  op-version      Operation version, in YYYY-MM-DD format
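
To show how these inputs fit together, the following Python sketch makes a direct call to an Azure OpenAI deployment. The endpoint layout (https://<resource>.openai.azure.com/openai/deployments/<deployment>/...) and the api-version query parameter that carries the op-version value are assumptions of this sketch, based on the standard Azure OpenAI REST interface; in API Connect, the Azure OpenAI invoke policy collects the same four inputs for you.

    import requests

    # Placeholder credentials; substitute the values from your Azure AI Foundry project.
    API_KEY = "<api-key>"              # sent as a request header
    RESOURCE_ID = "<resource_id>"      # forms part of the endpoint URL
    DEPLOYMENT_ID = "<deployment_id>"  # deployment where the model runs
    OP_VERSION = "2024-06-01"          # YYYY-MM-DD format

    # Standard Azure OpenAI endpoint layout (an assumption of this sketch).
    url = (
        f"https://{RESOURCE_ID}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT_ID}/chat/completions"
    )

    response = requests.post(
        url,
        headers={"api-key": API_KEY, "Content-Type": "application/json"},
        params={"api-version": OP_VERSION},
        json={"messages": [{"role": "user", "content": "Hello"}]},
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])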

Getting started with the AI Gateway and Azure OpenAI

To use the AI Gateway, complete the following steps:

  1. Set up your environment as explained in Prerequisites for using the AI Gateway with Azure OpenAI.

  2. Create an API to use as a reverse proxy.

  3. Add the Azure OpenAI invoke policy to the API so that it can access the Azure OpenAI platform. (A client-side request sketch follows these steps.)

  4. Review metrics on the API's performance in the AI usage dashboard, which tracks AI token and model usage.
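
After the API is published, client applications call the reverse proxy on the gateway rather than Azure OpenAI directly. The following Python sketch is illustrative only: the gateway host, base path, and the X-IBM-Client-Id header are assumptions that depend on how your catalog, plan, and application subscription are configured.

    import requests

    # Assumed values: replace with your gateway endpoint and the client ID
    # from your application's subscription to the published product.
    GATEWAY_URL = "https://gateway.example.com/my-org/sandbox/azure-openai-proxy"
    CLIENT_ID = "<client-id>"

    response = requests.post(
        f"{GATEWAY_URL}/chat/completions",
        headers={
            "X-IBM-Client-Id": CLIENT_ID,  # common API Connect client identification header
            "Content-Type": "application/json",
        },
        json={"messages": [{"role": "user", "content": "What does the AI Gateway do?"}]},
    )
    response.raise_for_status()
    print(response.json())

Requests routed through the gateway in this way are the calls whose token and model usage appears in the AI usage dashboard.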