Prompts library

The API Agent prompts library provides a comprehensive guide on using prompts to perform various tasks, such as generating test cases, running test cases, listing test suites, creating OpenAPI specifications, generating code, and chaining prompts.

Guidelines for using prompts

To achieve accurate and efficient results, use the following guidelines when entering prompts:
  1. Avoid words or a tone that conveys sarcasm or humor, as they can be misinterpreted.
  2. Use active voice, correct grammar and spelling, and keep your request clear and specific.
  3. Provide clear context and relevant keywords, and avoid vague or domain-specific terms such as generate test or build api.
  4. If the system cannot understand your request, it suggests alternative requests.

Prerequisites

Use the API Agent plug-in to perform tasks concerning API Connect.

  • Ensure that you are logged in to API Agent. For login and operation instructions, see Getting started.
  • You must have valid permissions to use the API Agent. For more information, see API Agent user roles.
  • Ensure that you have the same permissions in the API Connect API Manager interface. Even if you have permission to use the API Agent plug-in, actions fail if you do not have the necessary permissions to perform the same tasks through the API Manager UI.
    Note: When performing tasks related to the API Connect catalog, the API Agent plug-in currently supports operations on the Sandbox catalog. As a result, the catalog name is not explicitly stated in the tasks.

Generating test cases

Check that the API is published before generating test cases. For more information, see Creating and publishing draft API.

To generate a test case, complete the following steps:
  1. In Visual Studio Code, enter the following prompt:
    Generate test cases for <published api name>
    Note: The <published api name> must contain only the name of the API. Do not include the version information. For example, use petstore, not petstore_1.0.0.
  2. Click Send. API Agent displays the proposed plan along with the proper arguments.
  3. Click Start. API Agent connects to the smart generation feature of the test module, which iteratively creates test cases for the first 15 endpoints in the available API file. The test cases are stored in a test suite named AA_Test_Suite_<api_name>. After generation completes, API Agent displays a status message that indicates the number of test cases created and the test suite in which they are stored.

You can find the generated test cases in the Test module of the API Manager console.

Running test cases

To run test cases, complete the following steps:
  1. In Visual Studio Code, enter the following prompt:
    Run test cases for <published api name>
    Note: The <published api name> must contain only the name of the API. Do not include the version information. For example, petstore.
  2. Click Send. API Agent displays the proposed plan along with the proper arguments.
  3. Click Start.
  4. The smart generation feature of the API Connect test module generates data for the input parameters to run the tests. After the runs start successfully, API Agent displays a status message and the URLs of the test results in Visual Studio Code.

  5. Click the hyperlink to view the results of the individual test cases in the API Manager console.

  6. Optional: To delete the test case, enter the following prompt:
    delete test case <test_name> in suite <suite_name>

Listing test suites

To list test suites corresponding to a pOrgId, complete the following steps:
  1. Enter the following prompt:
    List test suites
  2. Click Send. API Agent displays the list of test suites corresponding to a pOrgId.
  3. Optional: To delete the test suite, enter the following prompt:
    delete test suite <suite_name>

Listing test cases

To list test cases corresponding to a test suite, complete the following steps:
  1. Ensure that test suites are available. If they are not, follow the steps in Generating test cases.
  2. To identify the available test suites, see Listing test suites.
  3. Enter the following prompt:
    List test cases for test suite <test suite name>
  4. Click Send. API Agent displays the list of test cases corresponding to a test suite.

Creating an OpenAPI

  1. To create an API specification for the orders table, enter the following prompt:
    create openapi for orders table
    or
    generate api spec for orders table
    Note: Ensure that no other schema in the database contains a table with the same name, because duplicate names can lead to confusion or errors.
  2. To create an API specification for the orders table in a specific schema, enter the following prompt:
    create openapi for orders table with schema_name as purchase_order
    or
    create openapi for orders table in purchase_order schema
  3. To create an API specification for the orders table by using the object name without a schema name, enter the following prompt:
    create openapi for orders table with object_name as purchase_order_orders
  4. To create an API specification for the orders table by using the source ID, enter the following prompt:
    create openapi for orders table with source_id as 6c115670-a7e1-11ef-b576-fa5e88d5392d
  5. To extract the OpenAPI specification for a table and transform it to the code generator-specific format, enter the following prompt:
    Extract the openapi spec for the given table name and transforms to Code generator specific format
  6. To create an API specification for the orders table by using the source ID, enter the following prompt:
    create api for orders table with source_id as 6c115670-a7e1-11ef-b576-fa5e88d5392d
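
As an illustration only (not actual API Agent output), the following Python sketch builds and saves a minimal OpenAPI document of the kind that these prompts describe for an orders table. The paths, the Order schema, and the file name are assumptions made for this example.

    # Illustrative sketch only: the general shape of an OpenAPI document for a
    # hypothetical "orders" table. API Agent generates the real specification.
    import json

    orders_openapi = {
        "openapi": "3.0.3",
        "info": {"title": "orders", "version": "1.0.0"},
        "paths": {
            "/orders": {
                "get": {
                    "summary": "List orders",
                    "responses": {"200": {"description": "A list of orders"}},
                },
                "post": {
                    "summary": "Create an order",
                    "responses": {"201": {"description": "Order created"}},
                },
            },
            "/orders/{order_id}": {
                "get": {
                    "summary": "Get one order",
                    "parameters": [
                        {
                            "name": "order_id",
                            "in": "path",
                            "required": True,
                            "schema": {"type": "integer"},
                        }
                    ],
                    "responses": {"200": {"description": "A single order"}},
                }
            },
        },
        "components": {
            "schemas": {
                "Order": {
                    "type": "object",
                    "properties": {
                        "order_id": {"type": "integer"},
                        "status": {"type": "string"},
                    },
                }
            }
        },
    }

    # Write the document to disk; OpenAPI documents can be JSON or YAML.
    with open("orders_openapi.json", "w") as spec_file:
        json.dump(orders_openapi, spec_file, indent=2)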

Generating code

Example:
  1. To generate code for a simple server type, you can enter one of the following prompts:
    Generate some Python FastAPI server code based on the following openapi @my_file.yaml
    or
    Please build FastAPI backend code using the OpenAPI file @api_spec.yaml
    or
    Can you develop a Python FastAPI server using the openapi specification provided in @service_definition.json
  2. To generate code for a database CRUD server type, you can enter one of the following prompts:
    Generate some Python FastAPI server code based on the following openapi @my_file.yaml that handles database operations
    or
    Please build FastAPI backend code, including some database integration, using the OpenAPI file @api_spec.yaml
    or
    Can you develop a database driven Python FastAPI server using the openapi specification provided in @service_definition.json
Note: For more information about using prompts with the OpenAPI Python Generator tool, see OpenAPI Python Generator tool.
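
As a rough illustration only (not the generator's actual output), the following sketch shows the kind of Python FastAPI server that the simple-server prompts describe. The Order model, the in-memory store, and the endpoint paths are assumptions made for this example.

    # Illustrative sketch only: the kind of FastAPI server that code generation
    # from an OpenAPI file can produce. The Order model and the in-memory store
    # are hypothetical; a database CRUD server would use a real database instead.
    from typing import Dict, List

    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="orders")


    class Order(BaseModel):
        order_id: int
        status: str


    orders: Dict[int, Order] = {}


    @app.get("/orders", response_model=List[Order])
    def list_orders() -> List[Order]:
        """Return all known orders."""
        return list(orders.values())


    @app.post("/orders", response_model=Order, status_code=201)
    def create_order(order: Order) -> Order:
        """Store a new order."""
        orders[order.order_id] = order
        return order


    @app.get("/orders/{order_id}", response_model=Order)
    def get_order(order_id: int) -> Order:
        """Return a single order, or a 404 error if it does not exist."""
        if order_id not in orders:
            raise HTTPException(status_code=404, detail="Order not found")
        return orders[order_id]

Assuming that the sketch is saved as main.py and that the fastapi and uvicorn packages are installed, you can run it locally with uvicorn main:app --reload.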

Sequential prompts

You can use API Agent to chain multiple tools together in a single prompt and run them one after another. The following are some examples of sequential prompts:
  1. To create and run test cases, enter the following prompt:
    create test cases for the petstore api and run the test cases

    Screenshot of create and run test cases

  2. To list draft or published APIs or products, enter the following prompt:
    list draft apis, draft products, published apis and published products
    list published apis

    Screenshot of list of draft or published APIs

  3. To create a draft API and generate FastAPI server code, enter the following prompt:
    create draft api from file @openapi.yaml and generate fastapi server code from the same api file
    create draft api using @file_name

    Screenshot of code generator option

  4. To review security of an API, enter the following prompt:
    review security of api API_NAME:VERSION
  5. To review security and validate an API, enter the following prompts:
    review security of api API_NAME:VERSION
    validate api API_NAME:VERSION

For more information about chaining multiple tools together, see APIConnect Task tool.