Code generation workflow

The following user flow diagram depicts two scenarios. In the first scenario, as the user writes code in the editor, the extension sends requests to the backend service, which prompts the LLM. Based on the entered code and the surrounding code, the LLM returns suggestions to the editor. In the second scenario, the user enters a chat prompt; the extension sends the prompt to the backend service, which then prompts the LLM. Based on the content of the chat, the LLM returns a response to the chat interface.

Figure 1. Code generation user flow
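
To make the two request paths concrete, the sketch below shows how an editor extension might call the backend service: one request carries the code surrounding the cursor for inline suggestions, and the other carries a chat prompt. The endpoint paths, request and response shapes, and BACKEND_URL are illustrative assumptions, not the API of any specific product.

```typescript
// Minimal sketch of the two extension-to-backend requests described above.
// Endpoint paths and request/response shapes are assumptions for illustration.

interface CompletionRequest {
  prefix: string;   // code before the cursor
  suffix: string;   // code after the cursor
  language: string; // programming language of the open file
}

interface CompletionResponse {
  suggestions: string[]; // candidate completions returned by the LLM
}

const BACKEND_URL = "https://backend.example.com"; // assumed backend service URL

// Scenario 1: request inline code suggestions for the current editor context.
async function requestCodeSuggestions(
  req: CompletionRequest
): Promise<string[]> {
  const response = await fetch(`${BACKEND_URL}/v1/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  const data = (await response.json()) as CompletionResponse;
  return data.suggestions;
}

// Scenario 2: send a chat prompt and return the LLM's response text.
async function requestChatResponse(prompt: string): Promise<string> {
  const response = await fetch(`${BACKEND_URL}/v1/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const data = (await response.json()) as { reply: string };
  return data.reply;
}
```

In both cases the extension only forwards context to the backend service; the backend is responsible for constructing the actual LLM prompt and returning the model's output to the editor or chat interface.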