Automate Issue Suggestions with the `generateSuggestion` Helper
Hey guys! Today, we're diving into how to automate issue suggestions using the `generateSuggestion` helper. This is a super cool way to streamline our workflow and make it easier to resolve GitHub issues. We'll walk through creating the helper function, wiring it into our issue-comment workflow, adding a test, and updating our CI. Let's get started!
1. Create the `generateSuggestion` Helper (`src/utils/llm.ts`)
First off, we need to create our helper function, which will use the OpenAI API to generate suggestions based on the issue title and body. Make sure you've got your OpenAI API key ready to go! The `generateSuggestion` helper takes the title and body of a GitHub issue as input and returns a summarized suggestion for the next concrete step toward resolving it. Here's a breakdown of how it works:
- Import necessary modules:

  ```typescript
  import { Configuration, OpenAIApi } from "openai";
  ```

  This line imports the `Configuration` and `OpenAIApi` classes from the `openai` package, which are essential for interacting with the OpenAI API.

- Initialize the OpenAI API:

  ```typescript
  const openai = new OpenAIApi(
    new Configuration({ apiKey: process.env.OPENAI_API_KEY })
  );
  ```

  Here, we initialize the OpenAI API client with our API key, which is retrieved from the environment variables. It's super important to keep your API key safe and not expose it in your code. Using environment variables is the way to go!

- Define the `generateSuggestion` function:

  ```typescript
  export async function generateSuggestion(
    title: string,
    body: string
  ): Promise<string> {
    // ...
  }
  ```

  This function takes the `title` and `body` of the issue as strings and returns a `Promise` that resolves to a string: the generated suggestion.

- Craft the prompt:

  ```typescript
  const prompt = `You are a helpful assistant. Summarize the next concrete step to resolve the following GitHub issue.\n\nTitle: ${title}\n\nBody:\n${body}\n\nStep:`;
  ```

  This is where the magic happens! We create a prompt that tells the OpenAI model what we want it to do. The prompt includes the issue title and body, so the model has enough context to generate a relevant suggestion. Prompt engineering is key to getting good results from language models.
- Call the OpenAI API:

  ```typescript
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    max_tokens: 150,
    temperature: 0.2,
  });
  ```

  We use the `openai.createChatCompletion` method to send our prompt to the OpenAI API. Note that `gpt-3.5-turbo` is a chat model, so it goes through the chat completions endpoint and the prompt is wrapped in a single user message rather than being passed to `createCompletion`. We specify the model, the message, the maximum number of tokens for the response, and the temperature (which controls the randomness of the output). A lower temperature (like 0.2) makes the output more deterministic.

- Extract and return the suggestion:

  ```typescript
  return response.data.choices[0].message?.content?.trim() ?? "";
  ```

  We extract the generated text from the API response, trim any leading or trailing whitespace, and return it. If there's no text in the response, we return an empty string.
Here's the code snippet:

```typescript
// src/utils/llm.ts
import { Configuration, OpenAIApi } from "openai";

const openai = new OpenAIApi(
  new Configuration({ apiKey: process.env.OPENAI_API_KEY })
);

export async function generateSuggestion(
  title: string,
  body: string
): Promise<string> {
  const prompt = `You are a helpful assistant. Summarize the next concrete step to resolve the following GitHub issue.\n\nTitle: ${title}\n\nBody:\n${body}\n\nStep:`;

  // gpt-3.5-turbo is a chat model, so we call the chat completions endpoint.
  const response = await openai.createChatCompletion({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: prompt }],
    max_tokens: 150,
    temperature: 0.2,
  });

  return response.data.choices[0].message?.content?.trim() ?? "";
}
```
This helper function is a game-changer for automating issue resolution suggestions. It encapsulates the logic for interacting with the OpenAI API, making it easy to integrate into other parts of our codebase.
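If you want to sanity-check the helper before wiring it into anything, here's a minimal sketch of a one-off script, assuming `OPENAI_API_KEY` is set in your shell and the helper lives at `src/utils/llm.ts` as above. The script path and the sample issue text are made up for illustration:

```typescript
// scripts/try-llm.ts (hypothetical file), run with e.g. `npx ts-node scripts/try-llm.ts`
import { generateSuggestion } from "../src/utils/llm";

async function main() {
  // Sample issue text, purely for illustration
  const suggestion = await generateSuggestion(
    "Build fails on Node 20",
    "After upgrading CI to Node 20, `npm run build` exits with an unsupported-flag error."
  );
  console.log("Suggested next step:", suggestion);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```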
2. Wire It into the Issue-Comment Workflow
Now that we've got our helper function, we need to wire it into our issue-comment workflow. This means integrating it into the part of our code that handles new issues and generates comments, typically a file like `src/workflows/issueComment.ts`, or wherever you manage your issue-handling logic. The goal is that when a new issue is created, our system automatically generates a suggestion for the next step and posts it as a comment. Here's how we can do it:
- Import the `generateSuggestion` function:

  ```typescript
  import { generateSuggestion } from "../utils/llm";
  ```

  First, we need to import the `generateSuggestion` function we created earlier. This allows us to use it within our issue-comment workflow.

- Locate the issue-handling function:

  ```typescript
  // inside your handler that receives the issue payload
  async function handleNewIssue(issue) {
    // ...
  }
  ```

  Find the function that handles new issues. This is where we'll add our logic to generate and post the suggestion. The function receives the issue payload, which contains all the information about the issue, including the title and body.

- Extract issue details:

  ```typescript
  const { title, body } = issue;
  ```

  Extract the title and body from the issue object. These are the inputs we'll use for our `generateSuggestion` function.

- Generate the suggestion:

  ```typescript
  let suggestion = "";
  try {
    suggestion = await generateSuggestion(title, body);
  } catch (err) {
    console.error("LLM suggestion failed:", err);
    // fallback to a generic message if needed
    suggestion = "⚠️ Unable to generate a suggestion at this time.";
  }
  ```

  This is the core part. We call the `generateSuggestion` function with the title and body and store the result in a variable called `suggestion`. We also wrap this in a `try...catch` block to handle any errors that might occur (e.g., if the OpenAI API is unavailable). If an error occurs, we fall back to a generic message.

- Construct the comment body:

  ```typescript
  const commentBody = `## Suggested next step\n${suggestion}`;
  ```

  We create the body of the comment that will be posted on the issue. This includes a heading and the generated suggestion. Using Markdown in the comment body helps make it more readable.

- Post the comment:

  ```typescript
  await octokit.issues.createComment({
    owner: repoOwner,
    repo: repoName,
    issue_number: issue.number,
    body: commentBody,
  });
  ```

  We use the `octokit.issues.createComment` method to post the comment to the issue. This requires the owner, repository name, issue number, and the comment body. Make sure you have `octokit` configured to interact with the GitHub API.
Here's the code snippet:

```typescript
import { generateSuggestion } from "../utils/llm";

// inside your handler that receives the issue payload
async function handleNewIssue(issue) {
  const { title, body } = issue;

  // 1️⃣ Get the AI-generated next step
  let suggestion = "";
  try {
    suggestion = await generateSuggestion(title, body);
  } catch (err) {
    console.error("LLM suggestion failed:", err);
    // fallback to a generic message if needed
    suggestion = "⚠️ Unable to generate a suggestion at this time.";
  }

  // 2️⃣ Post the comment
  const commentBody = `## Suggested next step\n${suggestion}`;
  await octokit.issues.createComment({
    owner: repoOwner,
    repo: repoName,
    issue_number: issue.number,
    body: commentBody,
  });
}
```
In this snippet, we're importing the `generateSuggestion` function and using it within our `handleNewIssue` function. We're also including error handling, which is super important for production code: if the suggestion generation fails, we'll post a fallback message instead.
By integrating the `generateSuggestion` helper into our issue-comment workflow, we've automated the process of suggesting next steps for new issues. This can significantly improve our response time and help guide contributors on how to address issues effectively.
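For a fuller picture of where `octokit`, `repoOwner`, and `repoName` might come from, here's a hedged sketch of a self-contained handler for an `issues.opened` webhook payload. The use of `@octokit/rest`, the `GITHUB_TOKEN` variable, and the payload shape are assumptions for illustration, not something prescribed by the original workflow:

```typescript
// Hypothetical wiring for an "issues.opened" webhook (e.g. in src/workflows/issueComment.ts).
// GITHUB_TOKEN and the payload shape below are assumptions, not part of the original post.
import { Octokit } from "@octokit/rest";
import { generateSuggestion } from "../utils/llm";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

interface IssuesOpenedPayload {
  issue: { number: number; title: string; body: string | null };
  repository: { name: string; owner: { login: string } };
}

export async function onIssuesOpened(payload: IssuesOpenedPayload) {
  const { issue, repository } = payload;

  // Generate the suggestion, falling back to a generic message on failure
  let suggestion = "";
  try {
    suggestion = await generateSuggestion(issue.title, issue.body ?? "");
  } catch (err) {
    console.error("LLM suggestion failed:", err);
    suggestion = "⚠️ Unable to generate a suggestion at this time.";
  }

  // Post the suggestion as a comment on the new issue
  await octokit.issues.createComment({
    owner: repository.owner.login,
    repo: repository.name,
    issue_number: issue.number,
    body: `## Suggested next step\n${suggestion}`,
  });
}
```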
3. Add a Small Test
Testing is crucial, guys! We need to make sure our `generateSuggestion` function is working as expected, so let's add a small test in `src/utils/__tests__/llm.test.ts` that verifies the function returns a non-empty string for a sample issue, which indicates that it's successfully generating suggestions. Here's how to build it up:
- Import necessary modules:

  ```typescript
  import { generateSuggestion } from "../llm";
  ```

  Import the `generateSuggestion` function that you want to test.

- Describe the test suite:

  ```typescript
  describe("generateSuggestion", () => {
    // ...
  });
  ```

  Use the `describe` function to create a test suite for the `generateSuggestion` function. This helps organize your tests and provides a clear context for what you're testing.

- Define the test case:

  ```typescript
  it("should return a non-empty string for a sample issue", async () => {
    // ...
  });
  ```

  Use the `it` function to define a specific test case. The description should clearly state what the test is supposed to do.

- Prepare the test data:

  ```typescript
  const title = "Sample Issue Title";
  const body = "Sample issue body with some details.";
  ```

  Create sample data for the issue title and body. This data will be used as input for the `generateSuggestion` function.

- Call the function and assert the result:

  ```typescript
  const suggestion = await generateSuggestion(title, body);
  expect(suggestion).toBeTruthy();
  ```

  Call the `generateSuggestion` function with the sample data and use an assertion to check the result. In this case, we're using `expect(suggestion).toBeTruthy()` to ensure that the suggestion is a non-empty string.
Here's the code snippet:

```typescript
// src/utils/__tests__/llm.test.ts
import { generateSuggestion } from "../llm";

describe("generateSuggestion", () => {
  it("should return a non-empty string for a sample issue", async () => {
    const title = "Sample Issue Title";
    const body = "Sample issue body with some details.";

    const suggestion = await generateSuggestion(title, body);
    expect(suggestion).toBeTruthy();
  });
});
```
This test case ensures that our `generateSuggestion` function is working correctly by verifying that it returns a non-empty string for a sample issue. This is a basic but essential test to ensure the reliability of our automated suggestion system.
This test is simple but effective. It checks that our function returns something when given a sample issue. We're using Jest here, but you can use any testing framework you prefer.
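One thing to keep in mind: as written, this test calls the real OpenAI API. If you'd rather not do that on every run, a small variation (a sketch assuming Jest and the same file layout, not part of the original test) is to skip the live test whenever `OPENAI_API_KEY` isn't set; section 4 below covers full mocking as another option:

```typescript
// src/utils/__tests__/llm.test.ts (hypothetical variant of the test above)
import { generateSuggestion } from "../llm";

// Only exercise the real API when a key is available (e.g. in CI with the secret configured).
const liveIt = process.env.OPENAI_API_KEY ? it : it.skip;

describe("generateSuggestion (live API)", () => {
  liveIt("should return a non-empty string for a sample issue", async () => {
    const title = "Sample Issue Title";
    const body = "Sample issue body with some details.";

    const suggestion = await generateSuggestion(title, body);
    expect(suggestion).toBeTruthy();
  });
});
```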
4. Update CI / Lint
Next up, we need to make sure our Continuous Integration (CI) environment is set up correctly so the new code integrates smoothly into our existing development workflow. This involves ensuring that `OPENAI_API_KEY` is defined in the CI environment (or mocking the OpenAI client in tests) and adding the new file to the `tsconfig.json` include list if needed. Here's a detailed breakdown:
- Ensure `OPENAI_API_KEY` is defined in the CI environment:

  Our `generateSuggestion` function relies on the OpenAI API, which requires an API key. We need to make sure this key is available in our CI environment so that our tests and automated processes can run correctly. This typically involves setting an environment variable in your CI provider's settings, and it matters for security. (A small fail-fast guard for the missing-key case is sketched after this list.)

  - Why? Storing API keys directly in the code is a security risk. Environment variables allow us to keep sensitive information separate from our codebase.
  - How? Check your CI provider's documentation for instructions on setting environment variables. Common CI providers include GitHub Actions, Travis CI, CircleCI, and Jenkins.
- Mock the OpenAI client in tests (if necessary):

  If you don't want to make actual API calls during testing (which can be costly and slow), you can mock the OpenAI client. Mocking simulates the behavior of the API without actually calling it, which makes your tests faster and more reliable.

  - Why? Mocking makes tests faster, more reliable, and prevents unexpected charges from API usage during testing.
  - How? You can use testing libraries like Jest or Mocha with libraries like `jest-mock` or `sinon` to create mocks. Here's an example using Jest (note the mock targets `createChatCompletion`, matching the helper above):

    ```typescript
    jest.mock("openai", () => ({
      OpenAIApi: jest.fn().mockImplementation(() => ({
        createChatCompletion: jest.fn().mockResolvedValue({
          data: {
            choices: [{ message: { content: "Mock suggestion" } }],
          },
        }),
      })),
      Configuration: jest.fn().mockImplementation(() => ({})),
    }));
    ```
- Add the new file to the `tsconfig.json` include list (if needed):

  If you're using TypeScript, you need to make sure your new file (`src/utils/llm.ts`) is included in your `tsconfig.json` file. This tells the TypeScript compiler to include the file in the compilation process.

  - Why? TypeScript needs to know about all the files in your project to compile them correctly. If a file is not included, TypeScript won't compile it, and you might encounter errors.
  - How? Open your `tsconfig.json` file and check the `include` array. Add `src/utils/llm.ts` to it, or confirm that an existing glob such as `src/**/*` already covers it.
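As mentioned in the first item above, a clear failure mode for a missing key is worth having. Here's a minimal sketch of an optional guard you could add to `src/utils/llm.ts`; the `requireOpenAIKey` helper is hypothetical and not part of the original code:

```typescript
// Optional fail-fast guard for src/utils/llm.ts (hypothetical helper).
// Surfaces a clear error when OPENAI_API_KEY is missing instead of an opaque
// authentication failure from the OpenAI client later on.
export function requireOpenAIKey(): string {
  const apiKey = process.env.OPENAI_API_KEY;
  if (!apiKey) {
    throw new Error(
      "OPENAI_API_KEY is not set. Define it as a CI secret or in your local environment."
    );
  }
  return apiKey;
}
```

You could then pass `requireOpenAIKey()` into the `Configuration` constructor instead of reading `process.env.OPENAI_API_KEY` directly.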