common/tasks: New dir, add AI-driven forge review workflow #64
base: main
Conversation
Summary of Changes (Gemini Code Assist): This pull request establishes a foundational framework for integrating AI agents into the development workflow, particularly for code reviews. It introduces a dedicated directory for AI tasks and a detailed, human-centric AI-assisted code review process. This process leverages a local JSONL file for AI-generated comments, ensuring human inspection and approval before submission, and is complemented by a new comprehensive set of code review guidelines.
|
|
This builds on #63
Code Review
This pull request introduces a new tasks directory for AI agents, starting with a task for performing AI-assisted code reviews. It also adds a comprehensive REVIEW.md file with code review guidelines. The new workflow for code reviews is well-defined, using a local JSONL file for human inspection before submission. My review focuses on improving the clarity of instructions for the AI, and enhancing the robustness, portability, and consistency of the shell scripts provided in the perform-forge-review.md task.
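The local-JSONL flow that review describes can be sketched roughly as follows. The schema and field names here are assumptions for illustration, not the actual format defined in perform-forge-review.md:

```shell
# Hypothetical local review file: one JSON object per line. The first
# entry carries the overall review body; later entries are line comments.
cat > /tmp/review.jsonl <<'EOF'
{"body": "Overall: nice PoC, two nits inline."}
{"path": "scripts/submit-github.sh", "line": 42, "body": "Quote this variable."}
{"path": "REVIEW.md", "line": 7, "body": "Typo: recieve -> receive."}
EOF

# The human can open and edit this file before anything is submitted.
# jq -s slurps the line-oriented file into one array for the submit step:
REVIEW_BODY=$(jq -s -r '.[0].body' < /tmp/review.jsonl)
COMMENT_COUNT=$(jq -s '.[1:] | length' < /tmp/review.jsonl)
echo "$REVIEW_BODY ($COMMENT_COUNT comments)"
```

Because the file is plain line-delimited JSON, inspecting or deleting a comment before submission is just a text edit.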
Code Review
This pull request introduces a new tasks directory for AI agents, along with a comprehensive perform-forge-review task and general code review guidelines. The documentation is well-structured and detailed. My review focuses on improving the robustness of the shell script examples provided in the task definition. I've identified a few areas where the scripts could fail or behave unexpectedly, particularly with empty input files or comment bodies containing special characters. I've provided suggestions to make these scripts more resilient. I also have a minor suggestion to improve clarity in AGENTS.md.
OK so I did bootc-dev/bcvk#167 (review), which was a test case for this - but on the 4th try the agent incorrectly submitted it as not-draft! The instructions are clear... but it messed up. In the end this obviously calls for a dedicated tool (CLI binary, or MCP server, or both). One thing I think would likely help a lot with a dedicated tool is a check where "add review note" requires submitting the matching line text, much like how the "file edit" MCP tools tend to operate to increase reliability. Anyway, as far as PoC "code" goes, I'd say this was a success and merits further work!
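The matching-line-text safeguard mentioned above could be sketched like this. The function name, output path, and JSON fields are all hypothetical:

```shell
# Adding a review note requires the exact text of the line being
# commented on, and fails on any mismatch - mirroring how file-edit
# MCP tools verify context before editing.
add_review_note() {
    local file="$1" line="$2" expected="$3" body="$4"
    local actual
    actual=$(sed -n "${line}p" "$file")
    if [ "$actual" != "$expected" ]; then
        echo "error: line $line of $file reads '$actual', expected '$expected'" >&2
        return 1
    fi
    jq -cn --arg path "$file" --argjson line "$line" --arg body "$body" \
        '{path: $path, line: $line, body: $body}' >> /tmp/notes.jsonl
}
```

A stale line number then produces a hard error instead of a comment silently attached to the wrong code.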
Force-pushed 3e5c77a to 7e59a96
Cool, that's done. Now obviously I really dislike shell script, but this is currently easy to copy-pasta for folks who want to. If this grows beyond the PoC phase I'll probably investigate making it something more like a Real Tool in Rust or so.
/gemini review
Code Review
This pull request introduces a well-structured framework for AI-driven tasks, starting with a comprehensive code review workflow. The documentation is thorough, and the scripts for interacting with different forges are a great addition. My review focuses on improving the consistency and reusability of the newly added shell scripts. I've suggested changes to align default file paths and make the review attribution more flexible.
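The path-alignment and attribution suggestions could look something like this shared preamble. The environment variable names are assumptions, not the actual script interface:

```shell
# Each submit script (GitHub, GitLab, Forgejo) derives its defaults the
# same way, so they agree on where the review file lives, and the
# attribution line is overridable rather than hardcoded.
REVIEW_FILE="${FORGE_REVIEW_FILE:-/tmp/forge-review.jsonl}"
ATTRIBUTION="${FORGE_REVIEW_ATTRIBUTION:-Assisted-by: AI review tooling}"
echo "review file: $REVIEW_FILE"
echo "attribution: $ATTRIBUTION"
```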
Force-pushed 7e59a96 to 4842fdf
OK I spent way too long on this markdown and shell script today! But I am definitely liking the result so far!
```shell
    --argjson comments "$COMMENTS" \
    '{body: $body, comments: $comments}' | \
    gh api "repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" \
    -X POST --input - 2>&1) || true
```
AI: The || true here swallows errors from the gh api call. While the result is checked afterwards, if both jq and gh api fail, the combined error output may be confusing. Consider separating the API call from the jq processing or checking the exit status explicitly.
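The separation suggested there could be sketched as follows: build the payload first, then submit, so a jq failure and an API failure are reported distinctly instead of being merged and masked by `|| true`. The submission half is left as a comment since it needs a live forge:

```shell
# Illustrative inputs; in the real script these come from the JSONL file.
BODY="Overall review body"
COMMENTS='[]'

# Step 1: construct the payload, failing loudly if jq itself fails.
if ! PAYLOAD=$(jq -n --arg body "$BODY" --argjson comments "$COMMENTS" \
        '{body: $body, comments: $comments}'); then
    echo "error: failed to build review payload" >&2
    exit 1
fi

# Step 2 (sketch): the API call then checks its own exit status, e.g.:
#   if ! RESULT=$(echo "$PAYLOAD" | gh api "repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" \
#           -X POST --input - 2>&1); then
#       echo "error: review submission failed: $RESULT" >&2
#       exit 1
#   fi
echo "$PAYLOAD" | jq -r '.body'
```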
```shell
fi

# Extract the actual line content
ACTUAL_LINE=$(sed -n "${LINE}p" "$FILE")
```
AI: (low) Using sed to extract line content works, but per REVIEW.md guidelines about structured data, consider using a tool that provides better error handling. That said, for plain text line extraction this is reasonable.
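One alternative with slightly better error handling, as a sketch: awk can distinguish "the line is empty" from "the line doesn't exist", which sed's silent empty output cannot. The helper name is hypothetical:

```shell
# Print line $2 of file $1; exit nonzero if the file has fewer lines.
extract_line() {
    awk -v n="$2" 'NR == n { print; found = 1 } END { exit !found }' "$1"
}
```

A caller can then treat a missing line as a hard error rather than quietly attaching a comment to empty text.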
```shell
#!/usr/bin/env bash
# Submit a forge review to Forgejo/Gitea
#
# STATUS: DRAFT/UNTESTED - This script has not been tested against a real Forgejo/Gitea instance.
```
AI: Good to mark this as untested. Consider adding a note in the main perform-forge-review.md about which scripts have been tested in production.
```shell
VERSION_RESPONSE=$(curl -s --header "PRIVATE-TOKEN: $TOKEN" \
    "$GITLAB_URL/api/v4/projects/$PROJECT_ID/merge_requests/$MR_IID/versions" 2>&1)

VERSION_INFO=$(echo "$VERSION_RESPONSE" | jq '.[0]' 2>/dev/null)
```
AI: Redirecting stderr to /dev/null (2>/dev/null) here suppresses potential JSON parsing errors. If the API returns invalid JSON or an HTML error page, this will silently fail to the null check below. Consider preserving stderr for debugging.
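One way to address that, as a sketch: validate the response as JSON before extracting from it, so an HTML error page from the API surfaces loudly rather than as a null. A canned response stands in for the real API call here:

```shell
# Illustrative response; in the real script this comes from curl.
VERSION_RESPONSE='[{"id": 17, "head_commit_sha": "d2a83de"}]'

# jq -e exits nonzero if the input is not valid JSON, so the raw
# response (possibly an HTML error page) is echoed for debugging.
if ! echo "$VERSION_RESPONSE" | jq -e . >/dev/null; then
    echo "error: non-JSON response from the API:" >&2
    echo "$VERSION_RESPONSE" >&2
    exit 1
fi
VERSION_INFO=$(echo "$VERSION_RESPONSE" | jq '.[0]')
echo "$VERSION_INFO" | jq -r '.id'
```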
```shell
fi

# Parse JSONL: extract metadata and comments separately
if ! ALL_ENTRIES=$(jq -s '. // []' < "$REVIEW_FILE" 2>&1); then
```
AI: (low) The JSONL parsing logic (lines 57-86) is duplicated across all three submit scripts. Consider extracting this into a shared helper function or script to reduce maintenance burden when the format evolves.
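A shared helper along these lines could hold the common parsing; the path (e.g. scripts/lib/parse-review.sh) and variable names are assumptions:

```shell
# Parse the JSONL review file once, for all three submit scripts:
# the first entry is the review body, the rest are line comments.
parse_review_file() {
    local review_file="$1"
    ALL_ENTRIES=$(jq -s '. // []' < "$review_file") || return 1
    REVIEW_BODY=$(echo "$ALL_ENTRIES" | jq -r '.[0].body // ""')
    COMMENTS=$(echo "$ALL_ENTRIES" | jq '.[1:]')
}
```

Each submit script would then source the helper and call `parse_review_file "$REVIEW_FILE"`, so format changes land in one place.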
> ### Step 3: Build the Review
>
> The scripts in this task are located in the `scripts/` subdirectory relative to this
AI: Question: The task references scripts/forge-review-start.sh as a relative path from the task file location. However, when an agent checks out a different repo that has synced common/, the path would be common/agents/tasks/scripts/. Should the documentation clarify how to locate these scripts, or should agents be expected to discover them?
Force-pushed 4842fdf to d2a83de
And perhaps arguably this should actually live in our devcontainer. OK, after some research there are some promising projects out there like https://github.com/google/git-appraise - and in theory maybe we could store our draft review using git notes by default, or at least in a compatible format? But we can't hard-require pushing as notes, of course; most people will want the forge-native review model.
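The git-notes idea above can be sketched as a minimal self-contained demo. The notes ref name is an assumption (git-appraise itself uses its own refs under refs/notes/devtools/):

```shell
# Attach a draft review to the commit under review as a git note.
demo=$(mktemp -d) && cd "$demo"
git init -q .
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "change under review"

printf '%s\n' '{"body": "Overall: LGTM with nits."}' > review.jsonl
git notes --ref=forge-review add -F review.jsonl HEAD

# Notes refs do not propagate by default, so sharing stays opt-in:
#   git push origin refs/notes/forge-review
git notes --ref=forge-review show HEAD
```

This keeps the draft local by default while leaving a forge-independent path for teams that want it.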
Add a tasks directory designed primarily for AI agents to execute.
These are called "skills" in Claude Code and "commands" in OpenCode,
but they're simply structured markdown files.
The first task is perform-forge-review.md, which defines an AI-augmented
human-approved code review workflow. The key design principle is that
the AI builds a review in a local JSONL file, which the human can
inspect and edit before submission. The review is submitted as a
pending/draft review, allowing the human to make final edits in the
forge UI before publishing.
Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>