Conversation

@cgwalters
Contributor

Add a tasks directory designed primarily for AI agents to execute.
These are called "skills" in Claude Code and "commands" in OpenCode,
but they're simply structured markdown files.

The first task is perform-forge-review.md, which defines an AI-augmented
human-approved code review workflow. The key design principle is that
the AI builds a review in a local JSONL file, which the human can
inspect and edit before submission. The review is submitted as a
pending/draft review, allowing the human to make final edits in the
forge UI before publishing.

Assisted-by: OpenCode (Claude Sonnet 4)
Signed-off-by: Colin Walters <walters@verbum.org>
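The JSONL-building step described above can be sketched in shell. The field names (`path`, `line`, `body`) are assumptions for illustration only; the actual schema lives in perform-forge-review.md.

```shell
#!/usr/bin/env bash
# Sketch: the AI appends one JSON object per line to a local review
# file, which the human can inspect and edit before submission.
REVIEW_FILE=$(mktemp)

add_comment() {
  # jq -n builds a fresh object; --arg/--argjson keep quoting safe
  jq -nc --arg path "$1" --argjson line "$2" --arg body "$3" \
    '{path: $path, line: $line, body: $body}' >> "$REVIEW_FILE"
}

add_comment "src/main.rs" 42 "Consider handling the error case here."
add_comment "README.md" 7 "Typo: 'recieve' -> 'receive'."

# Human inspection step: pretty-print every pending comment
jq . "$REVIEW_FILE"
```

Because each entry is a standalone JSON line, the human can delete or reword individual comments with any text editor before the file is submitted.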

@gemini-code-assist

Summary of Changes

Hello @cgwalters, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes a foundational framework for integrating AI agents into the development workflow, particularly for code reviews. It introduces a dedicated directory for AI tasks and a detailed, human-centric AI-assisted code review process. This process leverages a local JSONL file for AI-generated comments, ensuring human inspection and approval before submission, and is complemented by a new comprehensive REVIEW.md file that standardizes code review expectations for both human and AI reviewers.

Highlights

  • New AI Task Directory: A new tasks directory (common/agents/tasks/) has been introduced to house reusable definitions for AI agents, akin to "skills" or "commands" in other AI tools.
  • AI-Assisted Code Review Workflow: A new task, perform-forge-review.md, defines an AI-augmented code review workflow that prioritizes human oversight.
  • JSONL-based Review Generation: The workflow involves AI generating review comments into a local JSONL file, allowing humans to inspect and edit them before submission as a pending/draft review.
  • Comprehensive Code Review Guidelines: A new REVIEW.md file has been added, providing detailed guidelines for testing, code quality, commit organization, PR descriptions, and architecture, which AI agents are encouraged to utilize during reviews.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@cgwalters
Contributor Author

This builds on #63

@gemini-code-assist

Summary of Changes

This pull request establishes a structured approach for AI-driven development by introducing a dedicated tasks directory for AI agents and defining a robust AI-assisted code review workflow. This workflow emphasizes human oversight, allowing developers to review and refine AI-generated comments before they are submitted as draft reviews to various forge platforms. Complementing this, a comprehensive set of code review guidelines has been added to standardize quality and practices across the project.

Highlights

  • New AI Task Directory: A tasks directory has been added under common/agents/ to house reusable task definitions for AI agents, similar to "skills" or "commands" in other AI tools.
  • AI-Assisted Code Review Workflow: Introduced perform-forge-review.md, outlining a workflow where AI generates review comments into a local JSONL file, allowing human inspection and editing before submission as a pending/draft review on platforms like GitHub, GitLab, or Forgejo.
  • Comprehensive Code Review Guidelines: A new REVIEW.md file has been added, providing detailed guidelines for testing, code quality, commit organization, PR descriptions, and architecture, derived from past review feedback.
  • Updated Agent Documentation: AGENTS.md has been updated to reference the new code guidelines in REVIEW.md and the agents/tasks/ directory, guiding agents on how to utilize these resources.


@gemini-code-assist

Summary of Changes

This pull request establishes a foundational framework for integrating AI agents into the development workflow, primarily focusing on code review. It introduces a dedicated directory for AI tasks and a detailed, human-supervised AI-assisted code review process. This process ensures that AI contributions are inspectable and editable by humans before being published as draft reviews across various forge platforms. Additionally, it provides a comprehensive set of code review guidelines to standardize quality and practices within the project.

Highlights

  • New AI Task Directory: Introduced a new common/tasks directory specifically for AI agents to execute structured markdown tasks, analogous to 'skills' or 'commands'.
  • AI-Assisted Code Review Workflow: Added perform-forge-review.md, defining an AI-augmented human-approved code review workflow that supports GitHub, GitLab, and Forgejo.
  • Human Oversight for AI Reviews: The new workflow emphasizes human inspection and editing of AI-generated review comments stored in a local JSONL file before submission as a pending/draft review.
  • Comprehensive Code Review Guidelines: A new REVIEW.md file was added, outlining detailed expectations and standards for testing, code quality, commit organization, PR descriptions, and architecture, including Rust-specific guidance.
  • Agent Documentation Update: The common/AGENTS.md file was updated to reference the new code guidelines in REVIEW.md and the newly introduced agents/tasks/ directory.


@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new tasks directory for AI agents, starting with a task for performing AI-assisted code reviews. It also adds a comprehensive REVIEW.md file with code review guidelines. The new workflow for code reviews is well-defined, using a local JSONL file for human inspection before submission. My review focuses on improving the clarity of instructions for the AI, and enhancing the robustness, portability, and consistency of the shell scripts provided in the perform-forge-review.md task.

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a new tasks directory for AI agents, along with a comprehensive perform-forge-review task and general code review guidelines. The documentation is well-structured and detailed. My review focuses on improving the robustness of the shell script examples provided in the task definition. I've identified a few areas where the scripts could fail or behave unexpectedly, particularly with empty input files or comment bodies containing special characters. I've provided suggestions to make these scripts more resilient. I also have a minor suggestion to improve clarity in AGENTS.md.

@cgwalters
Contributor Author

OK so I did bootc-dev/bcvk#167 (review), which was a test case for this - but on the 4th try the agent incorrectly submitted it as not-draft! The instructions are clear...but it messed up.

In the end this obviously calls out for a dedicated tool (CLI binary, or MCP server, or both).

One thing I think would likely help a lot here with a dedicated tool is having a check where "add review note" requires submitting the matching line text, a lot like how the "file edit" MCP tools tend to operate to increase reliability.

Anyways as far as PoC "code" I'd say this was a success and merits further work!

@cgwalters cgwalters marked this pull request as ready for review December 19, 2025 19:58
@cgwalters
Contributor Author

> One thing I think would likely help a lot here with a dedicated tool is having a check where "add review note" requires submitting the matching line text, a lot like how the "file edit" MCP tools tend to operate to increase reliability.

Cool, that's done. Now obviously I really dislike shell script but this is currently easy to copy-pasta for folks who want to.

If this grows beyond the PoC phase I'll probably investigate making it something more like a Real Tool in Rust or so.
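A minimal sketch of that line-text check: adding a note must supply the exact text currently at the target line, mirroring how file-edit MCP tools verify context before editing. The helper name and the (path, line, body) entry format are illustrative assumptions, not from the PR.

```shell
#!/usr/bin/env bash
# Hypothetical helper: refuse to record a review note unless the
# caller proves it knows what the target line currently says.
add_review_note() {
  local file=$1 line=$2 expected=$3 body=$4
  local actual
  actual=$(sed -n "${line}p" "$file")
  if [ "$actual" != "$expected" ]; then
    echo "error: line $line of $file reads '$actual', not '$expected'" >&2
    return 1
  fi
  jq -nc --arg path "$file" --argjson line "$line" --arg body "$body" \
    '{path: $path, line: $line, body: $body}' >> review.jsonl
}
```

A stale or mis-numbered comment then fails fast at note-creation time instead of landing on the wrong line in the forge UI.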

@cgwalters
Contributor Author

/gemini review
(So meta)

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a well-structured framework for AI-driven tasks, starting with a comprehensive code review workflow. The documentation is thorough, and the scripts for interacting with different forges are a great addition. My review focuses on improving the consistency and reusability of the newly added shell scripts. I've suggested changes to align default file paths and make the review attribution more flexible.

@cgwalters
Contributor Author

OK I spent way too long on this markdown and shell script today! But I am definitely liking the result so far!

--argjson comments "$COMMENTS" \
'{body: $body, comments: $comments}' | \
gh api "repos/$OWNER/$REPO/pulls/$PR_NUMBER/reviews" \
-X POST --input - 2>&1) || true
Contributor Author

AI: The || true here swallows errors from the gh api call. While the result is checked afterwards, if both jq and gh api fail, the combined error output may be confusing. Consider separating the API call from the jq processing or checking the exit status explicitly.
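One way to implement that suggestion, sketched as a function so the jq build and the gh api call each get an explicit exit-status check. Variable names mirror the quoted snippet; the function name and error messages are illustrative.

```shell
#!/usr/bin/env bash
# Sketch: separate payload construction from submission so a jq
# failure and a gh api failure produce distinct, unambiguous errors.
submit_review() {
  local owner=$1 repo=$2 pr_number=$3 body=$4 comments=$5
  local payload result

  if ! payload=$(jq -n --arg body "$body" --argjson comments "$comments" \
        '{body: $body, comments: $comments}'); then
    echo "error: failed to build review payload" >&2
    return 1
  fi

  if ! result=$(gh api "repos/$owner/$repo/pulls/$pr_number/reviews" \
        -X POST --input - <<<"$payload" 2>&1); then
    echo "error: review submission failed: $result" >&2
    return 1
  fi
  printf '%s\n' "$result"
}
```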

fi

# Extract the actual line content
ACTUAL_LINE=$(sed -n "${LINE}p" "$FILE")
Contributor Author

AI: (low) Using sed to extract line content works, but per REVIEW.md guidelines about structured data, consider using a tool that provides better error handling. That said, for plain text line extraction this is reasonable.
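For illustration, an awk-based variant that fails loudly when the requested line is past end of file, where `sed -n "${LINE}p"` would silently print nothing. The helper name is an assumption.

```shell
#!/usr/bin/env bash
# Sketch: extract one line by number, with an exit status that
# reflects whether the line actually existed in the file.
extract_line() {
  local file=$1 line=$2
  awk -v n="$line" 'NR == n { print; found = 1 } END { exit !found }' "$file"
}
```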

#!/usr/bin/env bash
# Submit a forge review to Forgejo/Gitea
#
# STATUS: DRAFT/UNTESTED - This script has not been tested against a real Forgejo/Gitea instance.
Contributor Author

AI: Good to mark this as untested. Consider adding a note in the main perform-forge-review.md about which scripts have been tested in production.

VERSION_RESPONSE=$(curl -s --header "PRIVATE-TOKEN: $TOKEN" \
"$GITLAB_URL/api/v4/projects/$PROJECT_ID/merge_requests/$MR_IID/versions" 2>&1)

VERSION_INFO=$(echo "$VERSION_RESPONSE" | jq '.[0]' 2>/dev/null)
Contributor Author

AI: Redirecting stderr to /dev/null (2>/dev/null) here suppresses potential JSON parsing errors. If the API returns invalid JSON or an HTML error page, this will silently fail to the null check below. Consider preserving stderr for debugging.
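A sketch of the suggested fix: wrap the fetch in a function that checks the HTTP status explicitly via `curl -w` and lets jq errors surface instead of discarding them. The function name and error messages are illustrative.

```shell
#!/usr/bin/env bash
# Sketch: an HTML error page or non-200 status becomes a real,
# visible error instead of silently producing null downstream.
fetch_version_info() {
  local url=$1 token=$2
  local response http_code body

  # -w appends the HTTP status on its own line after the body
  response=$(curl -s -w '\n%{http_code}' \
    --header "PRIVATE-TOKEN: $token" "$url") || return 1
  http_code=${response##*$'\n'}
  body=${response%$'\n'*}

  if [ "$http_code" != "200" ]; then
    echo "error: versions API returned HTTP $http_code: $body" >&2
    return 1
  fi

  # jq -e exits nonzero on null, so an empty versions array fails too
  jq -e '.[0]' <<<"$body"
}
```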

fi

# Parse JSONL: extract metadata and comments separately
if ! ALL_ENTRIES=$(jq -s '. // []' < "$REVIEW_FILE" 2>&1); then
Contributor Author

AI: (low) The JSONL parsing logic (lines 57-86) is duplicated across all three submit scripts. Consider extracting this into a shared helper function or script to reduce maintenance burden when the format evolves.
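A sketch of that shared-helper idea: one sourced file owning the JSONL split, so the format can evolve in a single place. The `type` field used here to separate metadata from comments is an assumption about the format, not taken from the PR.

```shell
#!/usr/bin/env bash
# forge-review-lib.sh -- hypothetical helper sourced by each submit script
parse_review_file() {
  local review_file=$1
  # Slurp the JSONL entries into a single array (empty file -> [])
  ALL_ENTRIES=$(jq -s '. // []' < "$review_file") || return 1
  # Split entries; the "type" discriminator is an assumed convention
  METADATA=$(jq 'map(select(.type == "meta")) | .[0] // {}' <<<"$ALL_ENTRIES")
  COMMENTS=$(jq 'map(select(.type != "meta"))' <<<"$ALL_ENTRIES")
}
```

Each submit script would then `source forge-review-lib.sh` and consume `METADATA` and `COMMENTS` instead of reimplementing the parse.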


### Step 3: Build the Review

The scripts in this task are located in the `scripts/` subdirectory relative to this
Contributor Author

AI: Question: The task references scripts/forge-review-start.sh as a relative path from the task file location. However, when an agent checks out a different repo that has synced common/, the path would be common/agents/tasks/scripts/. Should the documentation clarify how to locate these scripts, or should agents be expected to discover them?

@cgwalters
Copy link
Contributor Author

> If this grows beyond the PoC phase I'll probably investigate making it something more like a Real Tool in Rust or so.

And perhaps arguably this should actually live in our devcontainer.

OK, after some research there are some promising projects out there like https://github.com/google/git-appraise - and in theory maybe we could store our draft review using git notes by default, or at least in a compatible format? But we can't hard-require pushing as notes, of course; most people will want the forge-native review model.
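A rough sketch of the git-notes idea. The `refs/notes/forge-review` ref name is an assumption, not a standard; the snippet builds a throwaway repo so the commands are runnable as-is.

```shell
#!/usr/bin/env bash
# Sketch: stash the draft review JSONL on the commit under a
# dedicated notes ref, so it can travel with the repo for anyone
# who fetches that ref (pushing it stays optional).
repo=$(mktemp -d)
cd "$repo" || exit 1
git init -q
git -c user.email=demo@example.com -c user.name=demo \
  commit -q --allow-empty -m "demo commit"

# A draft review comment awaiting submission
printf '{"path":"src/lib.rs","line":3,"body":"nit: typo"}\n' > review.jsonl

# Attach the draft to HEAD under the assumed notes ref
git notes --ref=forge-review add -f -F review.jsonl HEAD

# Read it back later, e.g. just before submitting to the forge
git notes --ref=forge-review show HEAD
```

This stays compatible with the forge-native model: the notes ref is just a local (optionally shared) staging area for the same JSONL the submit scripts consume.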
