6 changes: 3 additions & 3 deletions .github/agents/contribution-checker.agent.md
@@ -120,18 +120,18 @@ Hey @alice 👋 — thanks for working on the auth refactor! Here are a few thin…

If you'd like a hand, you can assign this prompt to your coding agent:

````prompt
```prompt
Add unit tests for the rate-limiting middleware in src/auth/limiter.ts.
Cover the following scenarios:
1. Request under the limit — should pass through.
2. Request at the limit — should return 429.
3. Limit reset after window expires.
````
```
```

## Important

- **Read-only** — NEVER write to the target repository. No comments, no labels, no interactions.
- **Adapt to the project** — every CONTRIBUTING.md is different. Do not assume goals, boundaries, or labels that aren't in the document.
- Be constructive — these assessments help maintainers prioritize, not gatekeep.
- Be deterministic — apply the rules mechanically without hedging.
4 changes: 1 addition & 3 deletions docs/contribution-check.md
@@ -24,11 +24,9 @@ You can trigger this workflow manually via workflow_dispatch or let it run on it…

The workflow uses a pre-filtering step to intelligently select PRs for evaluation. You can customize:

- **Target repository**: Set a `TARGET_REPOSITORY` repository variable to check PRs in a different repository than where the workflow runs. By default, it checks the repository where the workflow is installed.
- **Schedule frequency**: Change `every 4 hours` to your preferred interval
- **PR filter logic**: Modify the skip conditions in the `github-script` step (e.g., which labels indicate trusted contributors, what constitutes a "small" PR)
- **Batch size**: Adjust the `TARGET` constant (default: 10 PRs per run)
- **Report format**: Customize the report layout rules in the main workflow prompt
- **Skip labels**: Update `SKIP_LABELS` and `SMALL_LABELS` sets to match your repository's labeling conventions
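
As an illustrative sketch, these knobs map to constants near the top of the pre-filter script. The names below match the script; the values are hypothetical tweaks, not recommended defaults:

```js
// Hypothetical customization of the pre-filter constants (values are examples only).
const TARGET = 5;        // evaluate at most 5 PRs per run (the script defaults to 10)
const MAX_PAGES = 5;     // scan up to 5 pages of open PRs (the script defaults to 3)

// Labels marking trusted authors, and labels marking PRs too small to bother checking.
// 'dependencies' here is an invented example of a label you might add.
const SKIP_LABELS = new Set(['maintainer', 'trusted-contributor', 'dependencies']);
const SMALL_LABELS = new Set(['size: XS', 'size: S']);
```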

The workflow requires a `CONTRIBUTING.md` file (or `.github/CONTRIBUTING.md` or `docs/CONTRIBUTING.md`) to evaluate PRs against. If no contribution guidelines exist, PRs will be marked with `no-guidelines` quality.
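
A minimal sketch of that lookup order (assuming the target repository is checked out at the workspace root):

```js
const fs = require('fs');

// Checked in order; the first file that exists supplies the guidelines.
const candidates = ['CONTRIBUTING.md', '.github/CONTRIBUTING.md', 'docs/CONTRIBUTING.md'];
const guidelinesPath = candidates.find(p => fs.existsSync(p));
// If guidelinesPath is undefined, every evaluated PR is reported with `no-guidelines` quality.
```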

145 changes: 22 additions & 123 deletions workflows/contribution-check.md
@@ -1,11 +1,5 @@
---
name: "Contribution Check"
description: |
Reviews a batch of open pull requests against the repository's contribution guidelines,
delegating evaluation to a subagent and compiling results into a structured report issue.
Helps maintainers efficiently prioritize community contributions by highlighting PRs that
are ready for review, need work, or fall outside contribution guidelines.

on:
schedule: "every 4 hours"
workflow_dispatch:
@@ -15,118 +9,13 @@ permissions:
issues: read
pull-requests: read

env:
TARGET_REPOSITORY: ${{ vars.TARGET_REPOSITORY || github.repository }}

tools:
github:
toolsets: [default]
lockdown: false

steps:
- name: Fetch and filter PRs
uses: actions/github-script@v8
with:
script: |
const fs = require('fs');
const [targetOwner, targetRepo] = process.env.GITHUB_REPOSITORY.split('/');

const TARGET = 10;
const MAX_PAGES = 3;
const PER_PAGE = 20;

const SKIP_LABELS = new Set(['maintainer', 'trusted-contributor']);
const SMALL_LABELS = new Set(['size: XS', 'size: S']);

const skipReason = (pr) => {
if (pr.author_association === 'MEMBER' || pr.author_association === 'OWNER') return 'maintainer';
const labels = pr.labels.map(l => l.name);
if (labels.some(l => SKIP_LABELS.has(l))) return 'maintainer';
if (labels.some(l => SMALL_LABELS.has(l))) return 'small';
if (labels.some(l => l.startsWith('close:') || l.startsWith('r: '))) return 'triaged';
return null;
};

const accepted = [];
const allPRs = [];

const sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms));

for (let page = 1; page <= MAX_PAGES && accepted.length < TARGET; page++) {
if (page > 1) await sleep(1000);
core.startGroup(`Page ${page}/${MAX_PAGES} (accepted ${accepted.length}/${TARGET} so far)`);

const batch = await github.rest.pulls.list({
owner: targetOwner,
repo: targetRepo,
state: 'open',
sort: 'created',
direction: 'desc',
per_page: PER_PAGE,
page,
});

const prs = batch.data;
core.info(`Fetched ${prs.length} PRs`);

if (prs.length === 0) {
core.info('No more PRs to fetch');
core.endGroup();
break;
}

for (const pr of prs) {
const labels = pr.labels.map(l => l.name).join(', ');
core.info(` #${pr.number} association=${pr.author_association} labels=[${labels}]`);
}

allPRs.push(...prs);

for (const pr of prs) {
if (accepted.length >= TARGET) break;
if (!skipReason(pr)) accepted.push(pr.number);
}

core.info(`Accepted: ${accepted.length}/${TARGET} | Skipped so far: ${allPRs.length - accepted.length}`);
core.endGroup();
}

const prList = accepted.slice(0, TARGET);
const skipped = allPRs.length - accepted.length;

core.startGroup('Final results');
core.info(`Fetched: ${allPRs.length} | Evaluated: ${accepted.length} | Skipped: ${skipped}`);
core.info(`PR list: ${prList.join(',')}`);
core.endGroup();

// Step summary
const rows = allPRs.map(pr => {
const num = pr.number;
const assoc = pr.author_association;
const labels = pr.labels.map(l => l.name).join(', ');
const reason = skipReason(pr) ?? 'evaluate';
const icon = reason === 'evaluate' ? '✅' : '⏭️';
return `| #${num} | \`${assoc}\` | ${labels} | ${icon} ${reason} |`;
});

const summary = [
'### 🔍 PR Pre-filter Results',
'',
`**Fetched:** ${allPRs.length} | **Evaluated:** ${accepted.length} | **Skipped:** ${skipped}`,
'',
'| PR | Association | Labels | Status |',
'|---|---|---|---|',
...rows,
].join('\n');

await core.summary.addRaw(summary).write();

// Write results to a file the agent can read
const result = {
pr_numbers: prList,
skipped_count: skipped,
evaluated_count: accepted.length,
};
fs.writeFileSync('pr-filter-results.json', JSON.stringify(result, null, 2));
core.info(`Wrote pr-filter-results.json: ${JSON.stringify(result)}`);

safe-outputs:
create-issue:
title-prefix: "[Contribution Check Report]"
@@ -137,21 +26,27 @@ safe-outputs:
allowed: [spam, needs-work, outdated, lgtm]
max: 4
target: "*"
target-repo: ${{ vars.TARGET_REPOSITORY }}
add-comment:
max: 10
target: "*"
target-repo: ${{ vars.TARGET_REPOSITORY }}
hide-older-comments: true
---

## Target Repository

The target repository is `${{ env.TARGET_REPOSITORY }}`. All PR fetching and subagent dispatch use this value.

## Overview

You are an **orchestrator**. Your job is to dispatch PRs to the `contribution-checker` subagent for evaluation and compile the results into a single report issue.
You are an **orchestrator**. Your job is to dispatch PRs to the `contribution-checker` subagent for evaluation and compile the results into a single report issue in THIS repository (`${{ github.repository }}`).

You do NOT evaluate PRs yourself. You delegate each evaluation to `.github/agents/contribution-checker.agent.md`.

## Pre-filtered PR List

A `pre-agent` step has already queried and filtered PRs. The results are in `pr-filter-results.json` at the workspace root. Read this file first. It contains:
A `pre-agent` step has already queried and filtered PRs from `${{ env.TARGET_REPOSITORY }}`. The results are in `pr-filter-results.json` at the workspace root. Read this file first. It contains:

```json
{
  "pr_numbers": [123, 456, 789],
  "skipped_count": 7,
  "evaluated_count": 3
}
```

@@ -165,16 +60,18 @@ If `pr_numbers` is empty, create a report stating no PRs matched the filters and…

## Step 1: Dispatch to Subagent

For each PR number in the list, delegate evaluation to the **contribution-checker** subagent (`.github/agents/contribution-checker.agent.md`).
For each PR number in `pr_numbers`, delegate evaluation to the **contribution-checker** subagent (`.github/agents/contribution-checker.agent.md`).

### How to dispatch

Call the contribution-checker subagent for each PR with this prompt:

```
Evaluate PR ${{ github.repository }}#<number> against the contribution guidelines.
Evaluate PR ${{ env.TARGET_REPOSITORY }}#<number> against the contribution guidelines.
```

The subagent accepts any `owner/repo#number` reference — the target repo is not hardcoded.

The subagent will return a single JSON object with the verdict and a comment for the contributor.
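
As a rough sketch of the dispatch loop (the file name and `TARGET_REPOSITORY` come from the workflow above; the prompt-building itself is illustrative):

```js
const fs = require('fs');

// Read the pre-computed filter results written by the pre-agent step.
const { pr_numbers } = JSON.parse(fs.readFileSync('pr-filter-results.json', 'utf8'));

// One evaluation prompt per PR, addressed to the target repository.
const prompts = pr_numbers.map(n =>
  `Evaluate PR ${process.env.TARGET_REPOSITORY}#${n} against the contribution guidelines.`
);
```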

### Parallelism
@@ -188,13 +85,13 @@ Gather all returned JSON objects. If a subagent call fails, record the PR with v…

### Posting comments

For each PR where the subagent returned a non-empty `comment` field and the quality is NOT `lgtm`, call the `add_comment` safe output tool to post the comment to the PR. Pass the PR number and the comment body from the subagent result.
For each PR where the subagent returned a non-empty `comment` field and the quality is NOT `lgtm`, call the `add_comment` safe output tool to post the comment to the PR in the target repository. Pass the PR number and the comment body from the subagent result. The `add_comment` tool is pre-configured with `target-repo` pointing to the target repository — you do NOT need to specify the repo yourself.

Do NOT post comments to PRs with `lgtm` quality — those are ready for maintainer review and don't need additional feedback.
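
A sketch of that gate, where `results` stands in for the collected subagent objects and `addComment` is a placeholder for the pre-configured `add_comment` safe output call:

```js
// Comment only when the subagent supplied feedback and the PR isn't ready as-is.
const shouldComment = (r) => Boolean(r.comment) && r.quality !== 'lgtm';

for (const r of results.filter(shouldComment)) {
  // Placeholder for the add_comment safe output tool; target-repo is pre-configured.
  await addComment({ pr_number: r.number, body: r.comment });
}
```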

## Step 2: Compile Report

Create a single issue in this repository. Use the `skipped_count` from `pr-filter-results.json`. Build the report tables from the JSON objects returned by the subagent (use `number`, `title`, `author`, `lines`, and `quality` fields).
Create a single issue in THIS repository. Use the `skipped_count` from `pr-filter-results.json`. Build the report tables from the JSON objects returned by the subagent (use `number`, `title`, `author`, `lines`, and `quality` fields).

Follow the **report layout rules** below — they apply to every report this workflow produces.
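
A minimal sketch of the table assembly, assuming `results` holds the subagent JSON objects (the authoritative column layout is defined by the report layout rules below):

```js
// One markdown row per evaluated PR, built from the subagent's fields.
const rows = results.map(r =>
  `| #${r.number} | ${r.title} | @${r.author} | ${r.lines} | ${r.quality} |`
);

const table = [
  '| PR | Title | Author | Lines | Quality |',
  '|---|---|---|---|---|',
  ...rows,
].join('\n');
```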

@@ -258,7 +155,7 @@ Evaluated: 4 · Skipped: 10

## Step 3: Label the Report Issue

After creating the report issue, call the `add_labels` safe output tool to apply labels based on the quality signals reported by the subagent. Collect the distinct `quality` values from all returned rows and add each as a label.
After creating the report issue, call the `add_labels` safe output tool to apply labels based on the quality signals reported by the subagent. Collect the distinct `quality` values from all returned rows and add each as a label. The `add_labels` tool is pre-configured with `target-repo` pointing to the target repository.

For example, if the batch contains rows with `lgtm`, `spam`, and `needs-work` quality values, apply all three labels: `lgtm`, `spam`, `needs-work`.
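
One way to collect those labels, as a sketch: `results` is the array of subagent objects, and `failed` is an assumed marker for PRs whose subagent call errored (the ❓ rows handled below).

```js
// Distinct quality values across the batch become the report issue's labels.
const labels = [...new Set(results.map(r => r.quality))];

// Failed subagent calls additionally get the `outdated` label.
if (results.some(r => r.failed)) labels.push('outdated');
```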

@@ -269,6 +166,8 @@ If any subagent call failed (❓), also apply `outdated`.
- **You are the orchestrator** — you dispatch and compile. You do NOT run the checklist yourself.
- **PR fetching and filtering is pre-computed** — a `pre-agent` step writes `pr-filter-results.json`. Read it at the start.
- **Subagent does the analysis** — `.github/agents/contribution-checker.agent.md` handles all per-PR evaluation logic.
- **Use safe output tools** — use `add-comment` and `add-labels` safe output tools to post comments and labels to PRs.
- **Read from `${{ env.TARGET_REPOSITORY }}`** — read-only access via GitHub MCP tools.
- **Write to `${{ github.repository }}`** — reports go here as issues.
- **Use safe output tools for target repository interactions** — use `add-comment` and `add-labels` safe output tools to post comments and labels to PRs in the target repository `${{ env.TARGET_REPOSITORY }}`. Never use `gh` CLI or direct API calls for writes.
- Close the previous report issue when creating a new one (`close-older-issues: true`).
- Be constructive in assessments — these reports help maintainers prioritize, not gatekeep.