From 39313b967fddf83448a6e65fd61207ba2d8a2a32 Mon Sep 17 00:00:00 2001 From: Alan Treadway Date: Fri, 20 Feb 2026 13:05:58 +0000 Subject: [PATCH 01/17] git subrepo pull external/ag-shared subrepo: subdir: "external/ag-shared" merged: "032a0b07dce" upstream: origin: "https://github.com/ag-grid/ag-shared.git" branch: "latest" commit: "032a0b07dce" git-subrepo: version: "0.4.9" origin: "https://github.com/Homebrew/brew" commit: "6e88ffd4d2" --- external/ag-shared/.gitrepo | 4 +- .../prompts/commands/context/remember.md | 98 ----- .../commands/git/branch-save-context.md | 109 ----- .../ag-shared/prompts/commands/plan/review.md | 2 +- .../ag-shared/prompts/commands/pr/split.md | 18 +- .../{git/branch-load-context.md => recall.md} | 61 ++- .../ag-shared/prompts/commands/remember.md | 216 ++++++++++ .../ag-shared/prompts/guides/code-quality.md | 10 - .../skills/dev-server/_dev-server-core.md | 25 ++ .../install-for-cloud/install-for-cloud.sh | 12 +- .../scripts/sonar/sync-sonar-issues.ts | 388 ++++++++++++++++++ ...hWatch.config.js => studioWatch.config.js} | 19 +- external/ag-shared/scripts/watch/watch.js | 6 +- 13 files changed, 693 insertions(+), 275 deletions(-) delete mode 100644 external/ag-shared/prompts/commands/context/remember.md delete mode 100644 external/ag-shared/prompts/commands/git/branch-save-context.md rename external/ag-shared/prompts/commands/{git/branch-load-context.md => recall.md} (51%) create mode 100644 external/ag-shared/prompts/commands/remember.md create mode 100644 external/ag-shared/prompts/skills/dev-server/_dev-server-core.md create mode 100644 external/ag-shared/scripts/sonar/sync-sonar-issues.ts rename external/ag-shared/scripts/watch/{dashWatch.config.js => studioWatch.config.js} (64%) diff --git a/external/ag-shared/.gitrepo b/external/ag-shared/.gitrepo index 17de75a9a89..a975d199774 100644 --- a/external/ag-shared/.gitrepo +++ b/external/ag-shared/.gitrepo @@ -6,7 +6,7 @@ [subrepo] remote = https://github.com/ag-grid/ag-shared.git 
branch = latest - commit = ae9edea620feed79c99d07134b17f3d12f218c78 - parent = 4f8bfc807bc0f0bda0150626c53a43c4369c8fcb + commit = 032a0b07dce704562581073e69edffbe6319c956 + parent = 74a2de9a165e71ec9ef943721a1d8236a7faeadc method = rebase cmdver = 0.4.9 diff --git a/external/ag-shared/prompts/commands/context/remember.md b/external/ag-shared/prompts/commands/context/remember.md deleted file mode 100644 index 8c5682339a2..00000000000 --- a/external/ag-shared/prompts/commands/context/remember.md +++ /dev/null @@ -1,98 +0,0 @@ ---- -targets: ['*'] -description: 'Extract and persist learnings from conversations as agentic memory' -fork: true ---- - -# Remember - -Extract decisions, patterns, and learnings from the current conversation and persist them as agentic memory. - -## When to Use - -- After resolving a non-obvious issue with a specific approach -- When discovering a pattern that should be reused -- When user corrects agent behaviour or preferences -- After clarifying how existing rules should be interpreted - -## Memory Extraction - -Review the conversation to identify: - -1. **Decisions** - Specific choices made (e.g., "use X approach instead of Y") -2. **Corrections** - Mistakes caught and how to avoid them -3. **Patterns** - Reusable approaches that worked well -4. **Preferences** - User/project preferences revealed -5. 
**Clarifications** - Ambiguous rules made concrete - -For each candidate, extract: -- The core learning (1-2 sentences) -- Context where it applies -- Why it matters - -## Classification - -Determine the best location for each memory: - -| Type | Location | When | -|------|----------|------| -| Domain rule | `.rulesync/rules/{domain}.md` | Topic-specific guidance | -| Command enhancement | `.rulesync/commands/{cmd}.md` | Workflow-specific | -| Skill update | `external/prompts/skills/{skill}/` | Skill-scoped learning | -| New rule file | `.rulesync/rules/{new}.md` | Distinct topic, 3+ guidelines | - -**Constraints**: -- **Never update root files directly** - `CLAUDE.md`, `AGENTS.md`, and files with `root: true` frontmatter are managed separately. If a memory belongs there, recommend creating/updating a non-root rule that gets referenced instead. -- **Prefer existing files** - only create new files when the topic is clearly distinct and has sufficient content. - -## Interactive Presentation - -For each memory candidate, present to user: - -### Memory N of M - -**Learning**: [The extracted insight] - -**Recommended location**: `path/to/file.md` → Section Name - -**Options**: -1. āœ… Add to recommended location -2. šŸ“ Add to different location (specify) -3. āœļø Rephrase the learning -4. ā­ļø Skip this memory - -Use AskUserQuestion with these options. Wait for user response before proceeding. - -## Execution - -For approved memories: - -1. **Read** the target file to understand current structure -2. **Locate** the appropriate section (or create if needed) -3. **Format** the memory to match file conventions: - - Rules: Use `-` bullet points, match existing tone - - Commands: Integrate into relevant phase/section -4. **Write** the update using Edit tool -5. 
**Confirm** the change to user - -## Output - -After processing all memories, summarise: - -``` -## Memory Update Summary - -Added: N memories -Skipped: M memories -Files modified: -- path/to/file1.md (section updated) -- path/to/file2.md (new section added) -``` - -## Constraints - -- **Never update root files** - Do not modify `CLAUDE.md`, `AGENTS.md`, or any file with `root: true` in frontmatter. These are managed separately. Instead, create or update a non-root rule file that can be referenced. -- Keep memories atomic - one concept per update -- Match the writing style of the target file -- If unsure about location, ask user rather than guess -- Memories should be actionable, not just observations diff --git a/external/ag-shared/prompts/commands/git/branch-save-context.md b/external/ag-shared/prompts/commands/git/branch-save-context.md deleted file mode 100644 index b30f823f397..00000000000 --- a/external/ag-shared/prompts/commands/git/branch-save-context.md +++ /dev/null @@ -1,109 +0,0 @@ ---- -targets: ['*'] -description: 'Save/update branch-specific context to .context/ directory (branch memory)' ---- - -# Branch Save Context - -Save or update context for the current branch. Keep it concise - only preserve information useful for future sessions. - -## Usage - -`/branch-save-context` - -No arguments required - automatically detects current branch. - -## Workflow - -### STEP 1: Determine Context File Path - -```bash -# Get current branch name -BRANCH=$(git branch --show-current) - -# Get main repo root (works from worktrees) -MAIN_REPO=$(git rev-parse --path-format=absolute --git-common-dir | sed 's/\\.git$//') - -# Derive slug with hash suffix to avoid collisions (e.g. 
feature/foo vs feature-foo) -SLUG_BASE=$(echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//') -HASH=$(echo -n "$BRANCH" | shasum | cut -c1-6) -SLUG="${SLUG_BASE}-${HASH}" - -# Context file path -CONTEXT_FILE="${MAIN_REPO}.context/${SLUG}.md" - -# Ensure directory exists -mkdir -p "${MAIN_REPO}.context" -``` - -### STEP 2: Check for Existing Context - -If the context file already exists, read its current contents. When updating, **prune resolved items and transient issues**. - -### STEP 3: Gather Context Information - -Ask the user what to save. Keep responses brief. - -**For new context**: What's the branch intent? Any patterns worth remembering? - -**For updates**: What changed? Any gaps resolved? New patterns discovered? - -### STEP 4: Write Context File - -Use this minimal template: - -```markdown ---- -branch: {branch-name} -updated: {ISO-date} ---- - -# {branch-name} - -## Intent - -{1-2 sentences: what this branch accomplishes} - -## Patterns - -{Only include if there are reusable code patterns} - -## Gaps - -{Only persistent gaps - architectural decisions, known limitations} - -## References - -{Links to ticket, relevant docs} -``` - -### STEP 5: Confirm Save - -``` -Saved: {context-file-path} -``` - -## What to Keep vs Prune - -### KEEP (future-relevant) - -- High-level intent (stable goal of the branch) -- Reusable patterns (code you'll copy again) -- Persistent gaps (architectural decisions pending, known limitations) -- Reference links (ticket, design docs) - -### PRUNE (transient) - -- Temporary test failures -- Build/environment issues -- Implementation gaps now resolved -- Debugging notes and scratch work -- Session-specific troubleshooting -- Resolved gaps (remove the checkbox, just delete) - -## Principles - -- **Brevity over completeness** - if in doubt, leave it out -- **Future self test** - will this help in a new session next week? 
-- **No resolved items** - once fixed, remove it entirely -- **Patterns must be reusable** - don't document one-off code diff --git a/external/ag-shared/prompts/commands/plan/review.md b/external/ag-shared/prompts/commands/plan/review.md index ccface46ca2..8bd1cf2ed67 100644 --- a/external/ag-shared/prompts/commands/plan/review.md +++ b/external/ag-shared/prompts/commands/plan/review.md @@ -5,7 +5,7 @@ description: 'Review plans for completeness, correctness, and verifiability' # Plan Review Prompt -You are a plan reviewer for AG Charts. Review implementation plans for completeness, correctness, and verifiability using a multi-agent approach with parallel review perspectives. +You are a plan reviewer. Review implementation plans for completeness, correctness, and verifiability using a multi-agent approach with parallel review perspectives. ## Input Requirements diff --git a/external/ag-shared/prompts/commands/pr/split.md b/external/ag-shared/prompts/commands/pr/split.md index 4fbe9a7c0e1..06ea0a325a2 100644 --- a/external/ag-shared/prompts/commands/pr/split.md +++ b/external/ag-shared/prompts/commands/pr/split.md @@ -42,8 +42,17 @@ Verify the working tree is clean, not on main branch, and gh CLI is authenticate Determine the commit message prefix from the branch name: -1. If branch matches `ag-NNNNN/description` pattern: extract JIRA ticket (e.g., `AG-12345`) -2. Otherwise: derive prefix from branch name (e.g., `feature/null-keys` becomes `null-keys`) +1. Detect the project from the repo name (enclosing directory): + + | Repo | Branch Pattern | JIRA Prefix | + |---------------|---------------------------|-------------| + | `ag-studio` | `st-NNNNN/description` | `ST-` | + | `ag-charts` | `ag-NNNNN/description` | `AG-` | + | `ag-grid` | `ag-NNNNN/description` | `AG-` | + | *(default)* | `ag-NNNNN/description` | `AG-` | + +2. If branch matches the project's pattern: extract JIRA ticket (e.g., `ST-12345`, `AG-12345`) +3. 
Otherwise: derive prefix from branch name (e.g., `feature/null-keys` becomes `null-keys`) Store this prefix for use in commit messages and PR titles. @@ -208,9 +217,8 @@ This phase is essential. Each PR must be polished until reviewer-ready, not just - Would the PR title make sense in a changelog? 5. **Run Build Validation** - - Type checking: `yarn nx build:types ` - - Linting: `yarn nx lint ` - - Tests: `yarn nx test ` + - Run the project's pre-commit validation commands against each affected package + - Ensure type checking, linting, and tests all pass before proceeding 6. **Fix Issues** - Code quality issues: fix and amend or add fixup commits diff --git a/external/ag-shared/prompts/commands/git/branch-load-context.md b/external/ag-shared/prompts/commands/recall.md similarity index 51% rename from external/ag-shared/prompts/commands/git/branch-load-context.md rename to external/ag-shared/prompts/commands/recall.md index 29766edfdca..8150e47f2c1 100644 --- a/external/ag-shared/prompts/commands/git/branch-load-context.md +++ b/external/ag-shared/prompts/commands/recall.md @@ -1,28 +1,24 @@ --- targets: ['*'] -description: 'Load branch-specific context from .context/ directory (branch memory)' +description: 'Load branch context and browse project memory for session resumption' --- -# Branch Load Context +# Recall -Load saved context for the current branch to resume work with full awareness of intent, patterns, and known gaps. +## Usage — `/recall` -## Usage +Load branch-scoped context from `.context/` and optionally browse project-scoped memory in `.rulesync/`. -`/branch-load-context` +## STEP 1: Load Branch Memory -No arguments required - automatically detects current branch. 
- -## Workflow - -### STEP 1: Determine Context File Path +### Determine Context File Path ```bash # Get current branch name BRANCH=$(git branch --show-current) # Get main repo root (works from worktrees) -MAIN_REPO=$(git rev-parse --path-format=absolute --git-common-dir | sed 's/\\.git$//') +MAIN_REPO=$(git rev-parse --path-format=absolute --git-common-dir | sed 's/\.git$//') # Derive slug with hash suffix to avoid collisions (e.g. feature/foo vs feature-foo) SLUG_BASE=$(echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//') @@ -33,7 +29,7 @@ SLUG="${SLUG_BASE}-${HASH}" CONTEXT_FILE="${MAIN_REPO}.context/${SLUG}.md" ``` -### STEP 2: Load Context +### Load and Present Check if the context file exists: @@ -45,24 +41,7 @@ else fi ``` -### STEP 3: Present Context - -If the context file exists, read and present its contents to provide awareness of: - -- **Intent**: What this branch is trying to accomplish -- **Key Patterns**: Code snippets and approaches used in this branch -- **Known Gaps**: Outstanding issues or areas needing attention -- **References**: Relevant commands, docs, tickets, or websites - -If no context file exists, inform the user: - -> No saved context found for branch `{branch}`. -> -> Run `/branch-save-context` to create context for this branch. - -## Output Format - -When context is found, present a summary: +**If found**, read and present its contents: ```markdown ## Branch Context Loaded: {branch} @@ -80,8 +59,28 @@ When context is found, present a summary: {Full context file contents} ``` +**If not found**, inform the user: + +> No saved context found for branch `{branch}`. +> +> Run `/remember` and choose **Branch** to create context for this branch. + +## STEP 2: Project Memory Summary (optional) + +After presenting branch context, offer to show project memory: + +> Project rules and learnings in `.rulesync/rules/` load automatically based on file-pattern globs. 
+> Want to browse the project memory files? + +If the user says yes: + +1. List `.rulesync/rules/` files with a one-line description of each +2. User can request to read specific memory files for details +3. This is informational — project rules auto-load during normal work via globs + ## Notes - Context files are stored in the main repo root, shared across all worktrees - Context persists even when worktrees are deleted -- Files are gitignored - this is personal/local context, not shared +- Branch context files are gitignored — this is personal/local context, not shared +- Project memory (`.rulesync/rules/`) is committed and shared with the team diff --git a/external/ag-shared/prompts/commands/remember.md b/external/ag-shared/prompts/commands/remember.md new file mode 100644 index 00000000000..2c27e71abe4 --- /dev/null +++ b/external/ag-shared/prompts/commands/remember.md @@ -0,0 +1,216 @@ +--- +targets: ['*'] +description: 'Save branch context or project learnings as agentic memory' +--- + +# Remember + +## Usage — `/remember` + +Save branch-scoped context (`.context/`) or project-scoped learnings (`.rulesync/`). + +## STEP 1: Choose Memory Type + +Ask the user: + +**What would you like to remember?** + +1. **Branch** — save context for this branch (intent, patterns, gaps) +2. **Project** — extract learnings from this conversation into rules/skills +3. **Both** — do branch first, then project + +Use AskUserQuestion. Then follow the corresponding path(s) below. + +--- + +## Branch Memory Path + +Save or update context for the current branch. Keep it concise — only preserve information useful for future sessions. + +### STEP B1: Determine Context File Path + +```bash +# Get current branch name +BRANCH=$(git branch --show-current) + +# Get main repo root (works from worktrees) +MAIN_REPO=$(git rev-parse --path-format=absolute --git-common-dir | sed 's/\.git$//') + +# Derive slug with hash suffix to avoid collisions (e.g. 
feature/foo vs feature-foo) +SLUG_BASE=$(echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/--*/-/g' | sed 's/^-//' | sed 's/-$//') +HASH=$(echo -n "$BRANCH" | shasum | cut -c1-6) +SLUG="${SLUG_BASE}-${HASH}" + +# Context file path +CONTEXT_FILE="${MAIN_REPO}.context/${SLUG}.md" + +# Ensure directory exists +mkdir -p "${MAIN_REPO}.context" +``` + +### STEP B2: Check for Existing Context + +If the context file already exists, read its current contents. When updating, **prune resolved items and transient issues**. + +### STEP B3: Gather Context Information + +Ask the user what to save. Keep responses brief. + +**For new context**: What's the branch intent? Any patterns worth remembering? + +**For updates**: What changed? Any gaps resolved? New patterns discovered? + +### STEP B4: Write Context File + +Use this minimal template: + +```markdown +--- +branch: {branch-name} +updated: {ISO-date} +--- + +# {branch-name} + +## Intent + +{1-2 sentences: what this branch accomplishes} + +## Patterns + +{Only include if there are reusable code patterns} + +## Gaps + +{Only persistent gaps — architectural decisions, known limitations} + +## References + +{Links to ticket, relevant docs} +``` + +### STEP B5: Confirm Save + +``` +Saved: {context-file-path} +``` + +### What to Keep vs Prune + +**KEEP** (future-relevant): + +- High-level intent (stable goal of the branch) +- Reusable patterns (code you'll copy again) +- Persistent gaps (architectural decisions pending, known limitations) +- Reference links (ticket, design docs) + +**PRUNE** (transient): + +- Temporary test failures +- Build/environment issues +- Implementation gaps now resolved +- Debugging notes and scratch work +- Session-specific troubleshooting +- Resolved gaps (remove the checkbox, just delete) + +### Principles + +- **Brevity over completeness** — if in doubt, leave it out +- **Future self test** — will this help in a new session next week? 
+- **No resolved items** — once fixed, remove it entirely +- **Patterns must be reusable** — don't document one-off code + +--- + +## Project Memory Path + +Extract decisions, patterns, and learnings from the current conversation and persist them as agentic memory. + +### When to Use + +- After resolving a non-obvious issue with a specific approach +- When discovering a pattern that should be reused +- When user corrects agent behaviour or preferences +- After clarifying how existing rules should be interpreted + +### STEP P1: Memory Extraction + +Review the conversation to identify: + +1. **Decisions** — Specific choices made (e.g., "use X approach instead of Y") +2. **Corrections** — Mistakes caught and how to avoid them +3. **Patterns** — Reusable approaches that worked well +4. **Preferences** — User/project preferences revealed +5. **Clarifications** — Ambiguous rules made concrete + +For each candidate, extract: +- The core learning (1-2 sentences) +- Context where it applies +- Why it matters + +### STEP P2: Classification + +Determine the best location for each memory: + +| Type | Location | When | +|------|----------|------| +| Domain rule | `.rulesync/rules/{domain}.md` | Topic-specific guidance | +| Command enhancement | `.rulesync/commands/{cmd}.md` | Workflow-specific | +| Skill update | `external/prompts/skills/{skill}/` | Skill-scoped learning | +| New rule file | `.rulesync/rules/{new}.md` | Distinct topic, 3+ guidelines | + +**Constraints**: +- **Never update root files directly** — `CLAUDE.md`, `AGENTS.md`, and files with `root: true` frontmatter are managed separately. If a memory belongs there, recommend creating/updating a non-root rule that gets referenced instead. +- **Prefer existing files** — only create new files when the topic is clearly distinct and has sufficient content. 
+ +### STEP P3: Interactive Presentation + +For each memory candidate, present to user: + +#### Memory N of M + +**Learning**: [The extracted insight] + +**Recommended location**: `path/to/file.md` → Section Name + +**Options**: +1. Add to recommended location +2. Add to different location (specify) +3. Rephrase the learning +4. Skip this memory + +Use AskUserQuestion with these options. Wait for user response before proceeding. + +### STEP P4: Execution + +For approved memories: + +1. **Read** the target file to understand current structure +2. **Locate** the appropriate section (or create if needed) +3. **Format** the memory to match file conventions: + - Rules: Use `-` bullet points, match existing tone + - Commands: Integrate into relevant phase/section +4. **Write** the update using Edit tool +5. **Confirm** the change to user + +### STEP P5: Output + +After processing all memories, summarise: + +``` +## Memory Update Summary + +Added: N memories +Skipped: M memories +Files modified: +- path/to/file1.md (section updated) +- path/to/file2.md (new section added) +``` + +### Project Memory Constraints + +- **Never update root files** — Do not modify `CLAUDE.md`, `AGENTS.md`, or any file with `root: true` in frontmatter. These are managed separately. Instead, create or update a non-root rule file that can be referenced. 
+- Keep memories atomic — one concept per update +- Match the writing style of the target file +- If unsure about location, ask user rather than guess +- Memories should be actionable, not just observations diff --git a/external/ag-shared/prompts/guides/code-quality.md b/external/ag-shared/prompts/guides/code-quality.md index 9839bc1fdb1..c414c895f16 100644 --- a/external/ag-shared/prompts/guides/code-quality.md +++ b/external/ag-shared/prompts/guides/code-quality.md @@ -44,13 +44,3 @@ This guide covers code quality practices, including avoiding code bloat, comment - For test changes, verify completeness by comparing with related tests in the same file - Ensure naming clearly conveys intent (especially for boolean/flag variables) -## Essential Quality Commands - -- `yarn nx format` – format repo files; run from the project root before committing -- `yarn nx build:types ` – regenerate declaration files when touching exported APIs -- `yarn nx lint ` – apply ESLint and custom rules before final review - -## Formatting Best Practice - -- Make sure to run `yarn nx format` on any changes to ensure consistent formatting before commit -- Prefer running `yarn nx format` in the root of the repo to format changes, as there are config nuances that aren't taken into account when directly running tooling in more specific places diff --git a/external/ag-shared/prompts/skills/dev-server/_dev-server-core.md b/external/ag-shared/prompts/skills/dev-server/_dev-server-core.md new file mode 100644 index 00000000000..bbe974b52d4 --- /dev/null +++ b/external/ag-shared/prompts/skills/dev-server/_dev-server-core.md @@ -0,0 +1,25 @@ +## Build Watch Status Monitoring + +The `yarn nx dev` watch script (`external/ag-shared/scripts/watch/watch.js`) maintains a status file at `node_modules/.cache/ag-watch-status.json` for monitoring build state. 
+ +**Check this file to**: + +- Ensure no builds are in progress before starting operations (status != `BUILDING`) +- Monitor build health via `recentBuilds` array and `targetHistory` stats +- Track build progress after file changes + +**Key fields**: + +- `status`: `STARTING` | `RUNNING` | `BUILDING` | `IDLE` | `STOPPED` +- `currentBuild`: Active build details (only when `BUILDING`) +- `recentBuilds`: Last 10 builds with status/duration/errors +- `targetHistory`: Per-target success/failure counts + +**Usage**: + +```bash +# Wait for idle before operations +while [ "$(jq -r '.status' node_modules/.cache/ag-watch-status.json 2>/dev/null)" = "BUILDING" ]; do + sleep 2 +done +``` diff --git a/external/ag-shared/scripts/install-for-cloud/install-for-cloud.sh b/external/ag-shared/scripts/install-for-cloud/install-for-cloud.sh index 5bfb0213640..765d124097e 100755 --- a/external/ag-shared/scripts/install-for-cloud/install-for-cloud.sh +++ b/external/ag-shared/scripts/install-for-cloud/install-for-cloud.sh @@ -162,7 +162,7 @@ symlink_nx_cache() { # Try to symlink node_modules from root worktree if lockfiles match # Returns 0 if symlink was created, 1 if fallback to install needed try_symlink_node_modules() { - if ! is_claude_worktree; then + if ! is_claude_worktree && [[ -z "${ROOT_WORKTREE_PATH:-}" ]]; then return 1 fi @@ -224,12 +224,16 @@ main() { # Full mode: install dependencies, yarn, nx, etc. if [ -d node_modules ]; then log_info "node_modules directory exists, checking dependencies" + symlink_nx_cache if ! install_dependencies; then exit 2 fi elif try_symlink_node_modules; then # Successfully symlinked node_modules from root worktree log_info "Using symlinked node_modules, skipping install" + symlink_nx_cache + log_info "Running setup-prompts for worktree" + yarn postinstall:setup-prompts || log_error "setup-prompts failed, continuing anyway" else log_info "node_modules directory not found, performing fresh install" if ! 
install_yarn; then @@ -238,7 +242,7 @@ main() { if ! install_nx; then exit 2 fi - + symlink_nx_cache if ! install_dependencies; then exit 2 fi @@ -248,10 +252,6 @@ main() { exit 2 fi - - if ! symlink_nx_cache; then - exit 2 - fi - # Verify nx is available after installation if command -v nx &> /dev/null; then log_info "Installation completed successfully - nx is available" diff --git a/external/ag-shared/scripts/sonar/sync-sonar-issues.ts b/external/ag-shared/scripts/sonar/sync-sonar-issues.ts new file mode 100644 index 00000000000..6d2b62d89bd --- /dev/null +++ b/external/ag-shared/scripts/sonar/sync-sonar-issues.ts @@ -0,0 +1,388 @@ +#!/usr/bin/env tsx + +/** + * Script to sync accepted/false-positive issues from one SonarCloud project to another. + * + * This is useful for syncing exceptions from the development project (ag-charts-community-latest) + * to the release project (ag-charts-community) during the release process. + * + * Usage: + * SONAR_TOKEN=<token> npx tsx external/ag-shared/scripts/sonar/sync-sonar-issues.ts + * + * # Dry run (preview only): + * SONAR_TOKEN=<token> npx tsx external/ag-shared/scripts/sonar/sync-sonar-issues.ts --dry-run + * + * # Custom projects: + * SONAR_TOKEN=<token> npx tsx external/ag-shared/scripts/sonar/sync-sonar-issues.ts \ + * --source ag-charts-community-latest \ + * --target ag-charts-community + * + * Prerequisites: + * 1. Generate a SonarCloud token at: https://sonarcloud.io/account/security + * 2. 
Ensure you have "Administer Issues" permission on BOTH projects + */ + +const SONAR_TOKEN = process.env.SONAR_TOKEN; +const SONAR_BASE_URL = 'https://sonarcloud.io/api'; +const DEFAULT_SOURCE_PROJECT = 'ag-charts-community-latest'; +const DEFAULT_TARGET_PROJECT = 'ag-charts-community'; +const PAGE_SIZE = 500; +const RATE_LIMIT_DELAY_MS = 100; + +if (!SONAR_TOKEN) { + console.error('Error: SONAR_TOKEN environment variable is required'); + console.error('Generate a token at: https://sonarcloud.io/account/security'); + process.exit(1); +} + +interface Issue { + key: string; + rule: string; + component: string; + line?: number; + message: string; + status: string; + resolution?: string; + comments?: Array<{ markdown: string; createdAt: string }>; +} + +interface IssueSearchResponse { + total: number; + p: number; + ps: number; + issues: Issue[]; +} + +interface SyncConfig { + sourceProject: string; + targetProject: string; + dryRun: boolean; + verbose: boolean; +} + +interface MatchResult { + sourceIssue: Issue; + targetIssue: Issue | null; + signature: string; +} + +/** + * Parse command line arguments + */ +function parseArgs(): SyncConfig { + const args = process.argv.slice(2); + const config: SyncConfig = { + sourceProject: DEFAULT_SOURCE_PROJECT, + targetProject: DEFAULT_TARGET_PROJECT, + dryRun: false, + verbose: false, + }; + + for (let i = 0; i < args.length; i++) { + const arg = args[i]; + if (arg === '--source' && args[i + 1]) { + config.sourceProject = args[++i]; + } else if (arg === '--target' && args[i + 1]) { + config.targetProject = args[++i]; + } else if (arg === '--dry-run') { + config.dryRun = true; + } else if (arg === '--verbose') { + config.verbose = true; + } else if (arg === '--help') { + console.log(` +Usage: npx tsx sync-sonar-issues.ts [options] + +Options: + --source Source project key (default: ${DEFAULT_SOURCE_PROJECT}) + --target Target project key (default: ${DEFAULT_TARGET_PROJECT}) + --dry-run Preview matches without applying changes + 
--verbose Show detailed matching information + --help Show this help message + +Environment: + SONAR_TOKEN Required. SonarCloud API token with "Administer Issues" permission +`); + process.exit(0); + } + } + + return config; +} + +/** + * Normalize component path by removing the project key prefix + */ +function normalizeComponent(component: string, projectKey: string): string { + return component.replace(`${projectKey}:`, ''); +} + +/** + * Create a unique signature for issue matching + */ +function createIssueSignature(issue: Issue, projectKey: string): string { + return JSON.stringify({ + rule: issue.rule, + file: normalizeComponent(issue.component, projectKey), + line: issue.line ?? 0, + message: issue.message, + }); +} + +/** + * Fetch all issues from a project with pagination + */ +async function fetchAllIssues(projectKey: string, statuses: string[]): Promise<Issue[]> { + const allIssues: Issue[] = []; + let page = 1; + let total = 0; + + do { + const statusParam = statuses.join(','); + const url = `${SONAR_BASE_URL}/issues/search?componentKeys=${projectKey}&issueStatuses=${statusParam}&ps=${PAGE_SIZE}&p=${page}`; + + const response = await fetch(url, { + headers: { + Authorization: `Bearer ${SONAR_TOKEN}`, + }, + }); + + if (!response.ok) { + throw new Error(`Failed to fetch issues: ${response.status} ${response.statusText}`); + } + + const data: IssueSearchResponse = await response.json(); + allIssues.push(...data.issues); + total = data.total; + page++; + + // Rate limiting + await new Promise((resolve) => setTimeout(resolve, RATE_LIMIT_DELAY_MS)); + } while (allIssues.length < total); + + return allIssues; +} + +/** + * Determine the transition type based on the issue's current status/resolution + */ +function getTransitionType(issue: Issue): 'falsepositive' | 'accept' { + if (issue.resolution === 'FALSE-POSITIVE') { + return 'falsepositive'; + } + return 'accept'; +} + +/** + * Add a comment to an issue + */ +async function addComment(issueKey: string, comment: 
string): Promise<void> { + const url = `${SONAR_BASE_URL}/issues/add_comment`; + + const response = await fetch(url, { + method: 'POST', + headers: { + Authorization: `Bearer ${SONAR_TOKEN}`, + 'Content-Type': 'application/x-www-form-urlencoded', + }, + body: new URLSearchParams({ + issue: issueKey, + text: comment, + }), + }); + + if (!response.ok) { + const text = await response.text(); + throw new Error(`Failed to add comment to ${issueKey}: ${response.status} ${text}`); + } +} + +/** + * Transition an issue to a new status + */ +async function transitionIssue(issueKey: string, transition: string): Promise<void> { + const url = `${SONAR_BASE_URL}/issues/do_transition`; + + const response = await fetch(url, { + method: 'POST', + headers: { + Authorization: `Bearer ${SONAR_TOKEN}`, + 'Content-Type': 'application/x-www-form-urlencoded', + }, + body: new URLSearchParams({ + issue: issueKey, + transition: transition, + }), + }); + + if (!response.ok) { + const text = await response.text(); + throw new Error(`Failed to transition issue ${issueKey}: ${response.status} ${text}`); + } +} + +/** + * Match source issues to target issues + */ +function matchIssues(sourceIssues: Issue[], targetIssues: Issue[], config: SyncConfig): MatchResult[] { + // Build a map of target issues by signature for O(1) lookup + const targetMap = new Map<string, Issue>(); + for (const issue of targetIssues) { + const signature = createIssueSignature(issue, config.targetProject); + targetMap.set(signature, issue); + } + + // Match source issues to target issues + const results: MatchResult[] = []; + for (const sourceIssue of sourceIssues) { + const signature = createIssueSignature(sourceIssue, config.sourceProject); + const targetIssue = targetMap.get(signature) ?? 
null; + results.push({ sourceIssue, targetIssue, signature }); + } + + return results; +} + +/** + * Format a file path for display + */ +function formatFilePath(component: string, projectKey: string): string { + const path = normalizeComponent(component, projectKey); + // Truncate long paths + if (path.length > 60) { + return '...' + path.slice(-57); + } + return path; +} + +/** + * Main execution + */ +async function main() { + const config = parseArgs(); + + console.log('SonarCloud Issue Sync'); + console.log('====================='); + console.log(`Source: ${config.sourceProject}`); + console.log(`Target: ${config.targetProject}`); + if (config.dryRun) { + console.log('Mode: DRY RUN (no changes will be made)'); + } + console.log(''); + + // Fetch accepted/false-positive issues from source + console.log('Fetching accepted issues from source...'); + const sourceIssues = await fetchAllIssues(config.sourceProject, ['ACCEPTED', 'FALSE_POSITIVE']); + console.log(` Found ${sourceIssues.length} accepted/false-positive issues`); + + if (sourceIssues.length === 0) { + console.log('\nNo issues to sync. 
Done!'); + return; + } + + // Fetch open issues from target + console.log('Fetching open issues from target...'); + const targetIssues = await fetchAllIssues(config.targetProject, ['OPEN']); + console.log(` Found ${targetIssues.length} open issues`); + + // Match issues + console.log('\nMatching issues...'); + const matches = matchIssues(sourceIssues, targetIssues, config); + + const matched = matches.filter((m) => m.targetIssue !== null); + const unmatched = matches.filter((m) => m.targetIssue === null); + + console.log(` Matched: ${matched.length}/${sourceIssues.length}`); + console.log(` Unmatched: ${unmatched.length}`); + + // Show verbose matching info + if (config.verbose) { + console.log('\n--- Matched Issues ---'); + for (const { sourceIssue } of matched) { + const path = formatFilePath(sourceIssue.component, config.sourceProject); + const transition = getTransitionType(sourceIssue); + console.log(` āœ“ ${sourceIssue.rule} at ${path}:${sourceIssue.line ?? '?'} -> ${transition}`); + } + + if (unmatched.length > 0) { + console.log('\n--- Unmatched Issues (no corresponding open issue in target) ---'); + for (const { sourceIssue } of unmatched) { + const path = formatFilePath(sourceIssue.component, config.sourceProject); + console.log(` āœ— ${sourceIssue.rule} at ${path}:${sourceIssue.line ?? '?'}`); + } + } + } + + if (matched.length === 0) { + console.log('\nNo matching issues to sync. Done!'); + return; + } + + // Apply transitions + console.log('\nApplying transitions...'); + + if (config.dryRun) { + console.log(' [DRY RUN] Would mark the following issues:'); + for (const { sourceIssue, targetIssue } of matched) { + const transition = getTransitionType(sourceIssue); + const path = formatFilePath(targetIssue!.component, config.targetProject); + console.log(` - ${targetIssue!.key}: ${transition} (${path}:${targetIssue!.line ?? 
'?'})`); + } + console.log(`\n [DRY RUN] Would sync ${matched.length} issues`); + } else { + let successCount = 0; + let failCount = 0; + + for (const { sourceIssue, targetIssue } of matched) { + const transition = getTransitionType(sourceIssue); + const path = formatFilePath(targetIssue!.component, config.targetProject); + + try { + // Add sync comment + const comment = `Synced from ${config.sourceProject}: ${transition.toUpperCase()}`; + await addComment(targetIssue!.key, comment); + + // Apply transition + await transitionIssue(targetIssue!.key, transition); + + successCount++; + console.log(` āœ“ ${targetIssue!.key} -> ${transition} (${path})`); + + // Rate limiting + await new Promise((resolve) => setTimeout(resolve, RATE_LIMIT_DELAY_MS)); + } catch (error: unknown) { + failCount++; + const message = error instanceof Error ? error.message : String(error); + console.error(` āœ— ${targetIssue!.key}: ${message}`); + } + } + + console.log(`\nResults: ${successCount} succeeded, ${failCount} failed`); + } + + // Summary by rule + console.log('\n--- Summary by Rule ---'); + const ruleStats = new Map(); + for (const { sourceIssue, targetIssue } of matches) { + const stats = ruleStats.get(sourceIssue.rule) ?? { matched: 0, unmatched: 0 }; + if (targetIssue) { + stats.matched++; + } else { + stats.unmatched++; + } + ruleStats.set(sourceIssue.rule, stats); + } + + const sortedRules = [...ruleStats.entries()].sort((a, b) => b[1].matched - a[1].matched); + for (const [rule, stats] of sortedRules) { + const unmatchedNote = stats.unmatched > 0 ? 
` (${stats.unmatched} unmatched)` : ''; + console.log(` ${rule}: ${stats.matched} synced${unmatchedNote}`); + } + + console.log('\nāœ“ Complete'); +} + +// Run the script +main().catch((error) => { + console.error('Fatal error:', error); + process.exit(1); +}); diff --git a/external/ag-shared/scripts/watch/dashWatch.config.js b/external/ag-shared/scripts/watch/studioWatch.config.js similarity index 64% rename from external/ag-shared/scripts/watch/dashWatch.config.js rename to external/ag-shared/scripts/watch/studioWatch.config.js index 01661b163ea..e1f5873f5c2 100644 --- a/external/ag-shared/scripts/watch/dashWatch.config.js +++ b/external/ag-shared/scripts/watch/studioWatch.config.js @@ -1,6 +1,6 @@ const BASE_IGNORED_PROJECTS = ['all']; -const PACKAGE_PROJECTS = ['ag-dash']; -const EXAMPLE_GENERATOR_PROJECTS = ['ag-dash-generate-example-files']; +const PACKAGE_PROJECTS = ['ag-studio']; +const EXAMPLE_GENERATOR_PROJECTS = ['ag-studio-generate-example-files']; function getIgnoredProjects() { const ignoredProjects = [...BASE_IGNORED_PROJECTS]; @@ -11,16 +11,15 @@ function getIgnoredProjects() { function getProjectBuildTargets(project) { const buildTargets = []; - if (project.startsWith('ag-dash-docs')) { + if (project.startsWith('ag-studio-docs')) { buildTargets.push([project, ['generate'], 'watch']); } else if (EXAMPLE_GENERATOR_PROJECTS.includes(project)) { - buildTargets.push(['ag-dash-docs', ['generate-examples']]); + buildTargets.push(['ag-studio-docs', ['generate-examples']]); } else if (PACKAGE_PROJECTS.includes(project)) { - // TODO add this back in once this target exists - // buildTargets.push(['ag-dash-docs', ['generate-doc-references']]); + buildTargets.push(['ag-studio-docs', ['generate-doc-references']]); - if (project === 'ag-dash') { - buildTargets.push(['ag-dash', ['build'], 'watch']); + if (project === 'ag-studio') { + buildTargets.push(['ag-studio', ['build'], 'watch']); } } @@ -28,8 +27,8 @@ function getProjectBuildTargets(project) { } const 
externalBuildTriggers = [ - { file: '../ag-charts/node_modules/.cache/ag-build-queue.empty', projects: ['ag-dash'] }, - { file: '../ag-grid/node_modules/.cache/ag-build-queue.empty', projects: ['ag-dash'] }, + { file: '../ag-charts/node_modules/.cache/ag-build-queue.empty', projects: ['ag-studio'] }, + { file: '../ag-grid/node_modules/.cache/ag-build-queue.empty', projects: ['ag-studio'] }, ]; module.exports = { diff --git a/external/ag-shared/scripts/watch/watch.js b/external/ag-shared/scripts/watch/watch.js index ccd83f049ec..f8c99f73f33 100644 --- a/external/ag-shared/scripts/watch/watch.js +++ b/external/ag-shared/scripts/watch/watch.js @@ -8,7 +8,7 @@ * batched up into one update event. Watching `BUILD_QUEUE_EMPTY_FILE` in another process * can be used to trigger further updates eg, website refresh. * - * Usage: node ./watch [charts|grid] + * Usage: node ./watch [charts|grid|studio] */ const { spawn, spawnSync } = require('child_process'); const fsp = require('node:fs/promises'); @@ -25,7 +25,7 @@ const { } = require('./constants'); const chartsConfig = require('./chartsWatch.config'); const gridConfig = require('./gridWatch.config'); -const dashConfig = require('./dashWatch.config'); +const studioConfig = require('./studioWatch.config'); const RED = '\x1b[;31m'; const GREEN = '\x1b[;32m'; @@ -563,7 +563,7 @@ process.on('beforeExit', async () => { const LIBRARY_CONFIGS = { charts: chartsConfig, grid: gridConfig, - dash: dashConfig, + studio: studioConfig, }; const LIBRARY_KEYS = Object.keys(LIBRARY_CONFIGS); From 2c3580ddb4794276bb20839488b7dd3714f588b4 Mon Sep 17 00:00:00 2001 From: Alan Treadway Date: Fri, 20 Feb 2026 13:12:36 +0000 Subject: [PATCH 02/17] git subrepo pull external/ag-shared subrepo: subdir: "external/ag-shared" merged: "4389a295dbc" upstream: origin: "https://github.com/ag-grid/ag-shared.git" branch: "latest" commit: "4389a295dbc" git-subrepo: version: "0.4.9" origin: "https://github.com/Homebrew/brew" commit: "6e88ffd4d2" --- 
external/ag-shared/.gitrepo | 4 +- .../commands/docs/_docs-review-core.md | 384 +++++++ .../docs/_release-docs-review-core.md | 941 ++++++++++++++++++ .../prompts/guides/docs-review-integration.md | 238 +++++ .../prompts/guides/nx-conventions.md | 12 + .../ag-shared/prompts/guides/setup-prompts.md | 12 + .../prompts/guides/website-astro-pages.md | 334 +++++++ .../prompts/guides/website-browser-testing.md | 89 ++ .../ag-shared/prompts/guides/website-css.md | 490 +++++++++ .../install-for-cloud/install-for-cloud.sh | 38 +- .../scripts/setup-prompts/verify-rulesync.sh | 66 +- 11 files changed, 2575 insertions(+), 33 deletions(-) create mode 100644 external/ag-shared/prompts/commands/docs/_docs-review-core.md create mode 100644 external/ag-shared/prompts/commands/docs/_release-docs-review-core.md create mode 100644 external/ag-shared/prompts/guides/docs-review-integration.md create mode 100644 external/ag-shared/prompts/guides/nx-conventions.md create mode 100644 external/ag-shared/prompts/guides/website-astro-pages.md create mode 100644 external/ag-shared/prompts/guides/website-browser-testing.md create mode 100644 external/ag-shared/prompts/guides/website-css.md diff --git a/external/ag-shared/.gitrepo b/external/ag-shared/.gitrepo index a975d199774..1fbeb45b691 100644 --- a/external/ag-shared/.gitrepo +++ b/external/ag-shared/.gitrepo @@ -6,7 +6,7 @@ [subrepo] remote = https://github.com/ag-grid/ag-shared.git branch = latest - commit = 032a0b07dce704562581073e69edffbe6319c956 - parent = 74a2de9a165e71ec9ef943721a1d8236a7faeadc + commit = 4389a295dbca4303ed7b6e546860dac255d189e2 + parent = 39313b967fddf83448a6e65fd61207ba2d8a2a32 method = rebase cmdver = 0.4.9 diff --git a/external/ag-shared/prompts/commands/docs/_docs-review-core.md b/external/ag-shared/prompts/commands/docs/_docs-review-core.md new file mode 100644 index 00000000000..3d0e810396e --- /dev/null +++ b/external/ag-shared/prompts/commands/docs/_docs-review-core.md @@ -0,0 +1,384 @@ +# Documentation 
Review Core Methodology + +This file contains the shared documentation review methodology. It is included by product-specific wrapper prompts that provide configuration (paths, conventions, resolution rules). + +## Required Product Configuration + +The product wrapper that references this file MUST define these sections (referenced by exact name below): + +- **Input Requirements** — documentation page path pattern, dev URL +- **File Resolution Rules** — table mapping documentation references to TypeScript definition files +- **Implementation Resolution Rules** — table mapping features to source implementation files +- **Example Path Pattern** — directory layout for examples +- **Exceptions File Path** — location of the per-page exceptions file +- **Output Paths** — where review plans, reports, and summaries are written +- **Default Value Verification Hierarchy** — how runtime defaults are resolved in the product +- **Product-Specific Conventions** — accepted patterns, known pitfalls, enablement conventions + +Optional sections: + +- **Orchestration Indicator** — if the product has a batch orchestration script, name it here so Strict Mode can be detected + +## Execution Mode Detection + +Check for any of these indicators: + +- "EXECUTION CONTEXT: ORCHESTRATED" in the prompt +- Session ID provided +- Invoked via the orchestrator script named in **Orchestration Indicator** (if provided) + +**If found**: **STRICT MODE** - All MCP tools REQUIRED. If ANY tool is missing, STOP immediately with error message: `ERROR: Cannot proceed in orchestrated mode - missing required MCP tool [name]`. Do NOT attempt review or fallbacks. + +**If not found**: **ADAPTIVE MODE** - Check available tools. If claude-in-chrome or Task tool unavailable, display degraded mode warning and request explicit user confirmation before proceeding. 
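The STRICT/ADAPTIVE detection rules above can be sketched in TypeScript. This is an illustrative model of the decision logic, not code from the repository; the type and function names are assumptions.

```typescript
type ReviewMode = 'strict' | 'adaptive';

interface ModeContext {
  prompt: string;
  sessionId?: string;
  // Set when the product's Orchestration Indicator matches the invocation.
  invokedByOrchestrator?: boolean;
}

// Any one indicator is sufficient to force STRICT mode.
function detectExecutionMode(ctx: ModeContext): ReviewMode {
  const orchestrated =
    ctx.prompt.includes('EXECUTION CONTEXT: ORCHESTRATED') ||
    ctx.sessionId !== undefined ||
    ctx.invokedByOrchestrator === true;
  return orchestrated ? 'strict' : 'adaptive';
}

// In STRICT mode a missing tool is a hard failure; in ADAPTIVE mode the
// missing tools are returned so the caller can degrade with confirmation.
function checkRequiredTools(mode: ReviewMode, availableTools: Set<string>): string[] {
  const required = ['claude-in-chrome', 'Task'];
  const missing = required.filter((tool) => !availableTools.has(tool));
  if (mode === 'strict' && missing.length > 0) {
    throw new Error(`ERROR: Cannot proceed in orchestrated mode - missing required MCP tool ${missing[0]}`);
  }
  return missing;
}
```

Note the asymmetry: STRICT mode fails fast with no fallback, while ADAPTIVE mode surfaces the gap and defers the decision to the user.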
+ +### Degraded Mode Warning Template + +``` +[WARNING] DEGRADED MODE DETECTED + +Missing capabilities: +- Browser automation (claude-in-chrome) +- Example testing delegation (Task tool) + +Limitations: +- Static code analysis only for examples +- No automated screenshots +- No runtime behavior validation +- No interactive testing + +Still included: +- Full API and TypeScript validation +- Configuration consistency checking +- Static example code analysis +- Documentation accuracy assessment + +Continue in degraded mode? (Please confirm explicitly) +``` + +## Mode Adaptations Reference + +| Capability | STRICT/Full Mode | ADAPTIVE/Degraded Mode | +| --------------------- | ----------------------------------------------- | -------------------------------------------- | +| **Tool Requirements** | claude-in-chrome + Task tool (REQUIRED) | Read/Write only (optional tools unavailable) | +| **Example Testing** | Delegate to example-tester agent via Task tool | Static code analysis of example files | +| **Visual Testing** | Full screenshot capture + interaction testing | Skip with warning marker | +| **Report Markers** | `[PASSED]`, `[WARNING]`, `[CRITICAL]` | Prefix with "STATIC ANALYSIS ONLY" | +| **Visual Evidence** | Reference screenshots by filename | Note "[SKIPPED] VISUAL TESTING" | +| **Interaction Tests** | Test all interactive features described in docs | Mark "Unable to verify (requires browser)" | + +Apply these adaptations throughout all phases based on detected mode. + +## Three-Phase Review Process + +### Phase 1: Create Review Plan + +**Goal**: Analyse documentation and create structured validation plan using mechanical file resolution. + +#### Step 1: Discover Files to Review + +**A. Read documentation page**: Use the path pattern from **Input Requirements** in the product configuration. + +**B. Extract API surface systematically**: + +1. **Scan for code blocks** containing configuration objects to extract property names +2. 
**Extract interface references** from docs (look for product-specific interface naming patterns) +3. **List all example references** (pattern: ` **Default Value Verification**: When checking defaults, always verify against the hierarchy described in the **Default Value Verification Hierarchy** in the product configuration. Theme/module defaults override decorator defaults and represent actual runtime behaviour. + + - Verify APIs against TypeScript definitions using paths from **File Resolution Rules** + - Check implementations using paths from **Implementation Resolution Rules** + - Validate default values using the hierarchy from product configuration + - Verify code snippets work correctly + - Document findings with: + - Status indicators: `[CRITICAL]`, `[WARNING]`, or `[PASSED]` + - Specific file:line locations + - Code examples showing incorrect vs correct + - For defaults: Show all layers of the verification hierarchy + +3. **Example Testing** (mode-dependent, see Mode Adaptations table): + + **Full Mode**: Delegate to example-tester agent via Task tool with: + + - Example path and expected behaviours from documentation + - Specific features that should be visible/testable + - Configuration patterns mentioned in docs + - Structure agent findings by example as returned + + > **Note**: Full Mode example testing requires a product-specific `example-tester` sub-agent. If the product has not configured one, fall back to Degraded Mode for example testing. 
+ + **Degraded Mode**: For each example, perform static analysis: + + - Read source files from the **Example Path Pattern** + - Extract documentation claims about the example + - Validate: configuration consistency, API usage, property validation, data compatibility, best practices + - Report format: + + ``` + #### [Example Name] - STATIC ANALYSIS ONLY + **Location**: `_examples/[example-name]/` + + [PASSED] **Configuration Verified**: [list validated configurations] + + [CRITICAL] **Configuration Issues**: + - **Issue**: [Specific mismatch] + - **Documentation claims**: [What docs say] + - **Actual code**: [What's in example] + - **Fix Required**: [Specific action] + + [WARNING] **Unable to Verify (requires browser)**: Runtime behavior, Visual rendering, Interactive features, Tooltip content + ``` + +4. **Visual & Interaction Testing** (mode-dependent, see Mode Adaptations table): + + **Full Mode**: Perform screenshot capture and interaction testing. Navigate to dev URL (from **Input Requirements**), test interactive features, save screenshots to designated directories. + + **Degraded Mode**: Add section noting: + + ``` + ### Visual & Interaction Testing + [SKIPPED] - claude-in-chrome unavailable + + Could not verify: Screenshot capture, Runtime rendering, Interactive features, Tooltip behavior, Responsive layout + + Manual verification recommended for critical visual features. + ``` + +5. **Content Quality** (always performed): + - Completeness of feature coverage + - Accuracy against code analysis (static or runtime based on mode) + - Missing documentation for discovered features + +**Output**: Write to the reports path specified in **Output Paths** (see Report Structure below). + +### Phase 3: Generate Summary + +**Goal**: Aggregate findings from all reviewed pages. + +Process page reports in batches to avoid context limits: + +1. Process ~10 pages per batch → temporary `batch-summary-{n}.json` +2. Aggregate batch summaries → final report +3. 
Identify patterns and prioritise recommendations + +**Output**: Write to the summary path specified in **Output Paths**. + +## Report Structure Requirements (Phase 2) + +All reports must include these sections in order. Use the specified structure and include all required elements. + +### 1. Executive Summary + +Required elements: + +- Brief assessment of page +- Review mode indicator (Full MCP / Degraded) +- Overall status: `[CRITICAL ISSUES]`, `[ISSUES FOUND]`, or `[ALL PASSED]` +- Issue counts by category: Technical Accuracy, Example Consistency (note if static only), Visual/Interaction (or SKIPPED), Content Quality + +### 2. Review Limitations (if in degraded mode) + +List what was skipped (browser testing, screenshots, interactions) and what was completed (static analysis, configuration verification, API validation). + +### 3. Known Exceptions + +List exceptions from the exceptions file (see **Exceptions File Path** in product configuration) if any exist, or note if no exceptions file found. + +### 4. Technical Accuracy Issues + +Structure each finding as: + +- Status indicator: `[PASSED]`, `[WARNING]`, or `[CRITICAL]` +- Specific file:line reference +- Code comparison (incorrect → correct) +- Implementation file reference + +Example: + +``` +[CRITICAL] **Default Value Mismatch** at `index.mdoc:45` +- Docs claim: `spacing: 10` (default) +- Decorator default: `spacing: number = 1` (Properties.ts:95) +- Theme/module default: `spacing: 20` (Module.ts:51) <-- Actual runtime default +- TypeScript comment: `spacing: 10` (Options.ts:74) <-- Stale +- Fix: Update documentation and TypeScript comment to reflect runtime default of 20 +``` + +### 5. Example Consistency Issues + +Structure findings by example with clear headers. Include appropriate status labels (CRITICAL FAILURE, DOCUMENTATION MISMATCH, etc.). Provide specific fix instructions. 
+ +- **Full mode**: Include example-tester agent findings verbatim +- **Degraded mode**: Include static analysis findings with "STATIC ANALYSIS ONLY" labels + +### 6. Visual and Interaction Testing Results + +- **Full mode**: Reference specific screenshots as evidence (e.g., "See `reports/screenshots/tooltip-hover.png`") +- **Degraded mode**: Note "[SKIPPED] VISUAL TESTING - claude-in-chrome unavailable" +- List any console errors found through static analysis + +### 7. Content Quality Issues + +Document: + +- Missing property documentation +- Incomplete feature coverage +- Unclear explanations +- Gaps between implementation and documentation + +### 8. Recommendations + +Organise by priority with specific fix instructions: + +``` +### High Priority (Critical Fixes Required) +1. **[Specific Issue]**: + - [Specific fix instruction] + - Update file: `[exact file path]` at line [X] + +### Medium Priority +[List medium priority fixes] + +### Low Priority +[List low priority improvements] +``` + +### 9. Summary + +Overall assessment including: + +- Files requiring updates (with full paths) +- Evidence locations (screenshots, test results if available) +- Any limitations due to degraded mode +- Next steps + +## Documentation Style Principles + +When reviewing documentation, apply these principles to avoid over-documentation: + +### Do NOT Recommend Adding: + +1. **Default values in prose** - Don't add sentences like "The default value is X" unless the default is surprising or essential for understanding. Users can check the API Reference for defaults. + +2. **Redundant enablement explanations** - If something is enabled by default, don't explain how to enable it. Only document how to disable. + + - Bad: "The toolbar can be enabled by setting `enabled: true`, or disabled by setting `enabled: false`" + - Good: "The toolbar is enabled by default. Use `enabled: false` to disable." + +3. 
**New sections for minor properties** - Not every property needs its own documentation section. Properties like `spacing`, `padding`, or simple numeric values are adequately covered in the API Reference. + +4. **Implementation details** - Don't document internal behaviour like callback return value handling, fallback mechanisms, or edge case handling unless it's essential for correct usage. + +5. **Verbose explanations** - Keep documentation concise. If something can be explained in one sentence, don't use three. + +### Example Titles + +Example titles should describe what the example demonstrates, not just the chart/component type: + +- Bad: "Overlapping Series" (describes the data, not the feature) +- Good: "Simple Highlight" (describes what's being demonstrated) +- Bad: "Bar Chart with Tooltip" (generic) +- Good: "Custom Tooltip Content" (specific feature demonstrated) + +### When Flagging Issues + +Only flag documentation as incomplete if: + +- A **primary feature** is undocumented (not minor styling properties) +- The documentation is **factually incorrect** (wrong API names, broken examples) +- An example **doesn't match** what the documentation claims it shows +- There's a **critical default** that affects common use cases + +Do NOT flag: + +- Missing documentation for every property (that's what API Reference is for) +- Missing default value mentions +- Opportunities to add more detail to already-clear explanations + +## Language Conventions + +Documentation must follow these spelling and language conventions: + +| Content Type | Language | Examples | +| ----------------------------- | ------------------------------------- | ------------------------------------------------------------- | +| **Documentation text** | UK/British English | colour, centre, behaviour, customisation, visualise, minimise | +| **Code comments in examples** | UK/British English | `// Customise the colour` | +| **API option names** | US English (as defined in TypeScript) | `color`, 
`center`, `behavior` | +| **JSDoc comments** | UK/British English | `/** Customises the series colour. */` | + +**Key Spelling Differences to Check**: + +- colour (UK) vs color (US) - use UK in prose, US in API names +- centre (UK) vs center (US) - use UK in prose, US in API names +- behaviour (UK) vs behavior (US) - use UK in prose +- customisation (UK) vs customization (US) - use UK +- visualise (UK) vs visualize (US) - use UK +- minimise (UK) vs minimize (US) - use UK +- licence (UK noun) vs license (US) - use UK in prose +- organisation (UK) vs organization (US) - use UK +- cancelled (UK) vs canceled (US) - use UK +- labelling (UK) vs labeling (US) - use UK + +**Grammar Checks**: + +- Consistent tense usage (prefer present tense) +- Subject-verb agreement +- Proper punctuation (Oxford comma optional but be consistent) +- Correct use of articles (a/an/the) +- No sentence fragments in explanatory text + +## Tool Usage by Phase + +| Phase | Required Tools | Mode-Dependent Tools | +| ------------------ | -------------- | ----------------------------------------------------------- | +| Phase 1 | Read, Write | - | +| Phase 2 (Full) | Read, Write | Task, claude-in-chrome (navigate, screenshot, interactions) | +| Phase 2 (Degraded) | Read, Write | - | +| Phase 3 | Read, Write | - | + +## Usage + +1. **Phase 1**: Provide page path → receive review plan +2. **Phase 2**: Provide page path → receive detailed report (with mode-appropriate validations) +3. **Phase 3**: Run after all pages reviewed → receive summary report diff --git a/external/ag-shared/prompts/commands/docs/_release-docs-review-core.md b/external/ag-shared/prompts/commands/docs/_release-docs-review-core.md new file mode 100644 index 00000000000..c46671ef5c1 --- /dev/null +++ b/external/ag-shared/prompts/commands/docs/_release-docs-review-core.md @@ -0,0 +1,941 @@ +# Release Documentation Review Core Methodology + +This file contains the shared release documentation review methodology. 
It is included by product-specific wrapper prompts that provide configuration (paths, page lists, branch patterns). + +## Required Product Configuration + +The product wrapper that references this file MUST define these sections (referenced by exact name below): + +- **Product** — product name, docs review command name +- **Paths** — docs root, types root, docs file pattern, examples pattern, types file pattern +- **Release Branch Pattern** — branch naming format, discovery command +- **Priority Pages** — high-priority and medium-priority page lists +- **Output Paths** — reports directory, filtered/complete task list paths, summary path +- **Verification Paths** — files to confirm exist after each page review + +## Help + +If the user provides a command option of `help`: + +- Explain how to use this prompt. +- Explain if they are missing any prerequisites or tooling requirements. +- DO NOT proceed, exit the prompt immediately after these steps. + +## Prerequisite - Determine Branches to Compare + +**Checklist:** + +- [ ] Are you given multiple branches as command options? + → Compare these two as previous and current releases respectively. +- [ ] Are you given a single branch as command option? + → Compare this branch against the highest-numbered release branch matching the **Release Branch Pattern**. +- [ ] Are you given no command options? + → Compare the current branch against the highest-numbered release branch, or the previous highest if already on a release branch. + +If you are uncertain about which branches to compare, **halt and ask the user to clarify before continuing.** + +## Task Overview + +Perform a comprehensive documentation review for all docs pages that have been modified or affected by API changes between release branches. This includes: + +1. Directly modified documentation pages +2. Pages containing modified examples +3. 
Pages referencing modified public APIs + +## Critical Requirements + +**READ THIS FIRST - MANDATORY EXECUTION RULES:** + +### Rule 1: Use SlashCommand Tool for ALL Reviews + +**MANDATORY:** When executing documentation reviews (Step 8), you MUST: + +- Use: `SlashCommand` tool with the docs review command specified in **Product** configuration +- DO NOT: Create custom review prompts for sub-agents +- DO NOT: Perform reviews manually +- DO NOT: Implement alternative review methods + +**Why:** The docs review command follows a standardised three-phase process with specific output formats and quality checks. Custom implementations bypass this standard. + +**Verification:** After each review, confirm the files specified in **Verification Paths** exist. + +### Rule 2: One Page Per Sub-Agent - NO BATCHING + +**MANDATORY:** When spawning sub-agents (Step 8), you MUST: + +- SPAWN: One sub-agent per documentation page +- DO NOT: Batch multiple pages into one sub-agent +- DO NOT: Ask one sub-agent to review 10, 20, or any N>1 pages +- DO NOT: Create "batches" for efficiency + +**Why:** Batching causes: + +1. Inconsistent review depth (later pages get less attention) +2. Context overflow and token exhaustion +3. Non-standard report locations and formats +4. Inability to track per-page completion + +**Correct Pattern:** + +``` +Task 1: Review page-name-1 → /docs-review page-name-1 +Task 2: Review page-name-2 → /docs-review page-name-2 +Task 3: Review page-name-3 → /docs-review page-name-3 +``` + +**Incorrect Pattern:** + +``` +Task 1: Review pages 1-20 → DO NOT DO THIS +``` + +### Rule 3: Parallel Execution for Efficiency + +**RECOMMENDED:** Launch sub-agents in parallel: + +- USE: Single message with multiple Task tool calls +- LAUNCH: 10-20 agents concurrently (one per page) +- MONITOR: Wait for all agents to complete before aggregating + +**Example:** For 50 pages, launch in 3 waves of ~17 agents each. 
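The one-page-per-agent, waves-of-N launch pattern described above can be sketched as follows. The helper name is illustrative, not part of the command spec:

```typescript
// Split pages into sequential waves of at most `waveSize` concurrent
// review agents. Each wave launches one sub-agent per page (Rule 2:
// no batching); waves run one after another (Rule 3).
function planWaves(pages: string[], waveSize: number): string[][] {
  if (waveSize < 1) throw new Error('waveSize must be >= 1');
  const waves: string[][] = [];
  for (let i = 0; i < pages.length; i += waveSize) {
    waves.push(pages.slice(i, i + waveSize));
  }
  return waves;
}
```

For example, 50 pages with a wave size of 17 yields three waves of 17, 17, and 16 pages, matching the "3 waves of ~17 agents" guidance above.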
+ +## Workflow + +### Step 1: Identify Previous Release Branch + +Determine the previous release branch to compare against using the checklist above. + +Execute the discovery command from the **Release Branch Pattern** in the product configuration to list recent release branches, then store the branches for comparison: + +```bash +export PREVIOUS_BRANCH=<previous-release-branch> +export CURRENT_BRANCH=<current-branch> +``` + +### Step 2: Identify Modified Documentation Pages + +Find all directly modified documentation files using the **Paths** → Docs file pattern from the product configuration: + +```bash +# Get list of modified docs files (substitute DOCS_PATH from product config Paths → Docs root) +git diff --name-only $PREVIOUS_BRANCH $CURRENT_BRANCH -- '${DOCS_PATH}/**/*.mdoc' | \ + grep -E '\.mdoc$' | \ + sed "s|${DOCS_PATH}/||" | \ + sed 's|/index\.mdoc$||' | \ + sort -u > modified-docs.txt + +echo "Found $(wc -l < modified-docs.txt) directly modified documentation pages" +``` + +> **Note**: Replace `${DOCS_PATH}` with the actual docs root path from the product configuration **Paths** section (e.g., `packages/ag-charts-website/src/content/docs`). 
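As a cross-check, the path-to-page-name mapping performed by the sed pipeline in Step 2 can be sketched in TypeScript. The function name and the sample paths are illustrative assumptions, not repository code:

```typescript
// Mirrors the shell pipeline above: keep only .mdoc files, strip the
// docs root prefix, and strip a trailing /index.mdoc to recover the
// page name. Returns null for files that are not .mdoc pages.
function toPageName(filePath: string, docsRoot: string): string | null {
  if (!filePath.endsWith('.mdoc')) return null;
  let page = filePath.startsWith(`${docsRoot}/`)
    ? filePath.slice(docsRoot.length + 1)
    : filePath;
  page = page.replace(/\/index\.mdoc$/, '');
  return page;
}
```

As in the sed version, only a trailing `/index.mdoc` is stripped; other `.mdoc` files keep their relative path.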
+ +### Step 3: Identify Modified Examples + +Find all modified examples and their associated documentation pages: + +```bash +# Get list of modified examples (substitute DOCS_PATH from product config) +git diff --name-only $PREVIOUS_BRANCH $CURRENT_BRANCH -- '${DOCS_PATH}/**/_examples/**' | \ + grep -E '/_examples/' | \ + sed "s|${DOCS_PATH}/||" | \ + sed 's|/_examples/.*||' | \ + sort -u > examples-docs.txt + +echo "Found $(wc -l < examples-docs.txt) docs pages with modified examples" +``` + +### Step 4: Identify Modified Public APIs + +Analyse changes to public APIs and find documentation pages that reference them: + +```bash +# Get list of modified type files (substitute TYPES_PATH from product config Paths → Types root) +git diff --name-only $PREVIOUS_BRANCH $CURRENT_BRANCH -- '${TYPES_PATH}/' | \ + grep -E '\.(ts|d\.ts)$' > modified-types.txt + +# Extract modified interface/type names +for file in $(cat modified-types.txt); do + git diff $PREVIOUS_BRANCH $CURRENT_BRANCH -- "$file" | \ + grep -E '^[+-](export )?(interface|type|class|enum) ' | \ + sed -E 's/^[+-](export )?(interface|type|class|enum) ([A-Za-z0-9]+).*/\3/' | \ + sort -u +done > modified-api-names.txt + +# For each modified API, find docs that reference it +> api-affected-docs.txt +for api_name in $(cat modified-api-names.txt); do + grep -r "$api_name" ${DOCS_PATH}/ --include="*.mdoc" | \ + sed "s|${DOCS_PATH}/||" | \ + sed 's|/index\.mdoc:.*||' | \ + sort -u >> api-affected-docs.txt +done + +sort -u api-affected-docs.txt -o api-affected-docs.txt +echo "Found $(wc -l < api-affected-docs.txt) docs pages referencing modified APIs" +``` + +### Step 5: Consolidate Affected Documentation Pages + +Combine all identified pages into a single list: + +```bash +cat modified-docs.txt examples-docs.txt api-affected-docs.txt | \ + sort -u > all-affected-docs.txt + +TOTAL_PAGES=$(wc -l < all-affected-docs.txt) +echo "Total documentation pages requiring review: $TOTAL_PAGES" + +# Keep examples-docs.txt and 
api-affected-docs.txt for filtering step +rm modified-docs.txt modified-types.txt modified-api-names.txt +``` + +### Step 6: Filter Trivial Changes + +Many documentation changes are trivial formatting standardisation that don't require review. Filter these out to focus on substantive changes. + +First, get diff statistics for all pages: + +```bash +# Get diff statistics for each page (substitute DOCS_PATH from product config) +cat all-affected-docs.txt | while read page; do + file="${DOCS_PATH}/${page}/index.mdoc" + stats=$(git diff $PREVIOUS_BRANCH $CURRENT_BRANCH --shortstat -- "$file" 2>/dev/null) + if [ -n "$stats" ]; then + insertions=$(echo "$stats" | grep -oE '[0-9]+ insertion' | grep -oE '[0-9]+' || echo "0") + deletions=$(echo "$stats" | grep -oE '[0-9]+ deletion' | grep -oE '[0-9]+' || echo "0") + total=$((insertions + deletions)) + echo "$total,$insertions,$deletions,$page" + fi +done | sort -t',' -k1 -nr > /tmp/diff-stats.txt +``` + +Then, categorise changes using Python. The script uses these variables from the product configuration: + +| Script Variable | Product Config Section | +| ---------------------- | ------------------------------------- | +| `DOCS_PATH` | **Paths** → Docs root | +| `high_priority_pages` | **Priority Pages** → High priority | +| `medium_priority_pages`| **Priority Pages** → Medium priority | +| `file_path` template | **Paths** → Docs root + `/{page}/index.mdoc` | +| Trivial-change patterns| Generic (no product substitution needed) | + +> **Substitute the values below** from the product configuration before executing. 
+
+````python
+#!/usr/bin/env python3
+import subprocess
+import re
+
+categories = {
+    'SKIP_TRIVIAL': [],
+    'SKIP_TEST_NEW': [],
+    'REVIEW_NEEDED': []
+}
+
+# Load pages with example changes
+example_pages = set()
+try:
+    with open('examples-docs.txt', 'r') as f:
+        example_pages = set(line.strip() for line in f if line.strip())
+except FileNotFoundError:
+    pass
+
+# Load pages with API changes
+api_pages = set()
+try:
+    with open('api-affected-docs.txt', 'r') as f:
+        api_pages = set(line.strip() for line in f if line.strip())
+except FileNotFoundError:
+    pass
+
+def calculate_risk_score(page, total_lines, has_examples, has_api_changes, file_path, prev_branch, curr_branch):
+    '''
+    Calculate risk score (0-100) based on multiple factors.
+    Higher score = higher priority for review.
+    '''
+    score = 0
+
+    # Factor 1: Change source (0-50 points)
+    if has_api_changes:
+        score += 25  # API changes are high risk
+    if has_examples:
+        score += 25  # Example changes are high risk
+
+    # Factor 2: Change magnitude (0-25 points)
+    if total_lines > 200:
+        score += 25
+    elif total_lines > 100:
+        score += 20
+    elif total_lines > 50:
+        score += 15
+    elif total_lines > 20:
+        score += 10
+    elif total_lines > 0:
+        score += 5
+
+    # Factor 3: Content type analysis (0-20 points)
+    result = subprocess.run(
+        ['git', 'diff', prev_branch, curr_branch, '--', file_path],
+        capture_output=True, text=True
+    )
+    diff = result.stdout
+
+    # Check for high-risk content changes
+    if '{% warning %}' in diff or '{% note %}' in diff:
+        score += 8  # New warnings/notes are important
+    if re.search(r'\+##[^#]', diff):  # New sections (but not subsections)
+        score += 12  # New major sections are high priority
+
+    # Factor 4: Page importance (0-15 points)
+    # SUBSTITUTE these lists from the product configuration Priority Pages section
+    high_priority_pages = [
+        # *** REPLACE with values from product config Priority Pages → High priority ***
+    ]
+    medium_priority_pages = [
+        # *** REPLACE with values from 
product config Priority Pages → Medium priority ***
+    ]
+
+    if any(p in page for p in high_priority_pages):
+        score += 15
+    elif any(p in page for p in medium_priority_pages):
+        score += 10
+    elif page.endswith('-test'):
+        score -= 10  # Test pages are lower priority
+
+    return min(100, max(0, score))  # Clamp to 0-100
+
+def is_trivial_change(page, file_path, prev_branch, curr_branch):
+    '''
+    Determine if a change is trivial (formatting only).
+    Returns True only if ALL changes match known trivial patterns.
+    Errs on the side of reviewing when uncertain.
+    '''
+    # First check: ignore whitespace-only changes
+    result_ws = subprocess.run(
+        ['git', 'diff', '-w', '--ignore-blank-lines', prev_branch, curr_branch, '--', file_path],
+        capture_output=True, text=True
+    )
+
+    # If no diff after ignoring whitespace, it's purely whitespace changes
+    if not result_ws.stdout.strip():
+        return True
+
+    # Get full diff for detailed analysis
+    result = subprocess.run(
+        ['git', 'diff', prev_branch, curr_branch, '--', file_path],
+        capture_output=True, text=True
+    )
+    diff = result.stdout
+
+    # Extract all added lines (excluding diff metadata)
+    added_lines = []
+    for line in diff.split('\n'):
+        if line.startswith('+') and not line.startswith('+++'):
+            added_lines.append(line[1:])  # Remove the '+' prefix
+
+    # Deletion-only changes are never trivial: return False so they fall through to review
+    if not added_lines:
+        return False
+
+    # Define trivial change patterns (whitelist approach)
+    trivial_patterns = [
+        # Code block metadata additions
+        r'^\s*```\w+\s+format="snippet"\s*$',
+        r'^\s*```\w+\s+format="generated"\s*$',
+        # Frontmatter changes (within --- blocks at file start)
+        r'^\s*enterprise:\s*(true|false)\s*$',
+        r'^\s*title:\s*[\'"].*[\'"]\s*$',
+        # Empty lines or pure whitespace
+        r'^\s*$',
+        # Opening/closing braces for code wrapping (but be conservative)
+        r'^\s*\{\s*$',
+        r'^\s*\}\s*$',
+    ]
+
+    # Check if ALL added lines match trivial patterns
+    all_trivial = True
+    for added_line in added_lines:
+        line_is_trivial = False
+        for pattern in trivial_patterns:
+            if re.match(pattern, added_line):
+                line_is_trivial = True
+                break
+
+        if not line_is_trivial:
+            all_trivial = False
+            break
+
+    # Additional check: look for substantive content changes
+    has_substantive = (
+        '##' in '\n'.join(added_lines) or          # New sections
+        '{% warning %}' in diff or                 # New warnings
+        '{% note %}' in diff or                    # New notes
+        re.search(r'\+\s*[A-Z][a-z]+.*\w+', diff)  # New prose content
+    )
+
+    return all_trivial and not has_substantive
+
+# SUBSTITUTE DOCS_PATH from the product configuration Paths → Docs root
+DOCS_PATH = '*** REPLACE with Paths → Docs root from product config ***'
+
+# Read the branch names exported in Step 1. The script runs in its own process,
+# so the shell variables are only visible via the environment.
+import os
+PREVIOUS_BRANCH = os.environ['PREVIOUS_BRANCH']
+CURRENT_BRANCH = os.environ['CURRENT_BRANCH']
+
+with open('/tmp/diff-stats.txt', 'r') as f:
+    for line in f:
+        parts = line.strip().split(',')
+        total, insertions, deletions, page = int(parts[0]), int(parts[1]), int(parts[2]), parts[3]
+
+        file_path = f"{DOCS_PATH}/{page}/index.mdoc"
+
+        has_examples = page in example_pages
+        has_api = page in api_pages
+
+        # CRITICAL: Pages with example or API changes must ALWAYS be reviewed
+        # even if their .mdoc files have trivial or no changes
+        if has_examples or has_api:
+            risk_score = calculate_risk_score(page, total, has_examples, has_api, file_path, PREVIOUS_BRANCH, CURRENT_BRANCH)
+            categories['REVIEW_NEEDED'].append((risk_score, total, page))
+        # Categorize based on .mdoc file changes
+        elif page.endswith('-test') and deletions == 0:
+            # New test pages with no deletions
+            categories['SKIP_TEST_NEW'].append((0, total, page))
+        elif is_trivial_change(page, file_path, PREVIOUS_BRANCH, CURRENT_BRANCH):
+            # Only trivial formatting changes
+            categories['SKIP_TRIVIAL'].append((0, total, page))
+        else:
+            # Everything else needs review (err on the side of reviewing)
+            risk_score = calculate_risk_score(page, total, has_examples, has_api, file_path, PREVIOUS_BRANCH, CURRENT_BRANCH)
+            categories['REVIEW_NEEDED'].append((risk_score, total, page))
+
+# Output results sorted by risk score (descending) for 
REVIEW_NEEDED
+for category, pages in categories.items():
+    if category == 'REVIEW_NEEDED':
+        for risk_score, total, page in sorted(pages, key=lambda x: (-x[0], -x[1])):
+            print(f"{category},{risk_score},{total},{page}")
+    else:
+        for risk_score, total, page in pages:
+            print(f"{category},{risk_score},{total},{page}")
+````
+
+Save this as `/tmp/categorize.py` and run:
+
+```bash
+python3 /tmp/categorize.py > /tmp/categorized-docs.txt
+
+# Generate summary with risk score statistics
+echo "=== Filtering Results ==="
+echo "SKIP_TRIVIAL: $(grep -c '^SKIP_TRIVIAL,' /tmp/categorized-docs.txt) pages"
+echo "SKIP_TEST_NEW: $(grep -c '^SKIP_TEST_NEW,' /tmp/categorized-docs.txt) pages"
+echo "REVIEW_NEEDED: $(grep -c '^REVIEW_NEEDED,' /tmp/categorized-docs.txt) pages"
+echo ""
+echo "=== Risk Score Distribution (REVIEW_NEEDED pages) ==="
+echo "Critical (80-100): $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 80' | wc -l) pages"
+echo "High (60-79): $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 60 && $2 < 80' | wc -l) pages"
+echo "Medium (40-59): $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 40 && $2 < 60' | wc -l) pages"
+echo "Low (0-39): $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 < 40' | wc -l) pages"
+
+# Clean up tracking files after categorisation
+rm examples-docs.txt api-affected-docs.txt
+```
+
+**Risk Scoring System (0-100):**
+
+Pages requiring review are prioritised by risk score based on multiple factors:
+
+1. **Change Source (0-50 points)**:
+
+   - API changes: +25 points (may require doc updates)
+   - Example changes: +25 points (code users copy must be correct)
+   - Both API and examples: +50 points (maximum source risk)
+
+2. **Change Magnitude (0-25 points)**:
+
+   - Over 200 lines: +25 points
+   - 100-200 lines: +20 points
+   - 50-100 lines: +15 points
+   - 20-50 lines: +10 points
+   - Under 20 lines: +5 points
+
+3. 
**Content Type (0-20 points)**: + + - New major sections (`## Heading`): +12 points + - New warnings/notes: +8 points + +4. **Page Importance (0-15 points)**: + - High priority (from **Priority Pages** → High priority): +15 points + - Medium priority (from **Priority Pages** → Medium priority): +10 points + - Test pages: -10 points (lower priority) + +**Risk Levels**: + +- **Critical (80-100)**: Review first - high impact/high risk changes +- **High (60-79)**: Review early - significant changes or important pages +- **Medium (40-59)**: Standard review - moderate impact +- **Low (0-39)**: Review when time permits - minor changes or less critical pages + +**Trivial Change Detection Strategy:** + +The filtering uses a **whitelist approach** with the following criteria: + +1. **Example/API Change Override**: Pages with modified examples or API references are **ALWAYS** marked for review (with appropriate risk score), regardless of their docs file changes. This is critical because: + + - Example changes may affect documentation accuracy even without docs file changes + - API changes may require documentation updates even if not yet applied + - These changes indicate functional updates that need validation + +2. **Whitespace-only changes**: For remaining pages, diff with `-w --ignore-blank-lines` to catch pure whitespace + +3. **Known trivial patterns**: ALL added lines must match one of: + + - Code block metadata: `format="snippet"` or `format="generated"` attributes + - Frontmatter changes: `enterprise:`, `title:` fields + - Structural wrapping: `{` or `}` for code snippet wrapping + - Empty lines + +4. **No substantive content**: Must not contain: + - New sections (`##` headings) + - New warnings or notes (`{% warning %}`, `{% note %}`) + - New prose content (sentences starting with capital letters) + +**Conservative Approach**: If **any** line doesn't match a known trivial pattern, the page is marked for review. 
This errs on the side of over-reviewing rather than missing substantive changes. + +**Test Page Pattern:** + +- Pages ending in `-test` with only additions (no deletions) +- These are new test documentation pages that don't affect main documentation + +### Step 7: Create Review Task Lists + +Generate two task lists using the paths from **Output Paths** in the product configuration: + +1. **Filtered list**: Only pages needing substantive review +2. **Complete list**: All modified pages (for reference) + +> **Note**: In the bash below, substitute `${FILTERED_LIST}` and `${COMPLETE_LIST}` with the filtered and complete task list paths from **Output Paths**. Substitute `${DOCS_PATH}` with the docs root from **Paths**. Substitute the `/docs-review` command references with the docs review command from **Product** configuration. + +**Create the filtered list:** + +```bash +cat > ${FILTERED_LIST} << EOF +# Release Documentation Review (Filtered) + +## Summary + +- Previous Release: ${PREVIOUS_BRANCH} +- Current Release: ${CURRENT_BRANCH} +- Total Pages Modified: $(wc -l < all-affected-docs.txt) +- Pages Requiring Review: $(grep -c '^REVIEW_NEEDED,' /tmp/categorized-docs.txt) +- Pages Skipped (Trivial): $(grep -c '^SKIP_TRIVIAL,' /tmp/categorized-docs.txt) +- Pages Skipped (Test): $(grep -c '^SKIP_TEST_NEW,' /tmp/categorized-docs.txt) + +## Risk Score Distribution + +Pages are prioritised by risk score (0-100) based on: +- Change source (examples/API changes) +- Change magnitude (lines changed) +- Content type (new sections, warnings) +- Page importance (from **Priority Pages** configuration) + +- **Critical (80-100)**: $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '\$2 >= 80' | wc -l) pages +- **High (60-79)**: $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '\$2 >= 60 && \$2 < 80' | wc -l) pages +- **Medium (40-59)**: $(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '\$2 >= 40 && \$2 < 60' | wc -l) pages +- **Low (0-39)**: 
$(grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '\$2 < 40' | wc -l) pages + +## Skipped Pages + +### Trivial Formatting Changes +EOF + +grep '^SKIP_TRIVIAL,' /tmp/categorized-docs.txt | cut -d',' -f4 | while read page; do + echo "- ${page}" >> ${FILTERED_LIST} +done + +cat >> ${FILTERED_LIST} << 'EOF' + +### New Test Pages +EOF + +grep '^SKIP_TEST_NEW,' /tmp/categorized-docs.txt | cut -d',' -f4 | while read page; do + echo "- ${page}" >> ${FILTERED_LIST} +done + +cat >> ${FILTERED_LIST} << 'EOF' + +## Pages Requiring Review + +Review in order of risk score (highest priority first). + +### Critical Priority (Risk: 80-100) + +EOF + +# Add critical risk pages +grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 80' | while IFS=',' read category risk_score total_lines page; do + cat >> ${FILTERED_LIST} << ENDTASK +- [ ] **${page}** - Risk: ${risk_score} (${total_lines} lines) + - Command: \`/docs-review ${DOCS_PATH}/${page}/index.mdoc\` + +ENDTASK +done + +cat >> ${FILTERED_LIST} << 'EOF' + +### High Priority (Risk: 60-79) + +EOF + +# Add high risk pages +grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 60 && $2 < 80' | while IFS=',' read category risk_score total_lines page; do + cat >> ${FILTERED_LIST} << ENDTASK +- [ ] **${page}** - Risk: ${risk_score} (${total_lines} lines) + - Command: \`/docs-review ${DOCS_PATH}/${page}/index.mdoc\` + +ENDTASK +done + +cat >> ${FILTERED_LIST} << 'EOF' + +### Medium Priority (Risk: 40-59) + +EOF + +# Add medium risk pages +grep '^REVIEW_NEEDED,' /tmp/categorized-docs.txt | awk -F',' '$2 >= 40 && $2 < 60' | while IFS=',' read category risk_score total_lines page; do + cat >> ${FILTERED_LIST} << ENDTASK +- [ ] **${page}** - Risk: ${risk_score} (${total_lines} lines) + - Command: \`/docs-review ${DOCS_PATH}/${page}/index.mdoc\` + +ENDTASK +done + +cat >> ${FILTERED_LIST} << 'EOF' + +### Low Priority (Risk: 0-39) + +EOF + +# Add low risk pages +grep '^REVIEW_NEEDED,' 
/tmp/categorized-docs.txt | awk -F',' '$2 < 40' | while IFS=',' read category risk_score total_lines page; do
+    cat >> ${FILTERED_LIST} << ENDTASK
+- [ ] **${page}** - Risk: ${risk_score} (${total_lines} lines)
+  - Command: \`/docs-review ${DOCS_PATH}/${page}/index.mdoc\`
+
+ENDTASK
+done
+```
+
+**Create the complete list** (for reference):
+
+```bash
+# Create complete unfiltered task list (substitute COMPLETE_LIST and DOCS_PATH from product config)
+# Write the header first so re-runs truncate any stale list instead of appending to it
+cat > ${COMPLETE_LIST} << EOF
+# Release Documentation Review Tasks (Complete)
+
+- Previous Release: ${PREVIOUS_BRANCH}
+- Current Release: ${CURRENT_BRANCH}
+- Total Pages Modified: $(wc -l < all-affected-docs.txt)
+
+## All Modified Pages
+
+EOF
+
+cat all-affected-docs.txt | while read page; do
+    cat >> ${COMPLETE_LIST} << ENDTASK
+- [ ] **${page}**
+  - Command: \`/docs-review ${DOCS_PATH}/${page}/index.mdoc\`
+
+ENDTASK
+done
+```
+
+**Use the filtered list** for actual reviews unless there's reason to suspect the filtering missed something important.
+
+### Step 8: Execute Documentation Reviews
+
+**CRITICAL REQUIREMENTS:**
+
+1. **ONE PAGE PER SUB-AGENT - NO BATCHING**
+2. **MUST USE SlashCommand TOOL - NO CUSTOM IMPLEMENTATIONS**
+
+#### Execution Pattern
+
+For each documentation page in the **filtered task list**, spawn a dedicated sub-agent that:
+
+1. **Reviews exactly ONE page** (no batching allowed)
+2. **Uses the SlashCommand tool** to invoke the docs review command from **Product** configuration
+3. **Validates the review outputs** (files specified in **Verification Paths**)
+
+#### Sub-Agent Prompt Template
+
+Each sub-agent must receive this strict prompt (substitute product-specific values):
+
+```
+You are reviewing the [PAGE_NAME] documentation page for [PRODUCT_NAME] release from [PREVIOUS_BRANCH] to [CURRENT_BRANCH].
+
+STRICT REQUIREMENTS:
+
+1. MANDATORY: Use SlashCommand tool to invoke the docs review command
+   - Execute: SlashCommand with command "/docs-review [DOCS_PATH]/[PAGE_NAME]/index.mdoc"
+   - DO NOT create custom review implementations
+   - DO NOT perform manual reviews
+   - DO NOT skip the SlashCommand tool
+
+2. Single Page Only
+   - Review ONLY: [PAGE_NAME]
+   - DO NOT review multiple pages
+   - DO NOT batch reviews
+
+3. 
Verify Outputs + - After the review completes, confirm the verification files exist + - If files are missing, report failure + +4. Return Summary + - Report review status (PASSED / ISSUES FOUND / FAILED) + - List critical issues if any found + - Confirm SlashCommand was used + +Execute the review now. +``` + +#### Parallel Execution Strategy + +Launch all sub-agents in parallel for maximum efficiency: + +1. **Read the filtered task list** to get all pages requiring review +2. **Create one Task per page** in a single message (multiple tool calls) +3. **Monitor completion** of all agents +4. **Validate outputs** from each agent + +#### Implementation Example + +```bash +# Read filtered task list (substitute the filtered task list path from Output Paths) +PAGES=$(grep "^- \[ \]" ${FILTERED_LIST} | \ + grep -oP '\*\*\K[^*]+' | head -10) + +# For each page, spawn a dedicated agent +# (In practice, you'll use multiple Task tool calls in one message) +``` + +#### Validation After Execution + +After all agents complete, verify: + +1. Each agent used SlashCommand tool (check agent outputs) +2. Each page has the review files specified in **Verification Paths** +3. No agent reviewed multiple pages (batching violation) + +**If validation fails:** + +- Identify which pages were not properly reviewed +- Identify which agents didn't use SlashCommand +- Re-launch failed agents with corrected strict prompts + +#### Handling Large Page Counts + +For releases with many pages (e.g., 50+): + +1. **Launch in waves** if needed (e.g., 20 agents at a time) +2. **Prioritise by risk score** (Critical → High → Medium → Low) +3. **Monitor token usage** across agents +4. **Wait for wave completion** before launching next wave + +**Note**: Use the filtered list by default. 
Only review skipped pages if: + +- You find issues in related pages that suggest skipped pages may be affected +- The release is high-risk and requires maximum thoroughness +- Stakeholders specifically request full coverage + +### Step 9: Generate Summary Report + +After all individual page reviews are complete: + +1. **Aggregate findings** from all technical review reports +2. **Identify patterns** across multiple pages +3. **Prioritise issues** by severity and frequency +4. **Generate release readiness assessment** + +Write the final summary to the summary path from **Output Paths** in the product configuration. + +## Output Format + +### Filtered Task List (Primary Output) + +Write to the filtered task list path from **Output Paths**. + +This is the **primary list to use for reviews**. It excludes trivial formatting changes and focuses on substantive updates. + +Format: + +```markdown +# Release Documentation Review (Filtered) + +## Summary + +- Previous Release: ${PREVIOUS_BRANCH} +- Current Release: ${CURRENT_BRANCH} +- Total Pages Modified: ${TOTAL_MODIFIED} +- Pages Requiring Review: ${REVIEW_NEEDED_COUNT} +- Pages Skipped (Trivial): ${SKIP_TRIVIAL_COUNT} +- Pages Skipped (Test): ${SKIP_TEST_COUNT} + +## Risk Score Distribution + +Pages are prioritised by risk score (0-100) based on: +- Change source (examples/API changes) +- Change magnitude (lines changed) +- Content type (new sections, warnings) +- Page importance (from **Priority Pages** configuration) + +- **Critical (80-100)**: ${CRITICAL_COUNT} pages +- **High (60-79)**: ${HIGH_COUNT} pages +- **Medium (40-59)**: ${MEDIUM_COUNT} pages +- **Low (0-39)**: ${LOW_COUNT} pages + +## Analysis Summary + +After analysing all modified documentation pages, categorised by change type. 
+ +### Skipped: Trivial Formatting Changes + +These pages only have code formatting standardisation: + +- page-1 +- page-2 + +### Skipped: New Test Pages + +New test documentation pages (all additions): + +- test-page-1 +- test-page-2 + +### Pages Requiring Review + +- [ ] **page-name**: /docs-review command... +``` + +### Complete Task List (Reference) + +Write to the complete task list path from **Output Paths**. + +This list includes **all** modified pages without filtering. Use for reference or when maximum thoroughness is required. + +Format: + +```markdown +# Release Documentation Review Tasks (Complete) + +## Summary + +- Previous Release: ${PREVIOUS_BRANCH} +- Current Release: ${CURRENT_BRANCH} +- Total Pages Modified: ${TOTAL_PAGES} +- Direct Doc Changes: ${DIRECT_COUNT} +- Example Changes: ${EXAMPLE_COUNT} +- API Impact: ${API_COUNT} + +## All Modified Pages + +- [ ] page-1: /docs-review command... +- [ ] page-2: /docs-review command... +``` + +### Final Summary Report + +Write to the summary path from **Output Paths**. + +Format: + +```markdown +# Release Documentation Review Summary + +## Executive Summary + +[Brief overview of documentation status for release] + +## Review Statistics + +- Total Pages Reviewed: X +- Pages with Issues: Y +- Critical Issues: Z +- Estimated Fix Time: N hours + +## Critical Issues Requiring Immediate Fix + +[List of blocking documentation issues] + +## Pattern Analysis + +[Common issues found across multiple pages] + +## Recommendations + +[Prioritised list of documentation updates needed] + +## Release Risk Assessment + +[Overall documentation readiness: Low/Medium/High] +``` + +## Important Considerations + +1. 
**Filtering and Triage**:
+
+   - **Use the filtered list by default** - it uses risk scoring + conservative whitelist approach
+   - **Risk scores (0-100) prioritise review order**: Critical (80-100) → High (60-79) → Medium (40-59) → Low (0-39)
+   - **Example/API changes override all filtering**: Pages with modified examples or API references are ALWAYS reviewed, even if docs file changes are trivial or absent
+   - **Whitespace-only changes** are automatically detected with `git diff -w`
+   - **Trivial patterns** are explicitly whitelisted (code block metadata, frontmatter, structural wrapping)
+   - **Conservative by design**: Any line that doesn't match a known trivial pattern → mark for review
+   - New test pages (ending in `-test` with only additions) are skipped by default
+   - If filtering reduces review load by >20%, mention this efficiency gain in reports
+   - Review skipped pages only if issues found in related pages suggest impact
+
+2. **Change Impact Analysis with Risk Scoring**:
+
+   - **Change source** (50 points max): Example/API changes score highest
+   - **Change magnitude** (25 points max): Larger changes = higher risk
+   - **Content type** (20 points max): New sections/warnings increase risk
+   - **Page importance** (15 points max): Pages listed in **Priority Pages** score highest
+   - **Combined scores** guide review order - tackle high-risk pages first
+   - Pages with >100 lines changed likely have major feature updates
+
+3. **Review Scope Management**:
+
+   - For large releases (>50 pages after filtering), use wave-based parallelism (launch multiple single-page sub-agents concurrently) rather than batching
+   - Prioritise based on user-facing importance (getting started, key features, upgrade guides)
+   - Start with largest changes first (they often reveal patterns)
+   - Skip generated or auto-updated sections if identified
+
+4. 
**Quality Gates**: + - All HIGH priority issues must be resolved before release + - MEDIUM priority issues should be documented in release notes + - LOW priority issues can be addressed post-release + - Formatting-only changes don't block releases + +## Execution Tips + +### Mandatory Practices + +- **Always run the filtering step (Step 6)** before creating task lists - it typically saves 20-40% of review effort +- **ONE PAGE PER SUB-AGENT** - Never batch multiple pages into a single agent +- **USE SlashCommand TOOL** - Always invoke the docs review command via SlashCommand, never create custom review implementations +- **Validate outputs** - After reviews, confirm plan and report files exist at standard locations + +### Performance Optimisation + +- **Launch agents in parallel** - Use single message with multiple Task tool calls for maximum efficiency +- **Batch API calls, not reviews** - Launch 10-20 agents concurrently, but each reviews only ONE page +- **Monitor completion** - Track which agents finish and which pages remain +- **Cache git operations** - Store diff results in `/tmp/diff-stats.txt` and `/tmp/categorized-docs.txt` + +### Quality Assurance + +- **Verify SlashCommand usage** - Check agent outputs confirm they used SlashCommand tool +- **Validate standard outputs** - Ensure review plan and report files are in correct locations +- **Check for batching violations** - Confirm no agent reviewed multiple pages +- **Example validation** - If Step 3 found example changes, verify examples run correctly before reviewing docs + +### Progress Tracking + +- **Provide status updates** - Report completion progress for long-running reviews (e.g., "35/77 pages reviewed") +- **Identify failures early** - If agents don't use SlashCommand or output goes to wrong location, stop and fix +- **Wave-based execution** - For 50+ pages, review in waves of 15-20 agents each + +## Benefits of Filtering + +Typical filtering results across releases: + +- **Trivial formatting 
changes**: 5-10% of modified pages (pure formatting standardisation) +- **New test pages**: 10-15% of modified pages (test documentation that doesn't affect main docs) +- **Total efficiency gain**: 20-30% reduction in review workload while maintaining quality + +Remember: This comprehensive review with intelligent filtering ensures documentation quality matches code quality for the release while optimising reviewer time. diff --git a/external/ag-shared/prompts/guides/docs-review-integration.md b/external/ag-shared/prompts/guides/docs-review-integration.md new file mode 100644 index 00000000000..93f2d3be7a3 --- /dev/null +++ b/external/ag-shared/prompts/guides/docs-review-integration.md @@ -0,0 +1,238 @@ +# Docs Review Integration Guide + +This guide covers how to integrate the shared docs review methodology into any AG product repo (ag-grid, ag-studio, etc.). + +## Architecture + +The shared docs review uses an **inverted reference pattern** (same as PR reviews): + +- **Core files** (`external/ag-shared/prompts/commands/docs/_docs-review-core.md` and `_release-docs-review-core.md`): Generic workflow methodology, report structure, quality standards +- **Thin wrappers** (plain files in each repo's `.rulesync/commands/`): Product configuration block + single reference line to the core + +The LLM reads the product configuration first, then follows the core methodology, substituting product values at runtime. No build-time templating required. 
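The wrapper-to-core wiring described above can be sanity-checked mechanically. The sketch below (paths assumed to match the layout created in Steps 2-3; the helper itself is illustrative, not part of the shared tooling) confirms that a thin wrapper actually references a core file that exists on disk:

```python
import re
from pathlib import Path

# Matches core file references such as
# external/ag-shared/prompts/commands/docs/_docs-review-core.md
CORE_RE = re.compile(r"external/ag-shared/prompts/commands/docs/_[a-z-]*core\.md")

def check_wrapper(wrapper_path: str, repo_root: str = ".") -> str:
    """Report whether a thin wrapper references an existing core methodology file."""
    text = Path(repo_root, wrapper_path).read_text()
    match = CORE_RE.search(text)
    if match is None:
        return "BROKEN: no core reference"
    if not Path(repo_root, match.group(0)).is_file():
        return "BROKEN: core missing"
    return "OK"
```

Running this over `.rulesync/commands/docs-review.md` and `.rulesync/commands/release-docs-review.md` after setup catches the silent failure mode where a wrapper points at a core file that was renamed or never pulled.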
+ +## Prerequisites + +- The repo already uses `external/ag-shared/` as a git subrepo +- The repo already uses rulesync and the `setup-prompts.sh` pipeline +- No private prompts repo is required — all product-specific files are plain files in `.rulesync/` + +## Step 1: Pull the Latest ag-shared + +```bash +git subrepo pull external/ag-shared +``` + +This brings in the core files: + +- `external/ag-shared/prompts/commands/docs/_docs-review-core.md` +- `external/ag-shared/prompts/commands/docs/_release-docs-review-core.md` + +## Step 2: Create the Product Configuration Wrapper for `/docs-review` + +Create `.rulesync/commands/docs-review.md` as a plain file. Sections marked with a "Required" note must be present — the core methodology references these by exact name. + +```markdown +--- +targets: ['*'] +description: 'Review documentation pages for technical accuracy and example consistency' +--- + +# Documentation Review - [Product Name] + +You are a technical documentation reviewer for [Product Name]. + +## Product Configuration + +### Input Requirements +> Required — referenced by exact name in the core methodology. + +User provides: +- Documentation page path: `[path to docs]/${pageName}/index.mdoc` +- Live dev URL: `[dev server URL]/${pageName}/` + +### Orchestration Indicator +- Orchestrator script: `[path to orchestrator, if any]` + +### File Resolution Rules +> Required — referenced by exact name in the core methodology. + +Map documentation references to TypeScript definition files: + +| If docs mention | Then check file | +| ------------------------- | ---------------------------------------- | +| [Pattern for your product] | [Path to types package] | +| Interface name like `XxxOptions` | Search `[types path]/**/*` | +| Generic config property | `[path to main options file]` | + +### Implementation Resolution Rules +> Required — referenced by exact name in the core methodology. 
+ +Map features to source implementation files: + +| Feature Category | Implementation Path Pattern | +| ---------------- | --------------------------- | +| [Your feature categories] | [Your source paths] | + +### Example Path Pattern +> Required — referenced by exact name in the core methodology. + +`[docs path]/${pageName}/_examples/${exampleName}/` +- Required: `main.ts` +- Optional: `data.ts`, `styles.css` + +### Exceptions File Path +`[docs path]/${pageName}/technical-review-exceptions.md` + +### Output Paths +> Required — referenced by exact name in the core methodology. + +- Review plans: `[plans path]/${pageName}.md` +- Reports: `[docs path]/${pageName}/reports/technical-review-report.md` +- Summary: `reports/docs-review/summary.md` + +### Default Value Verification Hierarchy +> Required — referenced by exact name in the core methodology. +> Describe how default values are resolved in your product's architecture. +> For AG Charts this is: Module theme template -> @Property decorator -> TypeScript comments +> For AG Grid this might be: ColDef defaults -> GridOptions defaults -> etc. + +### Product-Specific Conventions +- [List any API conventions specific to your product] +- [Known accepted patterns that should NOT be flagged] + +## Review Methodology + +**Read and follow all instructions in `external/ag-shared/prompts/commands/docs/_docs-review-core.md` for the review process, applying the product configuration above.** +``` + +## Step 3: Create the Product Configuration Wrapper for `/release-docs-review` + +Create `.rulesync/commands/release-docs-review.md` as a plain file: + +```markdown +--- +targets: ['*'] +description: 'Review all documentation changes between releases for accuracy and completeness' +--- + +# Release Documentation Review - [Product Name] + +You are an expert documentation reviewer for [Product Name], specialising in +release documentation validation. 
+ +## Product Configuration + +### Product +> Required — referenced by exact name in the core methodology. + +- Name: [Product Name] +- Docs review command: `/docs-review` + +### Paths +> Required — referenced by exact name in the core methodology. +- Docs root: `[path to docs content]` +- Types root: `[path to public types package]` +- Docs file pattern: `[docs root]/**/*.mdoc` +- Examples pattern: `[docs root]/**/_examples/` +- Types file pattern: `[types root]/**/*.ts` + +### Release Branch Pattern +> Required — referenced by exact name in the core methodology. + +- Format: `[your branch naming pattern, e.g. origin/bX.Y.Z]` +- Discovery command: `[shell command to list recent release branches]` + +### Priority Pages +> Required — referenced by exact name in the core methodology. + +**High priority** (getting started, key features, upgrade guides): +`[comma-separated list of high-priority page names]` + +**Medium priority** (core features): +`[comma-separated list of medium-priority page names]` + +### Output Paths +> Required — referenced by exact name in the core methodology. + +- Reports directory: `reports/` +- Filtered task list: `reports/release-docs-review-${PREVIOUS}-${CURRENT}-filtered.md` +- Complete task list: `reports/release-docs-review-${PREVIOUS}-${CURRENT}-tasks.md` +- Summary: `reports/release-docs-review-${PREVIOUS}-${CURRENT}-summary.md` + +### Verification Paths +After each page review, confirm these files exist: +- `[plans path]/${pageName}.md` +- `[docs path]/${pageName}/reports/technical-review-report.md` + +## Review Methodology + +**Read and follow all instructions in `external/ag-shared/prompts/commands/docs/_release-docs-review-core.md` for the review process, applying the product configuration above.** +``` + +## Step 4: Create Product-Specific Documentation Rules + +The shared core handles the review workflow, but each product needs rules describing its documentation conventions. 
Create these in `.rulesync/rules/` (or `external/prompts/guides/`). + +**Recommended supporting rules:** + +These files are not referenced by the review core but provide context that improves review quality. + +- **`docs-pages.md`** — comprehensive guide to your documentation system: page types and standard structures, Markdoc/MDX component reference, content patterns, example integration conventions +- **`docs-checklist.md`** — pre-submission quality checklist: content structure requirements, technical accuracy checks, example validation commands, framework compatibility checks + +**Optional:** + +- **`playbook-docs.md`** — quick-reference workflow for docs changes +- **`example-tester.md`** (sub-agent in `.rulesync/subagents/`) — if you want Full Mode interactive review (build, test, and validate examples). Without this, `/docs-review` operates in Adaptive/Degraded mode (static analysis only), which is still useful. + +Use the AG Charts versions as a reference for structure and level of detail. + +## Step 5: Verify File Placement + +Confirm the files are in the correct locations (all plain files, no symlinks needed): + +``` +.rulesync/ +ā”œā”€ā”€ commands/ +│ ā”œā”€ā”€ docs-review.md ← plain file (Step 2) +│ └── release-docs-review.md ← plain file (Step 3) +└── rules/ + ā”œā”€ā”€ docs-pages.md ← plain file (Step 4) + └── docs-checklist.md ← plain file (Step 4) +``` + +## Step 6: Regenerate Tool Config + +```bash +./external/ag-shared/scripts/setup-prompts/setup-prompts.sh +``` + +Verify the commands appear in `.claude/commands/`: + +```bash +ls -la .claude/commands/docs-review.md .claude/commands/release-docs-review.md +``` + +## Step 7: Test + +1. Run `/docs-review [path-to-a-docs-page]` and confirm the LLM reads both your wrapper and the core +2. Check that the report references your product's file paths (not AG Charts paths) +3. 
Run `/release-docs-review` with two release branches and confirm page discovery works with your paths + +## Adoption Checklist + +- [ ] `git subrepo pull external/ag-shared` completed +- [ ] `.rulesync/commands/docs-review.md` created as plain file with all REQUIRED sections +- [ ] `.rulesync/commands/release-docs-review.md` created as plain file with all REQUIRED sections +- [ ] `.rulesync/rules/docs-pages.md` created (or equivalent) +- [ ] `.rulesync/rules/docs-checklist.md` created (or equivalent) +- [ ] `setup-prompts.sh` run successfully +- [ ] `/docs-review` tested on a real page +- [ ] `/release-docs-review` tested between two branches + +## Notes + +- **Batch orchestration**: The core methodology covers single-page and release-diff reviews. Batch orchestration across all pages (like AG Charts' `run-docs-review.js`) is product-specific. Start with manual page-by-page invocation. +- **Estimated adoption effort**: ~2-3 hours per repo. +- **Section names are normative**: The core references wrapper sections by exact name. Renaming sections in either wrapper or core silently breaks cross-referencing. diff --git a/external/ag-shared/prompts/guides/nx-conventions.md b/external/ag-shared/prompts/guides/nx-conventions.md new file mode 100644 index 00000000000..79d4d7c0962 --- /dev/null +++ b/external/ag-shared/prompts/guides/nx-conventions.md @@ -0,0 +1,12 @@ +--- +targets: ['*'] +description: 'Nx project configuration conventions' +globs: ['**/project.json', 'nx.json'] +--- + +# Nx Conventions + +## Project Configuration + +- In `project.json` files, use `{projectRoot}` to reference paths relative to the project root. Do **not** prefix with `./` — Nx interpolates `{projectRoot}` as a complete relative path from the workspace root (e.g. `tools/ssr-example`), so `{projectRoot}/script.sh` is correct while `./{projectRoot}/script.sh` is invalid. 
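+The `{projectRoot}` rule above can be sketched as a minimal `project.json` (project, target, and script names are illustrative):
+
+```json
+{
+    "name": "ssr-example",
+    "targets": {
+        "run-script": {
+            "executor": "nx:run-commands",
+            "options": {
+                "command": "bash {projectRoot}/script.sh"
+            }
+        }
+    }
+}
+```
+
+Nx expands `{projectRoot}` to the project's path from the workspace root (e.g. `tools/ssr-example`), so the command resolves to `bash tools/ssr-example/script.sh`; writing `./{projectRoot}/script.sh` instead would not resolve correctly.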
+- To prevent Nx from inferring targets from npm scripts in a `package.json`-only project, set `"nx": {"includedScripts": []}` in the package.json. An empty array suppresses all inferred script targets while keeping the project visible to Nx for dependency tracking. diff --git a/external/ag-shared/prompts/guides/setup-prompts.md b/external/ag-shared/prompts/guides/setup-prompts.md index 63d69a033f3..21230659ba9 100644 --- a/external/ag-shared/prompts/guides/setup-prompts.md +++ b/external/ag-shared/prompts/guides/setup-prompts.md @@ -70,3 +70,15 @@ npx patch-package rulesync ``` This updates `patches/rulesync+*.patch` (which symlinks to `external/ag-shared/prompts/patches/`). + +## Shared Prompt Conventions + +When using the inverted reference pattern (shared core + thin wrapper), follow these rules to avoid silent breakage. + +### Section names are normative contracts + +Heading names in the wrapper are referenced by exact name in the core. Adding suffixes like `(REQUIRED)` or other annotations to heading text in templates or guides will break the exact-name contract silently at runtime. Treat section headings as API identifiers — rename them only with the same care as renaming a function. + +### Preserve executable content during genericisation + +When migrating product-specific prompts to shared cores, bash scripts and procedural subsections are often generic despite appearing product-specific (they only contain path references that can be parameterised). Always diff the original vs genericised output section-by-section to catch accidentally dropped content. 
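+A self-contained sketch of that section-by-section check (the sample files here stand in for the real original and genericised prompts):
+
+```shell
+# Create stand-ins for the original and genericised prompt, then diff them.
+# In real usage, point diff at the actual files instead of these samples.
+printf '# Review\nRun yarn charts:test\n' > original.md
+printf '# Review\nRun [test command]\n' > genericised.md
+# diff exits 1 when files differ, so guard it to keep the script going
+diff -u original.md genericised.md || true
+```
+
+Any line present in the original but missing from the genericised output shows up with a leading `-`, making accidentally dropped content easy to spot.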
diff --git a/external/ag-shared/prompts/guides/website-astro-pages.md b/external/ag-shared/prompts/guides/website-astro-pages.md new file mode 100644 index 00000000000..a14e984055b --- /dev/null +++ b/external/ag-shared/prompts/guides/website-astro-pages.md @@ -0,0 +1,334 @@ +--- +targets: ['*'] +description: 'Astro page creation patterns, layout props, content collections, and code conventions for AG product websites' +globs: + [ + '**/src/pages/**/*.astro', + '**/src/layouts/**/*.astro', + ] +--- + +# Website Astro Pages Guide + +This guide covers Astro page creation patterns, layout props, content collections, and code conventions for AG product websites. + +## Project Overview + +- **Framework**: Astro 5 with React 18 for interactive components +- **Styling**: SCSS with CSS Modules + shared design system +- **Package Manager**: Yarn +- **Monorepo**: Nx-managed +- **Shared Components**: `external/ag-website-shared/src/components/` + +## Directory Structure + +``` +packages//src/ +ā”œā”€ā”€ pages/ # Astro page routes (*.astro files) +ā”œā”€ā”€ layouts/ +│ └── Layout.astro # Main page layout wrapper +ā”œā”€ā”€ components/ # Local React & Astro components +ā”œā”€ā”€ content/ # Content collections (data, docs) +ā”œā”€ā”€ pages-styles/ # Page-specific SCSS modules +ā”œā”€ā”€ stores/ # Nanostores (state management) +└── utils/ # Utility functions + +external/ag-website-shared/src/ +ā”œā”€ā”€ components/ # Shared components across AG products +│ ā”œā”€ā”€ license-pricing/ # Pricing page components +│ ā”œā”€ā”€ changelog/ # Changelog/pipeline components +│ ā”œā”€ā”€ community/ # Community page components +│ ā”œā”€ā”€ whats-new/ # Release notes components +│ ā”œā”€ā”€ landing-pages/ # Landing page components +│ ā”œā”€ā”€ footer/ # Footer component +│ ā”œā”€ā”€ site-header/ # Site header component +│ └── ... 
# Many more shared components +└── design-system/ # Design tokens and base styles +``` + +## Creating Standard Astro Pages + +### Page Location + +All pages go in `packages//src/pages/`. The file path determines the URL: + +| File Path | URL | +|-----------|-----| +| `pages/index.astro` | `/` | +| `pages/pricing.astro` | `/pricing` | +| `pages/community.astro` | `/community` | +| `pages/community/events.astro` | `/community/events` | + +### Pattern 1: Full Custom Page with Layout + +Use this for pages that need custom content and styling. + +```astro +--- +import Layout from '@layouts/Layout.astro'; +import styles from '@pages-styles/my-page.module.scss'; +import { urlWithBaseUrl } from '@utils/urlWithBaseUrl'; + +// Optional: Fetch data from content collections +import { getEntry, type CollectionEntry } from 'astro:content'; +const { data: navData } = (await getEntry('docsNav', 'nav')) as CollectionEntry<'docsNav'>; +--- + + +
+<Layout title="My Page">
+    <div class="layout-max-width-small">
+        <h1>My Page</h1>
+        <p>Content goes here</p>
+    </div>
+</Layout>
+```
+
+### Pattern 2: Wrapper Page (Delegates to Shared Component)
+
+Use this when a shared component handles all the page logic.
+
+```astro
+---
+import { getEntry, type CollectionEntry } from 'astro:content';
+import WhatsNew from '@ag-website-shared/components/whats-new/pages/whats-new.astro';
+
+// Note: version entry keys are product-specific (e.g. 'ag-grid-versions', 'ag-charts-versions')
+const { data: versionsData } = (await getEntry('versions', '-versions')) as CollectionEntry<'versions'>;
+const { data: docsNavData } = (await getEntry('docsNav', 'nav')) as CollectionEntry<'docsNav'>;
+---
+
+<WhatsNew versionsData={versionsData} docsNavData={docsNavData} />
+```
+
+### Pattern 3: Minimal Wrapper (Simplest)
+
+For pages that just render a shared component with no data.
+
+```astro
+---
+import Home from '@ag-website-shared/components/community/pages/home.astro';
+---
+
+<Home />
+```
+
+### Pattern 4: Page with React Components
+
+Use this for interactive pages with client-side functionality.
+
+```astro
+---
+import { LicensePricing } from '@ag-website-shared/components/license-pricing/LicensePricing';
+import Layout from '@layouts/Layout.astro';
+---
+
+<Layout title="Pricing">
+    <LicensePricing client:load />
+</Layout>
+```
+
+### Pattern 5: Page with Docs Navigation
+
+Use this for pages that should show the documentation sidebar.
+
+```astro
+---
+import { getEntry, type CollectionEntry } from 'astro:content';
+import Layout from '@layouts/Layout.astro';
+import { getFrameworkFromPath } from '@components/docs/utils/urlPaths';
+import { Pipeline } from '@ag-website-shared/components/changelog/Pipeline';
+import { DocsNavFromLocalStorage } from '@ag-website-shared/components/docs-navigation/DocsNavFromLocalStorage';
+import styles from '@ag-website-shared/components/changelog/changelog.module.scss';
+import classnames from 'classnames';
+
+const path = Astro.url.pathname;
+const framework = getFrameworkFromPath(path);
+const { data: docsNavData } = (await getEntry('docsNav', 'nav')) as CollectionEntry<'docsNav'>;
+---
+
+<Layout title="Pipeline" showDocsNav={true}>
+    <DocsNavFromLocalStorage client:load menuData={docsNavData} framework={framework} />
+    <div class={classnames(styles.container)}>
+        <h1>Pipeline</h1>
+        <Pipeline client:load />
+    </div>
+</Layout>
+```
+
+### Pattern 6: Standalone Page (No Layout)
+
+For special pages like demos that need full control.
+
+```astro
+---
+import HeroDashboard from '@components/hero-dashboard/HeroDashboard.astro';
+import '@pages-styles/scratchpad.scss';
+import '@pages-styles/example-controls.css';
+---
+
+<html lang="en">
+    <head>
+        <meta charset="utf-8" />
+        <title><Product Name></title>
+    </head>
+    <body>
+        <HeroDashboard />
+    </body>
+</html>
+```
+
+## Layout Component Props
+
+The `Layout.astro` component accepts these props:
+
+| Prop | Type | Default | Description |
+|------|------|---------|-------------|
+| `title` | string | required | Page title (shown in browser tab) |
+| `description` | string | metadata default | SEO meta description |
+| `showSearchBar` | boolean | undefined | Show search in header |
+| `showDocsNav` | boolean | undefined | Show docs navigation toggle |
+| `hideHeader` | boolean | false | Hide the site header |
+| `hideFooter` | boolean | false | Hide the site footer |
+
+## Content Collections
+
+Fetch data from content collections using Astro's `getEntry`:
+
+```astro
+---
+import { getEntry, type CollectionEntry } from 'astro:content';
+
+// Navigation data
+const { data: docsNavData } = (await getEntry('docsNav', 'nav')) as CollectionEntry<'docsNav'>;
+const { data: apiNavData } = (await getEntry('apiNav', 'nav')) as CollectionEntry<'apiNav'>;
+
+// Version data (entry key is product-specific, e.g.
'ag-grid-versions', 'ag-charts-versions') +const { data: versionsData } = (await getEntry('versions', '-versions')) as CollectionEntry<'versions'>; + +// Footer data +const { data: footerItems } = (await getEntry('footer', 'footer')) as CollectionEntry<'footer'>; + +// Metadata +const { data: metadata } = (await getEntry('metadata', 'metadata')) as CollectionEntry<'metadata'>; +--- +``` + +## Available Shared Components + +Key components from `@ag-website-shared/components/`: + +| Component | Import Path | Purpose | +|-----------|-------------|---------| +| `LicensePricing` | `license-pricing/LicensePricing` | Pricing page | +| `Pipeline` | `changelog/Pipeline` | Development pipeline | +| `WhatsNew` | `whats-new/pages/whats-new.astro` | Release notes | +| `DocsNavFromLocalStorage` | `docs-navigation/DocsNavFromLocalStorage` | Docs sidebar | +| `FrameworkTextAnimation` | `framework-text-animation/FrameworkTextAnimation` | Animated framework text | +| `LandingPageFWSelector` | `landing-pages/LandingPageFWSelector` | Framework selector | +| `Footer` | `footer/Footer` | Site footer | +| `SiteHeader` | `site-header/SiteHeader.astro` | Site header | + +## Utility Functions + +```typescript +// URL utilities +import { urlWithBaseUrl } from '@utils/urlWithBaseUrl'; +import { urlWithPrefix } from '@utils/urlWithPrefix'; + +// Add base URL to paths (base URL is product-specific) +urlWithBaseUrl('/example') // → '//example' (with configured base) + +// Add framework prefix +urlWithPrefix({ framework: 'react', url: './quick-start' }) // → '/react/quick-start' + +// Framework detection +import { getFrameworkFromPath } from '@components/docs/utils/urlPaths'; +const framework = getFrameworkFromPath(Astro.url.pathname); +``` + +## React Component Hydration + +When using React components in Astro pages, add hydration directives: + +| Directive | When to Use | +|-----------|-------------| +| `client:load` | Needs immediate interactivity (most common) | +| `client:idle` | Can wait 
until browser is idle | +| `client:visible` | Only when scrolled into view | +| (none) | Static content only, no JavaScript | + +```astro + + + + + + + + +``` + +## Path Aliases + +| Alias | Path | +|-------|------| +| `@components/*` | `src/components/*` | +| `@layouts/*` | `src/layouts/*` | +| `@pages-styles/*` | `src/pages-styles/*` | +| `@stores/*` | `src/stores/*` | +| `@utils/*` | `src/utils/*` | +| `@constants` | `src/constants.ts` | +| `@ag-website-shared/*` | `external/ag-website-shared/src/*` | + +## Code Style + +### Import Order + +1. Astro imports (`astro:content`) +2. External packages +3. Shared components (`@ag-website-shared/*`) +4. Local components/utils (`@components/*`, `@utils/*`) +5. Styles + +### Naming Conventions + +| Type | Convention | Example | +|------|------------|---------| +| Astro pages | kebab-case | `license-pricing.astro` | +| Components | PascalCase | `MyComponent.tsx` | +| Style modules | kebab-case | `my-page.module.scss` | +| CSS classes | camelCase | `.pageContainer` | + +## Common Tasks + +### Add a New Standard Page + +1. Create `src/pages/my-page.astro` +2. Import Layout and any needed components +3. Use standard design system classes where possible +4. Create `src/pages-styles/my-page.module.scss` only if custom styles needed + +### Use a Shared Component + +1. Check `external/ag-website-shared/src/components/` for available components +2. Import with `@ag-website-shared/components/...` +3. 
Add `client:load` if it's a React component needing interactivity diff --git a/external/ag-shared/prompts/guides/website-browser-testing.md b/external/ag-shared/prompts/guides/website-browser-testing.md new file mode 100644 index 00000000000..fcd239da0a8 --- /dev/null +++ b/external/ag-shared/prompts/guides/website-browser-testing.md @@ -0,0 +1,89 @@ +--- +targets: ['*'] +description: 'Chrome DevTools MCP browser testing workflow for AG product websites' +globs: + [ + '**/src/pages/**/*.astro', + '**/src/layouts/**/*.astro', + ] +--- + +# Website Browser Testing Guide + +Use Chrome DevTools MCP for browser testing and debugging AG product websites. + +## Available Tools + +| Tool | Purpose | +|------|---------| +| `mcp__chrome-devtools__browser_navigate` | Navigate to URL | +| `mcp__chrome-devtools__browser_snapshot` | Get accessibility tree snapshot | +| `mcp__chrome-devtools__browser_click` | Click elements | +| `mcp__chrome-devtools__browser_type` | Type text into elements | +| `mcp__chrome-devtools__browser_take_screenshot` | Capture screenshot | +| `mcp__chrome-devtools__browser_console_messages` | Get console messages with stack traces | +| `mcp__chrome-devtools__browser_network_requests` | Analyse network requests | +| `mcp__chrome-devtools__browser_evaluate` | Execute JavaScript | +| `mcp__chrome-devtools__browser_performance_record` | Record performance trace | +| `mcp__chrome-devtools__browser_emulate` | Emulate devices/network conditions | + +## Testing Workflow + +### 1. Start Development Server + +```bash +yarn nx dev +``` + +The dev server port is product-specific. Use `yarn nx dev` to start the server and check the output for the local URL. + +### 2. Navigate to Page + +``` +mcp__chrome-devtools__browser_navigate url="http://localhost:/my-page" +``` + +### 3. Check Console for Errors + +``` +mcp__chrome-devtools__browser_console_messages +``` + +### 4. Analyse Network Requests + +``` +mcp__chrome-devtools__browser_network_requests +``` + +### 5. 
Test Interactions + +``` +mcp__chrome-devtools__browser_click element="button with text 'Submit'" +mcp__chrome-devtools__browser_type element="email input" text="test@example.com" +``` + +### 6. Execute JavaScript + +``` +mcp__chrome-devtools__browser_evaluate expression="document.querySelector('h1').textContent" +``` + +## Testing Checklist + +- [ ] Page loads without console errors +- [ ] Content renders correctly +- [ ] Dark mode toggle works +- [ ] Responsive layout works +- [ ] Interactive components function +- [ ] Links navigate correctly +- [ ] Header/footer display properly +- [ ] No network errors + +## Device Emulation + +Test responsive layouts: + +``` +mcp__chrome-devtools__browser_emulate device="iPhone 14 Pro" +mcp__chrome-devtools__browser_emulate device="iPad" +``` diff --git a/external/ag-shared/prompts/guides/website-css.md b/external/ag-shared/prompts/guides/website-css.md new file mode 100644 index 00000000000..547b6db0ce1 --- /dev/null +++ b/external/ag-shared/prompts/guides/website-css.md @@ -0,0 +1,490 @@ +--- +targets: ['*'] +description: 'CSS architecture, design system, design tokens, utility classes, and styling patterns for AG product websites' +globs: + [ + '**/src/pages-styles/**/*.scss', + '**/src/pages-styles/**/*.css', + '**/src/components/**/*.scss', + 'external/ag-website-shared/src/design-system/**/*.scss', + ] +--- + +# Website CSS & Styling Guide + +This guide covers the CSS architecture, design system, design tokens, utility classes, and styling patterns used by AG product websites. 
+ +## Design System + +### Location + +The website's design system is defined in a shared external package: + +``` +external/ag-website-shared/src/design-system/ +``` + +### File Structure + +| File | Purpose | +|------|---------| +| `_root.scss` | All CSS custom properties (colours, typography, layout, shadows) | +| `core/_variables.scss` | SCSS variables (spacing, selectors, transitions) | +| `core/_breakpoints.scss` | Responsive breakpoint values | +| `_layout.scss` | Layout utility classes | +| `_typography.scss` | Typography utility classes | +| `_color.scss` | Colour utility classes | +| `_interactions.scss` | Interactive state classes | +| `_base.scss` | Base element styles | + +### Using the Design System + +Always import the design system at the top of SCSS files: + +```scss +@use 'design-system' as *; +``` + +## CSS Custom Properties + +### How Variables Are Organised + +The design system uses CSS custom properties (variables) organised into semantic categories: + +```scss +:root { + // Abstract colours (raw palette) + --color-gray-50: #f9fafb; + --color-gray-100: #f2f4f7; + // ... through gray-950 + + --color-brand-50: #f4f8ff; + --color-brand-100: #e5effd; + // ... 
through brand-950 + + // Semantic colours (use these in components) + --color-bg-primary: var(--color-white); + --color-fg-primary: var(--color-gray-900); + --color-border-primary: var(--color-gray-300); +} +``` + +### Colour Palette Reference + +#### Grey Scale + +| Variable | Light Mode | Hex | +| ------------------ | ---------- | --------- | +| `--color-gray-25` | Lightest | `#fcfcfd` | +| `--color-gray-50` | | `#f9fafb` | +| `--color-gray-100` | | `#f2f4f7` | +| `--color-gray-200` | | `#eaecf0` | +| `--color-gray-300` | | `#d0d5dd` | +| `--color-gray-400` | | `#98a2b3` | +| `--color-gray-500` | | `#667085` | +| `--color-gray-600` | | `#475467` | +| `--color-gray-700` | | `#344054` | +| `--color-gray-800` | | `#182230` | +| `--color-gray-900` | | `#101828` | +| `--color-gray-950` | Darkest | `#0c111d` | + +#### Brand Colours (Blue) + +| Variable | Hex | +| ------------------- | --------- | +| `--color-brand-50` | `#f4f8ff` | +| `--color-brand-100` | `#e5effd` | +| `--color-brand-200` | `#d4e3f8` | +| `--color-brand-300` | `#a9c5ec` | +| `--color-brand-400` | `#3d7acd` | +| `--color-brand-500` | `#0e4491` | +| `--color-brand-600` | `#0042a1` | +| `--color-brand-700` | `#00388f` | +| `--color-brand-800` | `#002e7e` | +| `--color-brand-900` | `#00246c` | +| `--color-brand-950` | `#001a5a` | + +#### Warning Colours (Orange/Yellow) + +| Variable | Hex | +| --------------------- | --------- | +| `--color-warning-50` | `#fffaeb` | +| `--color-warning-100` | `#fef0c7` | +| `--color-warning-200` | `#fedf89` | +| `--color-warning-300` | `#fec84b` | +| `--color-warning-400` | `#fdb022` | +| `--color-warning-500` | `#f79009` | +| `--color-warning-600` | `#dc6803` | +| `--color-warning-700` | `#b54708` | +| `--color-warning-800` | `#93370d` | +| `--color-warning-900` | `#7a2e0e` | +| `--color-warning-950` | `#4e1d09` | + +#### Special Colours + +| Variable | Hex | Usage | +| ------------------ | ------------------------------------ | --------------------- | +| 
`--color-success` | `#28a745` (light) / `#64ea82` (dark) | Success states | +| `--color-positive` | `#28a745` | Positive indicators | +| `--color-negative` | `#dc3545` | Error/negative states | + +## Standard CSS Utility Classes + +**Always prefer using standard design system classes over custom styles.** These classes are globally available and ensure consistency across the site. + +### Layout Classes + +Use these for page structure and grid layouts: + +| Class | Purpose | +|-------|---------| +| `.layout-grid` | Flexbox grid container with standard gap and max-width | +| `.layout-page-max-width` | Full width constrained to max page width | +| `.layout-max-width-small` | Narrower content width with horizontal padding | + +**Column Classes (4-column grid):** +- `.column-1-4`, `.column-2-4`, `.column-3-4`, `.column-4-4` + +**Column Classes (6-column grid):** +- `.column-1-6` through `.column-6-6` + +**Column Classes (12-column grid):** +- `.column-1-12` through `.column-12-12` + +```astro +
+<div class="layout-grid">
+    <div class="column-3-4">Main content</div>
+    <div class="column-1-4">Sidebar</div>
+</div>
+``` + +### Typography Classes + +| Class | Font Size | Use For | +|-------|-----------|---------| +| `.text-2xs` | 10px | Fine print | +| `.text-xs` | 12px | Captions, labels | +| `.text-sm` | 14px | Secondary text | +| `.text-base` | 16px | Body text (default) | +| `.text-lg` | 20px | Subheadings | +| `.text-xl` | 24px | Section headings | +| `.text-2xl` | 32px | Page headings | +| `.text-3xl` | 40px | Hero headings | + +**Weight Classes:** +- `.text-regular` (400) +- `.text-semibold` (600) +- `.text-bold` (700) + +**Other:** +- `.text-monospace` - Monospace font family + +```astro +

+<h1 class="text-2xl text-bold">Page Title</h1>
+<p class="text-base">Body content here.</p>
+<code class="text-monospace">code example</code>
+```
+
+### Colour Classes
+
+| Class | Purpose |
+|-------|---------|
+| `.text-secondary` | Secondary foreground colour |
+| `.text-tertiary` | Tertiary foreground colour |
+
+### Interaction Classes
+
+| Class | Purpose |
+|-------|---------|
+| `.collapse` | Hidden when not `.show` |
+| `.collapsing` | Animating collapse transition |
+| `.no-transitions` | Disable all transitions |
+| `.no-overflow-anchor` | Prevent scroll anchoring |
+
+### Example: Using Standard Classes
+
+```astro
+<div class="layout-max-width-small">
+    <h1 class="text-2xl text-bold">Welcome</h1>
+    <p class="text-base text-secondary">
+        Introduction paragraph with secondary styling.
+    </p>
+
+    <div class="layout-grid">
+        <div class="column-2-4">
+            <h2 class="text-xl">Left Column</h2>
+        </div>
+        <div class="column-2-4">
+            <h2 class="text-xl">Right Column</h2>
+        </div>
+    </div>
+</div>
+``` + +## Dark Mode + +### How Dark Mode Works + +Dark mode is triggered by the `data-dark-mode="true"` attribute on the `` element: + +```scss +html[data-dark-mode='true'] { + --color-bg-primary: color-mix(in srgb, var(--color-gray-800), var(--color-gray-900) 50%); + --color-fg-primary: var(--color-white); + // ... other overrides +} +``` + +### Dark Mode in SCSS Modules + +Use the `$selector-darkmode` SCSS variable for dark mode overrides in component styles: + +```scss +.myElement { + background-color: var(--color-bg-primary); + + #{$selector-darkmode} & { + background-color: var(--color-bg-secondary); + } +} +``` + +### Key Dark Mode Colours + +| Semantic Variable | Light Mode | Dark Mode | +| -------------------------- | ---------- | ---------------------------------------- | +| `--color-bg-primary` | `#ffffff` | Mix of `#182230` + `#101828` | +| `--color-bg-secondary` | `#f9fafb` | `#344054` | +| `--color-bg-tertiary` | `#f2f4f7` | `#182230` | +| `--color-fg-primary` | `#101828` | `#ffffff` | +| `--color-fg-secondary` | `#344054` | `#d0d5dd` | +| `--color-border-primary` | `#d0d5dd` | `#344054` | +| `--color-border-secondary` | `#eaecf0` | Mix of `#344054` + bg-primary | +| `--color-link` | `#0e4491` | `#a9c5ec` | + +### Detecting Dark Mode in JavaScript + +```typescript +// Check data attribute (preferred) +const isDark = document.documentElement.getAttribute('data-dark-mode') === 'true'; + +// Or check for dark mode class (fallback) +const isDark = document.documentElement.classList.contains('dark'); +``` + +### Creating Theme-Aware Components + +Use CSS custom properties that react to `data-dark-mode`: + +```css +/* Define variables for both modes */ +:root { + --my-component-bg: #ffffff; + --my-component-text: #101828; +} + +[data-dark-mode='true'] { + --my-component-bg: #182230; + --my-component-text: #d0d5dd; +} + +/* Use variables in component */ +.my-component { + background: var(--my-component-bg); + color: var(--my-component-text); +} +``` + +This 
approach ensures instant theme switching without JavaScript re-rendering. + +## Semantic Colour Categories + +### Background Colours (`--color-bg-*`) + +- `--color-bg-primary`: Main content background +- `--color-bg-secondary`: Secondary/elevated surfaces +- `--color-bg-tertiary`: Subtle backgrounds +- `--color-bg-toolbar`: Toolbar backgrounds +- `--color-bg-code`: Code block backgrounds + +### Foreground/Text Colours (`--color-fg-*`) + +- `--color-fg-primary`: Primary text +- `--color-fg-secondary`: Secondary/muted text +- `--color-fg-tertiary`: Subtle text +- `--color-fg-disabled`: Disabled state text + +### Border Colours (`--color-border-*`) + +- `--color-border-primary`: Primary borders +- `--color-border-secondary`: Subtle borders +- `--color-border-tertiary`: Very subtle borders + +### Link Colours (`--color-link*`) + +- `--color-link`: Default link colour +- `--color-link-hover`: Link hover state + +## Design Tokens Reference + +### Spacing (SCSS variables from `core/_variables.scss`) + +| Variable | Value | +|----------|-------| +| `$spacing-size-1` | 4px | +| `$spacing-size-2` | 8px | +| `$spacing-size-3` | 12px | +| `$spacing-size-4` | 16px | +| `$spacing-size-5` | 20px | +| `$spacing-size-6` | 24px | +| `$spacing-size-8` | 32px | +| `$spacing-size-10` | 40px | +| `$spacing-size-12` | 48px | +| `$spacing-size-16` | 64px | +| `$spacing-size-20` | 80px | +| `$spacing-size-24` | 96px | + +### Breakpoints (SCSS variables from `core/_breakpoints.scss`) + +| Variable | Value | Use For | +|----------|-------|---------| +| `$breakpoint-hero-small` | 620px | Small hero layouts | +| `$breakpoint-hero-large` | 1020px | Large hero layouts | +| `$breakpoint-landing-page-medium` | 1020px | Landing pages | +| `$breakpoint-docs-nav-medium` | 1052px | Docs navigation | +| `$breakpoint-pricing-small` | 620px | Pricing page | +| `$breakpoint-pricing-medium` | 820px | Pricing page | +| `$breakpoint-pricing-large` | 1260px | Pricing page | + +### Typography (CSS variables 
from `_root.scss`) + +| Variable | Value | +|----------|-------| +| `--text-fs-2xs` | 10px | +| `--text-fs-xs` | 12px | +| `--text-fs-sm` | 14px | +| `--text-fs-base` | 16px | +| `--text-fs-lg` | 20px | +| `--text-fs-xl` | 24px | +| `--text-fs-2xl` | 32px | +| `--text-fs-3xl` | 40px | +| `--text-lh-tight` | 1.2 | +| `--text-regular` | 400 | +| `--text-semibold` | 600 | +| `--text-bold` | 700 | + +### Layout (CSS variables from `_root.scss`) + +| Variable | Description | +|----------|-------------| +| `--layout-gap` | Grid gap (32px) | +| `--layout-max-width` | Max page width (1800px) | +| `--layout-max-width-small` | Narrow content width (1240px) | +| `--layout-horizontal-margins` | Side margins | + +### Border Radius + +- `--radius-xs` (4px), `--radius-sm` (6px), `--radius-md` (8px), `--radius-lg` (10px), `--radius-xl` (12px), `--radius-2xl` (16px) + +### Shadows + +- `--shadow-xs`, `--shadow-sm`, `--shadow-md`, `--shadow-lg`, `--shadow-xl`, `--shadow-2xl` + +## Creating Custom Page Styles + +Only create custom styles when standard classes don't meet your needs. Create SCSS modules in `packages//src/pages-styles/`: + +```scss +// my-page.module.scss +@use 'design-system' as *; + +.heroSection { + padding-top: $spacing-size-16; + background-color: var(--color-bg-site-header); + + // Dark mode support + #{$selector-darkmode} & { + background-color: var(--color-bg-secondary); + } + + // Responsive + @media screen and (min-width: $breakpoint-hero-large) { + padding-top: $spacing-size-24; + } +} +``` + +## Using Styles in Astro + +```astro +--- +import styles from '@pages-styles/my-page.module.scss'; +import classnames from 'classnames'; +--- + +
+<div class={classnames(styles.heroSection, 'layout-max-width-small')}>
+    <h1 class="text-2xl text-bold">Title</h1>
+</div>
+``` + +## Component Styling Patterns + +### Using SCSS Modules + +The website uses CSS/SCSS modules for component styling: + +```scss +// MyComponent.module.scss +.container { + background: var(--color-bg-primary); + border: 1px solid var(--color-border-primary); + color: var(--color-fg-primary); +} +``` + +## Best Practices + +### DO: + +- Use semantic variables (`--color-bg-primary`) not raw colours (`--color-gray-50`) +- Define component-specific variables that reference design system variables +- Use `[data-dark-mode="true"]` selector or `#{$selector-darkmode}` for dark mode overrides +- Test components in both light and dark modes +- Prefer standard design system utility classes over custom styles + +### DON'T: + +- Hardcode hex colours directly in components +- Use `prefers-color-scheme` media query (the site uses explicit `data-dark-mode`) +- Assume light mode is the default without testing dark mode + +## Adding New Theme-Aware Styles + +When creating new components or features that need to support both themes: + +1. **Define CSS variables** in a `