18 changes: 9 additions & 9 deletions docs/README.skills.md
@@ -45,15 +45,15 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [arch-linux-triage](../skills/arch-linux-triage/SKILL.md)<br />`gh skills install github/awesome-copilot arch-linux-triage` | Triage and resolve Arch Linux issues with pacman, systemd, and rolling-release best practices. | None |
| [architecture-blueprint-generator](../skills/architecture-blueprint-generator/SKILL.md)<br />`gh skills install github/awesome-copilot architecture-blueprint-generator` | Comprehensive project architecture blueprint generator that analyzes codebases to create detailed architectural documentation. Automatically detects technology stacks and architectural patterns, generates visual diagrams, documents implementation patterns, and provides extensible blueprints for maintaining architectural consistency and guiding new development. | None |
| [arduino-azure-iot-edge-integration](../skills/arduino-azure-iot-edge-integration/SKILL.md)<br />`gh skills install github/awesome-copilot arduino-azure-iot-edge-integration` | Design and implement Arduino integration with Azure IoT Hub and IoT Edge, including secure provisioning, resilient telemetry, command handling, and production guardrails. | `references/arduino-iot-checklist.md`<br />`references/arduino-official-best-practices.md` |
| [arize-ai-provider-integration](../skills/arize-ai-provider-integration/SKILL.md)<br />`gh skills install github/awesome-copilot arize-ai-provider-integration` | INVOKE THIS SKILL when creating, reading, updating, or deleting Arize AI integrations. Covers listing integrations, creating integrations for any supported LLM provider (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Vertex AI, Gemini, NVIDIA NIM, custom), updating credentials or metadata, and deleting integrations using the ax CLI. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-annotation](../skills/arize-annotation/SKILL.md)<br />`gh skills install github/awesome-copilot arize-annotation` | INVOKE THIS SKILL when creating, managing, or using annotation configs or annotation queues on Arize (categorical, continuous, freeform), or applying human annotations to project spans via the Python SDK. Configs are the label schema for human feedback; queues are review workflows that route records to annotators. Triggers: annotation config, annotation queue, label schema, human feedback schema, bulk annotate spans, update_annotations, labeling queue, annotate record. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-dataset](../skills/arize-dataset/SKILL.md)<br />`gh skills install github/awesome-copilot arize-dataset` | INVOKE THIS SKILL when creating, managing, or querying Arize datasets and examples. Also use when the user needs test data or evaluation examples for their model. Covers dataset CRUD, appending examples, exporting data, and file-based dataset creation using the ax CLI. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-evaluator](../skills/arize-evaluator/SKILL.md)<br />`gh skills install github/awesome-copilot arize-evaluator` | INVOKE THIS SKILL for LLM-as-judge evaluation workflows on Arize: creating/updating evaluators, running evaluations on spans or experiments, tasks, trigger-run, column mapping, and continuous monitoring. Use when the user says: create an evaluator, LLM judge, hallucination/faithfulness/correctness/relevance, run eval, score my spans or experiment, ax tasks, trigger-run, trigger eval, column mapping, continuous monitoring, query filter for evals, evaluator version, or improve an evaluator prompt. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-experiment](../skills/arize-experiment/SKILL.md)<br />`gh skills install github/awesome-copilot arize-experiment` | INVOKE THIS SKILL when creating, running, or analyzing Arize experiments. Also use when the user wants to evaluate or measure model performance, compare models (including GPT-4, Claude, or others), or assess how well their AI is doing. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-instrumentation](../skills/arize-instrumentation/SKILL.md)<br />`gh skills install github/awesome-copilot arize-instrumentation` | INVOKE THIS SKILL when adding Arize AX tracing or observability to an app for the first time, or when the user wants to instrument their LLM app or get started with LLM observability. Follow the Agent-Assisted Tracing two-phase flow: analyze the codebase (read-only), then implement after user confirmation. When the app uses LLM tool/function calling, add manual CHAIN + TOOL spans. Leverages https://arize.com/docs/ax/alyx/tracing-assistant and https://arize.com/docs/PROMPT.md. | `references/ax-profiles.md` |
| [arize-link](../skills/arize-link/SKILL.md)<br />`gh skills install github/awesome-copilot arize-link` | Generate deep links to the Arize UI. Use when the user wants a clickable URL to open or share a specific trace, span, session, dataset, labeling queue, evaluator, or annotation config, or when sharing Arize resources with team members. | `references/EXAMPLES.md` |
| [arize-prompt-optimization](../skills/arize-prompt-optimization/SKILL.md)<br />`gh skills install github/awesome-copilot arize-prompt-optimization` | INVOKE THIS SKILL when optimizing, improving, or debugging LLM prompts using production trace data, evaluations, and annotations. Also use when the user wants to make their AI respond better or improve AI output quality. Covers extracting prompts from spans, gathering performance signal, and running a data-driven optimization loop using the ax CLI. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-trace](../skills/arize-trace/SKILL.md)<br />`gh skills install github/awesome-copilot arize-trace` | INVOKE THIS SKILL when downloading, exporting, or inspecting Arize traces and spans, or when a user wants to look at what their LLM app is doing using existing trace data, or when an already-instrumented app has a bug or error to investigate. Use for debugging unknown runtime issues, failures, and behavior regressions. Covers exporting traces by ID, spans by ID, sessions by ID, and root-cause investigation with the ax CLI. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-ai-provider-integration](../skills/arize-ai-provider-integration/SKILL.md)<br />`gh skills install github/awesome-copilot arize-ai-provider-integration` | Creates, reads, updates, and deletes Arize AI integrations that store LLM provider credentials used by evaluators and other Arize features. Supports any LLM provider (e.g. OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Vertex AI, Gemini, NVIDIA NIM). Use when the user mentions AI integration, LLM provider credentials, create integration, list integrations, update credentials, delete integration, or connecting an LLM provider to Arize. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-annotation](../skills/arize-annotation/SKILL.md)<br />`gh skills install github/awesome-copilot arize-annotation` | Creates and manages annotation configs (categorical, continuous, freeform label schemas) and annotation queues (human review workflows) on Arize. Applies human annotations to project spans via the Python SDK. Use when the user mentions annotation config, annotation queue, label schema, human feedback, bulk annotate spans, update_annotations, labeling queue, annotate record, or human review. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-dataset](../skills/arize-dataset/SKILL.md)<br />`gh skills install github/awesome-copilot arize-dataset` | Creates, manages, and queries Arize datasets and examples. Covers dataset CRUD, appending examples, exporting data, and file-based dataset creation using the ax CLI. Use when the user needs test data, evaluation examples, or mentions create dataset, list datasets, export dataset, append examples, dataset version, golden dataset, or test set. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-evaluator](../skills/arize-evaluator/SKILL.md)<br />`gh skills install github/awesome-copilot arize-evaluator` | Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-experiment](../skills/arize-experiment/SKILL.md)<br />`gh skills install github/awesome-copilot arize-experiment` | Creates, runs, and analyzes Arize experiments for evaluating and comparing model performance. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. Use when the user mentions create experiment, run experiment, compare models, model performance, evaluate AI, experiment results, benchmark, A/B test models, or measure accuracy. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-instrumentation](../skills/arize-instrumentation/SKILL.md)<br />`gh skills install github/awesome-copilot arize-instrumentation` | Adds Arize AX tracing to an LLM application for the first time. Follows a two-phase agent-assisted flow to analyze the codebase then implement instrumentation after user confirmation. Use when the user wants to instrument their app, add tracing from scratch, set up LLM observability, integrate OpenTelemetry or openinference, or get started with Arize tracing. | `references/ax-profiles.md` |
| [arize-link](../skills/arize-link/SKILL.md)<br />`gh skills install github/awesome-copilot arize-link` | Generates deep links to the Arize UI for traces, spans, sessions, datasets, labeling queues, evaluators, and annotation configs. Produces clickable URLs for sharing Arize resources with team members. Use when the user wants to link to or open a trace, span, session, dataset, evaluator, or annotation config in the Arize UI. | `references/EXAMPLES.md` |
| [arize-prompt-optimization](../skills/arize-prompt-optimization/SKILL.md)<br />`gh skills install github/awesome-copilot arize-prompt-optimization` | Optimizes, improves, and debugs LLM prompts using production trace data, evaluations, and annotations. Extracts prompts from spans, gathers performance signal, and runs a data-driven optimization loop using the ax CLI. Use when the user mentions optimize prompt, improve prompt, make AI respond better, improve output quality, prompt engineering, prompt tuning, or system prompt improvement. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [arize-trace](../skills/arize-trace/SKILL.md)<br />`gh skills install github/awesome-copilot arize-trace` | Downloads, exports, and inspects existing Arize traces and spans to understand what an LLM app is doing or debug runtime issues. Covers exporting traces by ID, spans by ID, sessions by ID, and root-cause investigation using the ax CLI. Use when the user wants to look at existing trace data, see what their LLM app is doing, export traces, download spans, investigate errors, or analyze behavior regressions. | `references/ax-profiles.md`<br />`references/ax-setup.md` |
| [aspire](../skills/aspire/SKILL.md)<br />`gh skills install github/awesome-copilot aspire` | Aspire skill covering the Aspire CLI, AppHost orchestration, service discovery, integrations, MCP server, VS Code extension, Dev Containers, GitHub Codespaces, templates, dashboard, and deployment. Use when the user asks to create, run, debug, configure, deploy, or troubleshoot an Aspire distributed application. | `references/architecture.md`<br />`references/cli-reference.md`<br />`references/dashboard.md`<br />`references/deployment.md`<br />`references/integrations-catalog.md`<br />`references/mcp-server.md`<br />`references/polyglot-apis.md`<br />`references/testing.md`<br />`references/troubleshooting.md` |
| [aspnet-minimal-api-openapi](../skills/aspnet-minimal-api-openapi/SKILL.md)<br />`gh skills install github/awesome-copilot aspnet-minimal-api-openapi` | Create ASP.NET Minimal API endpoints with proper OpenAPI documentation | None |
| [audit-integrity](../skills/audit-integrity/SKILL.md)<br />`gh skills install github/awesome-copilot audit-integrity` | Shared audit integrity framework for all AppSec agents — enforces output quality, intellectual honesty, and continuous improvement through anti-rationalization guards, self-critique loops, retry protocols, non-negotiable behaviors, self-reflection quality gates (1-10 scoring, ≥8 threshold), and a self-learning system with lesson/memory governance for security analysis agents. | `references/anti-rationalization-guard.md`<br />`references/clarification-protocol.md`<br />`references/non-negotiable-behaviors.md`<br />`references/retry-protocol.md`<br />`references/self-critique-loop.md`<br />`references/self-learning-system.md`<br />`references/self-reflection-quality-gate.md` |
6 changes: 5 additions & 1 deletion skills/arize-ai-provider-integration/SKILL.md
@@ -1,6 +1,10 @@
---
name: arize-ai-provider-integration
description: "INVOKE THIS SKILL when creating, reading, updating, or deleting Arize AI integrations. Covers listing integrations, creating integrations for any supported LLM provider (OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Vertex AI, Gemini, NVIDIA NIM, custom), updating credentials or metadata, and deleting integrations using the ax CLI."
description: Creates, reads, updates, and deletes Arize AI integrations that store LLM provider credentials used by evaluators and other Arize features. Supports any LLM provider (e.g. OpenAI, Anthropic, Azure OpenAI, AWS Bedrock, Vertex AI, Gemini, NVIDIA NIM). Use when the user mentions AI integration, LLM provider credentials, create integration, list integrations, update credentials, delete integration, or connecting an LLM provider to Arize.
metadata:
author: arize
version: "1.0"
compatibility: Requires the ax CLI and a configured Arize profile.
---

# Arize AI Integration Skill
6 changes: 5 additions & 1 deletion skills/arize-annotation/SKILL.md
@@ -1,6 +1,10 @@
---
name: arize-annotation
description: "INVOKE THIS SKILL when creating, managing, or using annotation configs or annotation queues on Arize (categorical, continuous, freeform), or applying human annotations to project spans via the Python SDK. Configs are the label schema for human feedback; queues are review workflows that route records to annotators. Triggers: annotation config, annotation queue, label schema, human feedback schema, bulk annotate spans, update_annotations, labeling queue, annotate record."
description: Creates and manages annotation configs (categorical, continuous, freeform label schemas) and annotation queues (human review workflows) on Arize. Applies human annotations to project spans via the Python SDK. Use when the user mentions annotation config, annotation queue, label schema, human feedback, bulk annotate spans, update_annotations, labeling queue, annotate record, or human review.
metadata:
author: arize
version: "1.0"
compatibility: Requires the ax CLI and a configured Arize profile.
---

# Arize Annotation Skill
6 changes: 5 additions & 1 deletion skills/arize-dataset/SKILL.md
@@ -1,6 +1,10 @@
---
name: arize-dataset
description: "INVOKE THIS SKILL when creating, managing, or querying Arize datasets and examples. Also use when the user needs test data or evaluation examples for their model. Covers dataset CRUD, appending examples, exporting data, and file-based dataset creation using the ax CLI."
description: Creates, manages, and queries Arize datasets and examples. Covers dataset CRUD, appending examples, exporting data, and file-based dataset creation using the ax CLI. Use when the user needs test data, evaluation examples, or mentions create dataset, list datasets, export dataset, append examples, dataset version, golden dataset, or test set.
metadata:
author: arize
version: "1.0"
compatibility: Requires the ax CLI and a configured Arize profile.
---

# Arize Dataset Skill
6 changes: 5 additions & 1 deletion skills/arize-evaluator/SKILL.md
@@ -1,6 +1,10 @@
---
name: arize-evaluator
description: "INVOKE THIS SKILL for LLM-as-judge evaluation workflows on Arize: creating/updating evaluators, running evaluations on spans or experiments, tasks, trigger-run, column mapping, and continuous monitoring. Use when the user says: create an evaluator, LLM judge, hallucination/faithfulness/correctness/relevance, run eval, score my spans or experiment, ax tasks, trigger-run, trigger eval, column mapping, continuous monitoring, query filter for evals, evaluator version, or improve an evaluator prompt."
description: Handles LLM-as-judge evaluation workflows on Arize including creating/updating evaluators, running evaluations on spans or experiments, managing tasks, trigger-run operations, column mapping, and continuous monitoring. Use when the user mentions create evaluator, LLM judge, hallucination, faithfulness, correctness, relevance, run eval, score spans, score experiment, trigger-run, column mapping, continuous monitoring, or improve evaluator prompt.
metadata:
author: arize
version: "1.0"
compatibility: Requires the ax CLI and a configured Arize profile with an AI integration.
---

# Arize Evaluator Skill
6 changes: 5 additions & 1 deletion skills/arize-experiment/SKILL.md
@@ -1,6 +1,10 @@
---
name: arize-experiment
description: "INVOKE THIS SKILL when creating, running, or analyzing Arize experiments. Also use when the user wants to evaluate or measure model performance, compare models (including GPT-4, Claude, or others), or assess how well their AI is doing. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI."
description: Creates, runs, and analyzes Arize experiments for evaluating and comparing model performance. Covers experiment CRUD, exporting runs, comparing results, and evaluation workflows using the ax CLI. Use when the user mentions create experiment, run experiment, compare models, model performance, evaluate AI, experiment results, benchmark, A/B test models, or measure accuracy.
metadata:
author: arize
version: "1.0"
compatibility: Requires the ax CLI and a configured Arize profile.
---

# Arize Experiment Skill