# Issue Verification Command
Verify that a GitHub issue was implemented correctly — catch missing requirements, edge cases, and drift before code review.
TL;DR: A reusable Cursor/Claude Code command that pulls a GitHub issue, extracts requirements, and checks them one by one against your codebase. Outputs a structured report: done, missing, risky.
## Why This Exists
When AI writes code from a spec, it often looks complete but isn't. The model implements 80% of the requirements and moves on. Tests pass, the code runs — but subtle things are missing. This command forces a systematic check before you (or another model) even starts reviewing.
It's designed to be the first step in a cross-model review workflow.
## The Command
Create this as a custom command in Cursor (`.cursor/commands/check-issue.md`) or as a prompt file for Claude Code.
# Check Issue Execution
## Objective
Verify that the requirements from a GitHub Issue have been fully implemented.
Identify gaps and risks.
## Input
Issue number (e.g. `123`)
## Step 1: Fetch the Issue
```bash
gh issue view <ISSUE_NUMBER> --json number,title,body,labels,state,assignees,comments --jq '.'
```
Save the full issue body for analysis.
## Step 2: Extract Requirements
List requirements in 3 categories (where applicable):
1. **Functional** — what should it do
2. **Technical** — how should it be built
3. **UI/UX** — how should it look and behave
For each requirement, define 1–3 verification criteria.
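When the issue uses GitHub task lists, requirement extraction can be partly mechanized. A minimal sketch, with the issue body inlined for illustration (in practice it comes from Step 1):

```shell
# Sketch: pull checkbox-style requirements out of an issue body.
# The inlined body is illustrative; real input comes from `gh issue view`.
body='## Requirements
- [ ] Add pagination to the results table
- [x] Validate the email field
Some prose that is not a requirement.'

# Keep only GitHub task-list lines, checked or unchecked.
printf '%s\n' "$body" | grep -E '^- \[[ x]\]'
```

This prints the two task-list lines and drops the heading and prose; anything it misses still needs a manual pass.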
## Step 3: Verify Against Code
For each requirement:
1. **Find the implementation** — use codebase search
2. **Check completeness** — does it match exactly what the issue describes
3. **Flag gaps** — what's missing, unclear, or only partially done
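The codebase search in step 1 can start as a recursive grep per requirement keyword. A self-contained sketch, where the temp directory stands in for your repo and the file name and search term are assumptions:

```shell
# Sketch: search for a requirement's likely implementation by keyword.
# The temp dir stands in for a real repo checkout.
repo=$(mktemp -d)
cat > "$repo/table.ts" <<'EOF'
export function paginate(items: unknown[], page: number) { /* ... */ }
EOF

# Case-insensitive search; a miss here flags a possibly unimplemented requirement.
grep -rni 'paginat' "$repo" || echo 'no implementation found'
```

A hit gives the model a file and line to inspect; a miss is a candidate for the "Missing" section of the report.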
## Step 4: Quality Checklist
Check only what applies to this issue:
- Architecture consistency with `CLAUDE.md`
- Type safety and input validation
- Error handling and edge cases
- Authorization (if user/org data is involved)
- UI/UX and accessibility (if UI is involved)
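Some of these checks can be approximated mechanically before the model reads any code. A sketch of the authorization item, where the file layout and the `requireAuth` helper name are both assumptions:

```shell
# Sketch: list route handlers that never reference an auth helper.
# The temp dir and handler files stand in for a real API directory.
api=$(mktemp -d)
printf 'export async function GET() { return requireAuth(); }\n' > "$api/users.ts"
printf 'export async function GET() { return data; }\n' > "$api/reports.ts"

# -L prints files with NO match: candidates for an authorization review.
grep -rL 'requireAuth' "$api"
```

This is a heuristic, not a proof: a listed file may be a public endpoint, and a matching file may still check permissions incorrectly.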
## Step 5: Run Tooling
Only if relevant:
```bash
npm run lint
npm run typecheck
```
## Step 6: Output Report
Keep it short and concrete:
- ✅ **Implemented** — requirements that are fully done
- ❌ **Missing** — requirements not implemented or incomplete
- ⚠️ **Risks** — things that work but might cause problems
- ✨ **Recommendations** — optional improvements

## Usage
### In Cursor
- Save the command as `.cursor/commands/check-issue.md`
- Open the command palette and run it
- Set the model to your preferred reviewer (we use Codex 5.3 for this — see Cross-Model Review)
- Pass the issue number when prompted
```
Check Issue Execution 42
```

### In Claude Code
Paste the command as a prompt, or save it as a custom slash command:
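Registering it as a project-level command takes two shell commands; the `.claude/commands/` path follows Claude Code's convention for project commands, and the file body is abbreviated here:

```shell
# Sketch: save the prompt as a project slash command.
mkdir -p .claude/commands
cat > .claude/commands/check-issue.md <<'EOF'
# Check Issue Execution
...
EOF
```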
```
/check-issue 42
```

## What It Catches
| Category | Example |
|---|---|
| Missing requirements | Issue says "add pagination" — implementation only renders first page |
| Partial implementations | Error handling exists but only covers happy path |
| Spec drift | Model added a caching layer that wasn't in the spec |
| Type safety gaps | Input accepts string but issue implies it should be validated |
| Auth oversights | Endpoint returns data without checking user permissions |
## Tips
- For large issues, tell the model to focus on the top 5 requirements rather than checking everything
- Combine with lint/typecheck — the automated tools catch formatting and type issues, the command catches logic and completeness issues
- Use the output as input — the report from this command is the perfect thing to feed into a second model for cross-model review
## Related
- Cross-Model Code Review — use this command's output as the first step in a multi-model review pipeline