# Roadmap
This is the public roadmap for CAP Pro. It is intentionally short: we ship features that need to exist, not features that might be cool. If you'd like to influence what's next, open an issue.
Status legend: 🟢 shipped · 🟡 in progress · 🔵 planned · ⚪ exploring · ❌ explicitly not on the roadmap
## Now (CAP Pro 1.0, May 2026)
- 🟢 Hard rebrand from code-as-plan to cap-pro; version reset to 1.0.0
- 🟢 Sharded Feature Map (`features/<ID>.md`) with a 10–50× token reduction
- 🟢 V6 per-feature memory layout
- 🟢 9-agent topology (5 per-feature + 4 project-wide)
- 🟢 Multi-user handoff snapshots (forward + reverse, with structured briefings)
- 🟢 Auto-trigger contract for slash commands
- 🟢 Frontend Sprint Pattern (Phase-1 / Phase-2 auto-detection)
- 🟢 8 runtimes from one install
- 🟢 VitePress documentation site (this site)
- 🟢 GitHub Pages deploy on push
- 🟢 Auto-publish on `package.json` version bump
## Next (CAP Pro 1.1, target Q3 2026)
- 🟡 Telemetry & feedback: opt-in usage data collection (see Data Collection Roadmap below)
- 🔵 Custom-agent SDK: drop a JSON manifest into `.cap/agents/` and CAP Pro discovers and routes to your custom agent
- 🔵 `/cap:bench`: a benchmark command that measures CAP Pro's token savings against running the same workflow without the sharded layout, V6 memory, or focused agents
- 🔵 Plugin marketplace integration for OpenCode and Gemini CLI (currently Claude Code only)
- 🔵 Native VS Code extension: Feature Map state, hotspot highlights, and the AC checklist inline in the editor
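The custom-agent SDK is still planned and no manifest format has shipped. As a sketch only, a manifest dropped into `.cap/agents/` might look something like this (every field name here is an assumption, not a final schema):

```json
{
  "id": "security-auditor",
  "description": "Reviews diffs for injection and auth issues",
  "routing": {
    "triggers": ["security", "auth"],
    "mode": "per-feature"
  },
  "prompt": "prompts/security-auditor.md"
}
```

The idea is that CAP Pro would scan `.cap/agents/*.json` at startup and add any valid manifest to its routing table alongside the built-in agents.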
## Later (CAP Pro 1.2 and beyond)
- ⚪ Distributed multi-user mode: `activeUser` is currently per-machine; explore a sync layer for genuinely concurrent multi-user sessions
- ⚪ Auto-detection of the right ignore globs for monorepos, via `.gitignore` and `package.json` `workspaces` introspection
- ⚪ cap-architect-driven module-split proposals that produce diffs (not just suggestions), gated behind explicit user approval
- ⚪ Feature Map import/export in OpenAPI / JSON Schema for downstream tooling
- ⚪ Agent observability: JSON-streamed agent logs with structured trace IDs, integrated with the OpenTelemetry ecosystem
- ⚪ Cross-project memory: patterns extracted from many projects, surfaced as portfolio-wide pitfalls (opt-in, anonymous)
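Agent observability is exploratory, but to make the idea concrete: a JSON-streamed agent log line with a structured trace ID might look like the following (field names are illustrative, loosely borrowing the trace/span vocabulary from OpenTelemetry; nothing here is implemented):

```json
{
  "ts": "2026-05-01T12:00:00Z",
  "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736",
  "span_id": "00f067aa0ba902b7",
  "agent": "cap-architect",
  "event": "agent.start"
}
```

One object per line keeps the stream greppable and lets an OpenTelemetry collector ingest it without a custom parser.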
## Explicitly NOT on the roadmap
- ❌ A CAP Pro web UI / dashboard. The point is to stay close to the code; a web UI duplicates effort and drifts from the truth.
- ❌ A CAP Pro hosted SaaS. Local-first, file-first: your project's state lives in your repo. End of story.
- ❌ A CAP Pro proprietary LLM. We are runtime-agnostic on purpose.
- ❌ A closed-source CAP Pro Premium tier. MIT license, full stop.
- ❌ Replacing your IDE. CAP Pro is a workflow framework, not an editor.
## Influencing the roadmap
We weigh four signals:
- GitHub issues with concrete reproduction. A bug with a 5-line repro is worth 10 vague feature requests.
- Telemetry signals (once we ship telemetry; see below). If 80% of users never use a command, we'll deprecate it.
- Real-project usage by the maintainers. CAP Pro is dogfooded on projects we ship; the pain we hit becomes a priority.
- Aligned philosophy. Features that violate the Code-First principle (e.g. "let's add a separate REQUIREMENTS.md format") get rejected on principle.
## Data Collection Roadmap
To prioritise the roadmap honestly, we want opt-in usage telemetry. Here's a brainstorm of what we'd want to collect: none of it is shipped yet, and all of it is opt-in only.
### What we'd want to know
| Data point | Why we want it | Sensitivity |
|---|---|---|
| Which slash commands you run (count by command, not args) | Identify dead commands; deprecate those nobody uses | Low |
| Which agents you spawn (count, mode, time-to-complete) | Detect agents that are slow / hang / fail | Low |
| Feature Map size (number of features, AC count distribution) | Decide when to make sharded layout the default | Low |
| Memory size (number of decisions / pitfalls / patterns) | Decide when to make V6 the default | Low |
| Auto-trigger hit/miss rate | Improve the auto-trigger contract | Low |
| Runtime mix (Claude Code vs Gemini vs Cursor vs …) | Prioritise which runtimes deserve more polish | Low |
| Crash / error rate per command | Find and fix broken paths | Low |
| `/cap:debug` hypothesis count and resolution rate | Improve the debugger's prompts | Medium |
| Time spent in Phase-1 vs Phase-2 of frontend sprints | Tune the auto-detection thresholds | Medium |
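To show how little the table above actually implies sending, here is a minimal sketch of what a single event could look like as a Python dict. Everything is assumed (no event schema exists yet); the point is that only a command name, a count, and a random opaque session ID appear, never arguments, paths, or content:

```python
import json
import uuid

def make_event(command: str, count: int) -> dict:
    """Build one hypothetical telemetry event (illustrative shape, not a shipped schema)."""
    return {
        "session_id": uuid.uuid4().hex,  # random, opaque, rotated per session
        "event": "command_run",
        "command": command,              # command name only, never its arguments
        "count": count,
    }

event = make_event("/cap:bench", 3)
line = json.dumps(event)  # one JSON object per line -> a .jsonl stream
```

Counting by command name rather than logging invocations with arguments is what keeps every row in the table at low or medium sensitivity.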
### What we will NEVER collect
- File contents, code snippets, commit messages, branch names, file paths
- Feature Map content (titles, descriptions, ACs)
- Tag content
- Memory file content
- Session conversation transcripts
- Project name, repo URL, or any identifier that links data to a specific project
- User identifiers, emails, IPs (we use random opaque session IDs that rotate)
### How we'd ship it
- Opt-in only. A new `/cap:telemetry on` command. Default: off. The first run after upgrade asks once, never again.
- Local file you can inspect. `.cap/telemetry-pending.jsonl`: every event is written here first, so you can `cat` it and see exactly what's about to be sent.
- Batched + forgettable. Sent once a week if online; if offline, events are deleted after 30 days unsent.
- `/cap:telemetry inspect` shows the last 100 events; `/cap:telemetry off` turns telemetry off and deletes the pending file.
- Open-source backend. The aggregation server is in the same repo, so you can audit it.
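None of this pipeline exists yet; as a sketch of the local-first flow under the rules above (the file path comes from the list, the function names and event shape are assumptions):

```python
import json
import time
from pathlib import Path

PENDING = Path(".cap/telemetry-pending.jsonl")
MAX_AGE = 30 * 24 * 3600  # unsent events are forgotten after 30 days

def record(event: dict) -> None:
    """Append an event to the local pending file first, so it can be inspected before any upload."""
    event["ts"] = int(time.time())
    PENDING.parent.mkdir(parents=True, exist_ok=True)
    with PENDING.open("a") as f:
        f.write(json.dumps(event) + "\n")

def prune(now: float) -> list[dict]:
    """Drop events older than 30 days and return what is still eligible for the weekly batch."""
    if not PENDING.exists():
        return []
    events = [json.loads(line) for line in PENDING.read_text().splitlines() if line]
    fresh = [e for e in events if now - e["ts"] <= MAX_AGE]
    PENDING.write_text("".join(json.dumps(e) + "\n" for e in fresh))
    return fresh
```

Because every event lands in the JSONL file before anything else happens, `cat .cap/telemetry-pending.jsonl` is always the ground truth for what would be sent, and a machine that stays offline simply ages its events out.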
If you want to weigh in on the telemetry design before it ships, please open an issue; we'd rather get this right than ship it fast.