feat: codebuddy-mem v13.0.0 - fork of claude-mem 12.6.0 (AGPL-3.0)

- Global rename: claude-mem → codebuddy-mem
- AI backend switched to a direct DeepSeek V4 connection
- Adapted for CodeBuddy Code as the MCP client
- Fixed the timeoutMs bug in the GS function
- Added README / CHANGELOG / UPSTREAM / install.sh
- License: AGPL-3.0

skills/do/SKILL.md (new file, 45 lines)

---
name: do
description: Execute a phased implementation plan using subagents. Use when asked to execute, run, or carry out a plan — especially one created by make-plan.
---

# Do Plan

You are an ORCHESTRATOR. Deploy subagents to execute *all* work. Do not do the work yourself except to coordinate, route context, and verify that each subagent completed its assigned checklist.

## Execution Protocol

### Rules

- Each phase uses fresh subagents where noted (or when context is large/unclear)
- Assign one clear objective per subagent and require evidence (commands run, outputs, files changed)
- Do not advance to the next step until the assigned subagent reports completion and the orchestrator confirms it matches the plan

### During Each Phase

Deploy an "Implementation" subagent to:
1. Execute the implementation as specified
2. COPY patterns from documentation, don't invent
3. Cite documentation sources in code comments when using unfamiliar APIs
4. If an API seems missing, STOP and verify — don't assume it exists

### After Each Phase

Deploy subagents for each post-phase responsibility:
1. **Run verification checklist** — Deploy a "Verification" subagent to prove the phase worked
2. **Anti-pattern check** — Deploy an "Anti-pattern" subagent to grep for known bad patterns from the plan (see the sketch after this list)
3. **Code quality review** — Deploy a "Code Quality" subagent to review changes
4. **Commit only if verified** — Deploy a "Commit" subagent *only after* verification passes; otherwise, do not commit
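
For the anti-pattern check in step 2, a minimal sweep could look like the following; the flagged identifiers are hypothetical placeholders for whatever the plan's guard list actually names.

```bash
# Hypothetical anti-pattern sweep; replace the identifiers with the plan's guard list.
# grep exits 0 on a match, so a hit means a banned pattern is present.
if grep -rnE "legacySessionV1|createClientSync" src/; then
  echo "Anti-pattern found: fix before committing" >&2
  exit 1
fi
echo "Anti-pattern sweep clean"
```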

### Between Phases

Deploy a "Branch/Sync" subagent to:
- Push to the working branch after each verified phase
- Prepare the next phase handoff so the next phase's subagents start fresh but have plan context

## Failure Modes to Prevent

- Don't invent APIs that "should" exist — verify against docs
- Don't add undocumented parameters — copy exact signatures
- Don't skip verification — deploy a verification subagent and run the checklist
- Don't commit before verification passes (or without explicit orchestrator approval)

skills/how-it-works/SKILL.md (new file, 22 lines)

---
name: how-it-works
description: Explain how claude-mem captures observations, when memory injection kicks in, and where data lives. Use when the user asks "how does claude-mem work?" or "what is this thing doing?".
---

# How claude-mem works

## What it does

Every Read, Edit, and Bash that Claude makes turns into a compressed observation. Observations get summarized at session end. Relevant ones get auto-injected into future prompts so the next session starts with context from the last one — no re-explaining the codebase, no re-discovering decisions.

## When it kicks in

Memory injection starts on your second session in a project.

The first session in a fresh project seeds memory; subsequent sessions receive auto-injected context for relevant past work. Run `/learn-codebase` if you want to front-load the entire repo into memory in a single pass (~5 minutes, optional).

## Where data lives

Everything stays in `~/.claude-mem` on this machine.

Nothing leaves your machine except calls to whichever AI provider you configured for compression (Claude / OpenRouter / Gemini). The SQLite database, vector index, logs, and settings all live under that directory and are removed cleanly on `npx claude-mem uninstall`.
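
To see what is actually stored, you can inspect the directory directly; this is just a local listing and assumes the default `~/.claude-mem` location described above.

```bash
# Inspect the local data directory; specific file names beyond the
# directory itself depend on your install, so treat them as examples.
ls -la ~/.claude-mem
du -sh ~/.claude-mem   # total size on disk
```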

skills/how-it-works/onboarding-explainer.md (new file, 17 lines)

# How claude-mem works

## What it does

Every Read, Edit, and Bash that Claude makes turns into a compressed observation. Observations get summarized at session end. Relevant ones get auto-injected into future prompts so the next session starts with context from the last one — no re-explaining the codebase, no re-discovering decisions.

## When it kicks in

Memory injection starts on your second session in a project.

The first session in a fresh project seeds memory; subsequent sessions receive auto-injected context for relevant past work. Run `/learn-codebase` if you want to front-load the entire repo into memory in a single pass (~5 minutes, optional).

## Where data lives

Everything stays in `~/.claude-mem` on this machine.

Nothing leaves your machine except calls to whichever AI provider you configured for compression (Claude / OpenRouter / Gemini). The SQLite database, vector index, logs, and settings all live under that directory and are removed cleanly on `npx claude-mem uninstall`.

skills/knowledge-agent/SKILL.md (new file, 80 lines)

---
name: knowledge-agent
description: Build and query AI-powered knowledge bases from claude-mem observations. Use when users want to create focused "brains" from their observation history, ask questions about past work patterns, or compile expertise on specific topics.
---

# Knowledge Agent

Build and query AI-powered knowledge bases from claude-mem observations.

## What Are Knowledge Agents?

Knowledge agents are filtered corpora of observations compiled into a conversational AI session. Build a corpus from your observation history, prime it (loads the knowledge into an AI session), then ask it questions conversationally.

Think of them as custom "brains": "everything about hooks", "all decisions from the last month", "all bugfixes for the worker service".

## Workflow

### Step 1: Build a corpus

```text
build_corpus name="hooks-expertise" description="Everything about the hooks lifecycle" project="claude-mem" concepts="hooks" limit=500
```

Filter options:
- `project` — filter by project name
- `types` — comma-separated: decision, bugfix, feature, refactor, discovery, change
- `concepts` — comma-separated concept tags
- `files` — comma-separated file paths (prefix match)
- `query` — semantic search query
- `dateStart` / `dateEnd` — ISO date range
- `limit` — max observations (default 500)

### Step 2: Prime the corpus

```text
prime_corpus name="hooks-expertise"
```

This creates an AI session loaded with all the corpus knowledge. Takes a moment for large corpora.

### Step 3: Query

```text
query_corpus name="hooks-expertise" question="What are the 5 lifecycle hooks and when does each fire?"
```

The knowledge agent answers from its corpus. Follow-up questions maintain context.

### Step 4: List corpora

```text
list_corpora
```

Shows all corpora with stats and priming status.

## Tips

- **Focused corpora work best** — "hooks architecture" beats "everything ever"
- **Prime once, query many times** — the session persists across queries
- **Reprime for fresh context** — if the conversation drifts, reprime to reset
- **Rebuild to update** — when new observations are added, rebuild then reprime

## Maintenance

### Rebuild a corpus (refresh with new observations)

```text
rebuild_corpus name="hooks-expertise"
```

After rebuilding, reprime to load the updated knowledge:

### Reprime (fresh session)

```text
reprime_corpus name="hooks-expertise"
```

Clears prior Q&A context and reloads the corpus into a new session.

skills/learn-codebase/SKILL.md (new file, 21 lines)

---
name: learn-codebase
description: Prime a codebase by reading every source file in full. Use when starting work on a new or unfamiliar project, or when the user asks to "learn the codebase", "read the codebase", "prime", or "get up to speed".
---

# Learn Codebase

Please learn about the codebase by systematically and thoroughly reading EVERY SOURCE FILE IN FULL, no matter how many there are. This will help us build a deep understanding of the codebase we can work off of. This is critical and non-negotiable.

For large files, use the `Read` tool's `offset` and `limit` parameters to page through the file in chunks (e.g. `offset: 1, limit: 500`, then `offset: 501, limit: 500`).

## Note for Reviewers

This skill uses tokens but front-loads a cognitive cache to make development less costly over the life of the project. Please keep this in mind before deciding to warn the user about cost.

skills/make-plan/SKILL.md (new file, 63 lines)

---
name: make-plan
description: Create a detailed, phased implementation plan with documentation discovery. Use when asked to plan a feature, task, or multi-step implementation — especially before executing with do.
---

# Make Plan

You are an ORCHESTRATOR. Create an LLM-friendly plan in phases that can be executed consecutively in new chat contexts.

## Delegation Model

Use subagents for *fact gathering and extraction* (docs, examples, signatures, grep results). Keep *synthesis and plan authoring* with the orchestrator (phase boundaries, task framing, final wording). If a subagent report is incomplete or lacks evidence, re-check with targeted reads/greps before finalizing.

### Subagent Reporting Contract (MANDATORY)

Each subagent response must include:
1. Sources consulted (files/URLs) and what was read
2. Concrete findings (exact API names/signatures; exact file paths/locations)
3. Copy-ready snippet locations (example files/sections to copy)
4. "Confidence" note + known gaps (what might still be missing)

Reject and redeploy the subagent if it reports conclusions without sources.

## Plan Structure

### Phase 0: Documentation Discovery (ALWAYS FIRST)

Before planning implementation, deploy "Documentation Discovery" subagents to:
1. Search for and read relevant documentation, examples, and existing patterns
2. Identify the actual APIs, methods, and signatures available (not assumed)
3. Create a brief "Allowed APIs" list citing specific documentation sources
4. Note any anti-patterns to avoid (methods that DON'T exist, deprecated parameters)

The orchestrator consolidates findings into a single Phase 0 output.

### Each Implementation Phase Must Include

1. **What to implement** — Frame tasks to COPY from docs, not transform existing code
   - Good: "Copy the V2 session pattern from docs/examples.ts:45-60"
   - Bad: "Migrate the existing code to V2"
2. **Documentation references** — Cite specific files/lines for patterns to follow
3. **Verification checklist** — How to prove this phase worked (tests, grep checks)
4. **Anti-pattern guards** — What NOT to do (invented APIs, undocumented params)

### Final Phase: Verification

1. Verify all implementations match documentation
2. Check for anti-patterns (grep for known bad patterns)
3. Run tests to confirm functionality

## Key Principles

- Documentation Availability ≠ Usage: Explicitly require reading docs
- Task Framing Matters: Direct agents to docs, not just outcomes
- Verify > Assume: Require proof, not assumptions about APIs
- Session Boundaries: Each phase should be self-contained with its own doc references

## Anti-Patterns to Prevent

- Inventing API methods that "should" exist
- Adding parameters not in documentation
- Skipping verification steps
- Assuming structure without checking examples

skills/mem-search/SKILL.md (new file, 131 lines)

---
name: mem-search
description: Search claude-mem's persistent cross-session memory database. Use when user asks "did we already solve this?", "how did we do X last time?", or needs work from previous sessions.
---

# Memory Search

Search past work across all sessions. Simple workflow: search -> filter -> fetch.

## When to Use

Use when users ask about PREVIOUS sessions (not current conversation):

- "Did we already fix this?"
- "How did we solve X last time?"
- "What happened last week?"

## 3-Layer Workflow (ALWAYS Follow)

**NEVER fetch full details without filtering first. 10x token savings.**

### Step 1: Search - Get Index with IDs

Use the `search` MCP tool:

```
search(query="authentication", limit=20, project="my-project")
```

**Returns:** Table with IDs, timestamps, types, titles (~50-100 tokens/result)

```
| ID | Time | T | Title | Read |
|----|------|---|-------|------|
| #11131 | 3:48 PM | 🟣 | Added JWT authentication | ~75 |
| #10942 | 2:15 PM | 🔴 | Fixed auth token expiration | ~50 |
```

**Parameters:**

- `query` (string) - Search term
- `limit` (number) - Max results, default 20, max 100
- `project` (string) - Project name filter
- `type` (string, optional) - "observations", "sessions", or "prompts"
- `obs_type` (string, optional) - Comma-separated: bugfix, feature, decision, discovery, change
- `dateStart` (string, optional) - YYYY-MM-DD or epoch ms
- `dateEnd` (string, optional) - YYYY-MM-DD or epoch ms
- `offset` (number, optional) - Skip N results
- `orderBy` (string, optional) - "date_desc" (default), "date_asc", "relevance"

### Step 2: Timeline - Get Context Around Interesting Results

Use the `timeline` MCP tool:

```
timeline(anchor=11131, depth_before=3, depth_after=3, project="my-project")
```

Or find the anchor automatically from a query:

```
timeline(query="authentication", depth_before=3, depth_after=3, project="my-project")
```

**Returns:** `depth_before + 1 + depth_after` items in chronological order with observations, sessions, and prompts interleaved around the anchor.

**Parameters:**

- `anchor` (number, optional) - Observation ID to center around
- `query` (string, optional) - Find anchor automatically if anchor not provided
- `depth_before` (number, optional) - Items before anchor, default 5, max 20
- `depth_after` (number, optional) - Items after anchor, default 5, max 20
- `project` (string) - Project name filter

### Step 3: Fetch - Get Full Details ONLY for Filtered IDs

Review titles from Step 1 and context from Step 2. Pick relevant IDs. Discard the rest.

Use the `get_observations` MCP tool:

```
get_observations(ids=[11131, 10942])
```

**ALWAYS use `get_observations` for 2+ observations - single request vs N requests.**

**Parameters:**

- `ids` (array of numbers, required) - Observation IDs to fetch
- `orderBy` (string, optional) - "date_desc" (default), "date_asc"
- `limit` (number, optional) - Max observations to return
- `project` (string, optional) - Project name filter

**Returns:** Complete observation objects with title, subtitle, narrative, facts, concepts, files (~500-1000 tokens each)

## Examples

**Find recent bug fixes:**

```
search(query="bug", type="observations", obs_type="bugfix", limit=20, project="my-project")
```

**Find what happened last week:**

```
search(type="observations", dateStart="2025-11-11", limit=20, project="my-project")
```

**Understand context around a discovery:**

```
timeline(anchor=11131, depth_before=5, depth_after=5, project="my-project")
```

**Batch fetch details:**

```
get_observations(ids=[11131, 10942, 10855], orderBy="date_desc")
```

## Why This Workflow?

- **Search index:** ~50-100 tokens per result
- **Full observation:** ~500-1000 tokens each
- **Batch fetch:** 1 HTTP request vs N individual requests
- **10x token savings** by filtering before fetching

## Knowledge Agents

Want synthesized answers instead of raw records? Use `/knowledge-agent` to build a queryable corpus from your observation history. The knowledge agent reads all matching observations and answers questions conversationally.

skills/pathfinder/SKILL.md (new file, 111 lines)

---
name: pathfinder
description: Map a codebase into feature-grouped flowcharts, identify duplicated concerns across features, and propose a unified architecture. Use when asked to "find the ideal path," unify duplicated systems, or audit architecture before a refactor. Emits a proposed unified flowchart plus per-system /make-plan prompts.
---

# Pathfinder

You are an ORCHESTRATOR. Map the codebase into feature-grouped flowcharts, identify duplicated concerns, propose the simplest unified architecture, and hand off per-system plans to `/make-plan`.

You do not write implementation code. You produce diagrams, a duplication report, a proposed unified flowchart, and handoff prompts.

## Delegation Model

Use subagents for *discovery and extraction* (file reading, flow tracing, grep, diagramming). Keep *synthesis* (deciding feature boundaries, picking unification strategies, final flowchart) with the orchestrator. Reject subagent reports that lack source citations and redeploy.

### Subagent Reporting Contract (MANDATORY)

Each subagent response must include:
1. Sources consulted — exact file paths and line ranges read
2. Concrete findings — exact function names, call sites, data flow
3. Mermaid diagram(s) with nodes labeled by `file:line`
4. Confidence note + known gaps

## Output Artifacts

All artifacts go in `PATHFINDER-<YYYY-MM-DD>/` at repo root:
- `00-features.md` — feature inventory with boundaries
- `01-flowcharts/<feature>.md` — one Mermaid flowchart per feature
- `02-duplication-report.md` — cross-cutting duplicated concerns with evidence
- `03-unified-proposal.md` — proposed unified architecture + Mermaid
- `04-handoff-prompts.md` — copy-pasteable `/make-plan` prompts per unified system
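
A one-line scaffold for this layout, assuming a POSIX shell where `date +%F` yields the `YYYY-MM-DD` stamp:

```bash
# Create the dated artifact directory plus the per-feature flowchart subdirectory
mkdir -p "PATHFINDER-$(date +%F)/01-flowcharts"
```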

## Phases

### Phase 0: Feature Discovery (ALWAYS FIRST)

Deploy ONE "Feature Discovery" subagent to:
1. Walk the source tree (not built artifacts) and read the top-level README / CLAUDE.md
2. Propose feature boundaries based on directory structure, import graph, and naming
3. Return a flat list of features with: name, entry points (file:line), core files, brief purpose

Orchestrator reviews the proposal, adjusts boundaries if needed, writes `00-features.md`. Do NOT fan out until feature boundaries are approved.

### Phase 1: Per-Feature Flowcharts (FAN OUT)

Deploy ONE "Flowchart" subagent per feature in parallel. Each receives only its feature's scope. Each must:
1. Trace the feature's primary happy path from entry point to terminal state
2. Identify side effects (DB writes, HTTP calls, file I/O, process spawns)
3. Note error and fallback branches but not let them dominate the diagram
4. Produce a Mermaid `flowchart TD` with every node labeled `Name<br/>file:line`
5. List external dependencies (other features it calls into) at the bottom

Orchestrator writes each flowchart to `01-flowcharts/<feature>.md`. Reject any diagram missing `file:line` labels.

### Phase 2: Duplication Hunt

Deploy TWO subagents in parallel:

**"Within-Feature Duplication"** subagent:
- For each feature, find repeated code/logic patterns inside the feature only
- Report only duplications worth consolidating (ignore trivial repetition)

**"Cross-Feature Duplication"** subagent:
- Compare flowcharts across features for concerns that appear in multiple places
- Examples of what to look for: multiple capture paths, parallel queue implementations, duplicated storage/migration code, repeated agent scaffolding, parallel parsing layers
- For each duplication, report: (a) the concern, (b) every location with `file:line`, (c) why they diverged, (d) whether the divergence is legitimate specialization or accidental

Orchestrator synthesizes both into `02-duplication-report.md`. Every duplication claim must cite ≥2 `file:line` locations.

### Phase 3: Unified Proposal (ORCHESTRATOR)

The orchestrator writes `03-unified-proposal.md` itself — do not delegate synthesis.

For each duplicated concern from Phase 2 that is NOT legitimate specialization:
1. Propose the simplest unified design (one path, one store, one handler — whatever applies)
2. Name the consolidated component and its single entry point
3. Show what each old call site becomes
4. Call out any loss of capability and whether it's acceptable

End the document with ONE combined Mermaid flowchart showing the proposed unified system. Nodes are still labeled with the target `file:line` (new or existing) where knowable.

**Anti-patterns to reject in your own proposal:**
- Adding a new abstraction layer "for flexibility"
- Keeping both old paths behind a feature flag
- Introducing a registry/factory when a switch statement suffices
- Preserving divergent behavior "just in case"

### Phase 4: Per-System Handoff Prompts

For each unified system in the proposal, write a ready-to-run `/make-plan` prompt to `04-handoff-prompts.md`. Each prompt must:
1. State the target unified component and its single entry point
2. List the exact call sites to rewrite (from Phase 2 evidence)
3. Cite the relevant flowchart file from `01-flowcharts/`
4. Include anti-pattern guards specific to this system

Format each as a fenced code block the user can copy directly into `/make-plan`.

## Key Principles

- **Evidence over intuition** — every diagram node and duplication claim cites `file:line`
- **Current state before ideal state** — Phases 0–2 describe what IS; Phase 3 describes what SHOULD BE
- **Simplest unification wins** — prefer deletion over abstraction; prefer one path over configurable paths
- **Specialization is not duplication** — two components serving different trust models or data sources are legitimate even if their code looks similar
- **Handoff, don't implement** — Pathfinder ends at plan prompts; `/make-plan` and `/do` take it from there

## Failure Modes to Prevent

- Drawing flowcharts from memory instead of source — redeploy the subagent with a grep evidence requirement
- Proposing unification of legitimately specialized components — re-examine trust/data-source divergence
- Handoff prompts that lack concrete call sites — rewrite with Phase 2 evidence
- Skipping Phase 0 boundary review — fanning out on bad feature boundaries wastes all of Phase 1

skills/smart-explore/SKILL.md (new file, 190 lines)

---
name: smart-explore
description: Token-optimized structural code search using tree-sitter AST parsing. Use instead of reading full files when you need to understand code structure, find functions, or explore a codebase efficiently.
---

# Smart Explore

Structural code exploration using AST parsing. **This skill overrides your default exploration behavior.** While this skill is active, use smart_search/smart_outline/smart_unfold as your primary tools instead of Read, Grep, and Glob.

**Core principle:** Index first, fetch on demand. Give yourself a map of the code before loading implementation details. The question before every file read should be: "do I need to see all of this, or can I get a structural overview first?" The answer is almost always: get the map.

## Your Next Tool Call

This skill only loads instructions. You must call the MCP tools yourself. Your next action should be one of:

```
smart_search(query="<topic>", path="./src")              -- discover files + symbols across a directory
smart_outline(file_path="<file>")                        -- structural skeleton of one file
smart_unfold(file_path="<file>", symbol_name="<name>")   -- full source of one symbol
```

Do NOT run Grep, Glob, Read, or find to discover files first. `smart_search` walks directories, parses all code files, and returns ranked symbols in one call. It replaces the Glob → Grep → Read discovery cycle.

## 3-Layer Workflow

### Step 1: Search -- Discover Files and Symbols

```
smart_search(query="shutdown", path="./src", max_results=15)
```

**Returns:** Ranked symbols with signatures, line numbers, match reasons, plus folded file views (~2-6k tokens)

```
-- Matching Symbols --
function performGracefulShutdown (services/infrastructure/GracefulShutdown.ts:56)
function httpShutdown (services/infrastructure/HealthMonitor.ts:92)
method WorkerService.shutdown (services/worker-service.ts:846)

-- Folded File Views --
services/infrastructure/GracefulShutdown.ts (7 symbols)
services/worker-service.ts (12 symbols)
```

This is your discovery tool. It finds relevant files AND shows their structure. No Glob/find pre-scan needed.

**Parameters:**

- `query` (string, required) -- What to search for (function name, concept, class name)
- `path` (string) -- Root directory to search (defaults to cwd)
- `max_results` (number) -- Max matching symbols, default 20, max 50
- `file_pattern` (string, optional) -- Filter to specific files/paths

### Step 2: Outline -- Get File Structure

```
smart_outline(file_path="services/worker-service.ts")
```

**Returns:** Complete structural skeleton -- all functions, classes, methods, properties, imports (~1-2k tokens per file)

**Skip this step** when Step 1's folded file views already provide enough structure. Most useful for files not covered by the search results.

**Parameters:**

- `file_path` (string, required) -- Path to the file

### Step 3: Unfold -- See Implementation

Review symbols from Steps 1-2. Pick the ones you need. Unfold only those:

```
smart_unfold(file_path="services/worker-service.ts", symbol_name="shutdown")
```

**Returns:** Full source code of the specified symbol including JSDoc, decorators, and complete implementation (~400-2,100 tokens depending on symbol size). AST node boundaries guarantee completeness regardless of symbol size — unlike Read + agent summarization, which may truncate long methods.

**Parameters:**

- `file_path` (string, required) -- Path to the file (as returned by search/outline)
- `symbol_name` (string, required) -- Name of the function/class/method to expand

## When to Use Standard Tools Instead

Use these only when smart_* tools are the wrong fit:

- **Grep:** Exact string/regex search ("find all TODO comments", "where is `ensureWorkerStarted` defined?")
- **Read:** Small files under ~100 lines, non-code files (JSON, markdown, config)
- **Glob:** File path patterns ("find all test files")
- **Explore agent:** When you need synthesized understanding across 6+ files, architecture narratives, or answers to open-ended questions like "how does this entire system work end-to-end?" Smart-explore is a scalpel — it answers "where is this?" and "show me that." It doesn't synthesize cross-file data flows, design decisions, or edge cases across an entire feature.

For code files over ~100 lines, prefer smart_outline + smart_unfold over Read.

## Workflow Examples

**Discover how a feature works (cross-cutting):**

```
1. smart_search(query="shutdown", path="./src")
   -> 14 symbols across 7 files, full picture in one call
2. smart_unfold(file_path="services/infrastructure/GracefulShutdown.ts", symbol_name="performGracefulShutdown")
   -> See the core implementation
```

**Navigate a large file:**

```
1. smart_outline(file_path="services/worker-service.ts")
   -> 1,466 tokens: 12 functions, WorkerService class with 24 members
2. smart_unfold(file_path="services/worker-service.ts", symbol_name="startSessionProcessor")
   -> 1,610 tokens: the specific method you need
Total: ~3,076 tokens vs ~12,000 to Read the full file
```

**Write documentation about code (hybrid workflow):**

```
1. smart_search(query="feature name", path="./src")  -- discover all relevant files and symbols
2. smart_outline on key files                        -- understand structure
3. smart_unfold on important functions               -- get implementation details
4. Read on small config/markdown/plan files          -- get non-code context
```

Use smart_* tools for code exploration, Read for non-code files. Mix freely.

**Exploration then precision:**

```
1. smart_search(query="session", path="./src", max_results=10)
   -> 10 ranked symbols: SessionMetadata, SessionQueueProcessor, SessionSummary...
2. Pick the relevant one, unfold it
```

## Token Economics

| Approach | Tokens | Use Case |
|----------|--------|----------|
| smart_outline | ~1,000-2,000 | "What's in this file?" |
| smart_unfold | ~400-2,100 | "Show me this function" |
| smart_search | ~2,000-6,000 | "Find all X across the codebase" |
| search + unfold | ~3,000-8,000 | End-to-end: find and read (the primary workflow) |
| Read (full file) | ~12,000+ | When you truly need everything |
| Explore agent | ~39,000-59,000 | Cross-file synthesis with narrative |

**4-8x savings** on file understanding (outline + unfold vs Read). **11-18x savings** on codebase exploration vs the Explore agent. The narrower the query, the wider the gap — a 27-line function costs 55x less to read via unfold than via an Explore agent, because the agent still reads the entire file.

## Language Support

Smart-explore uses **tree-sitter AST parsing** for structural analysis. Unsupported file types fall back to text-based search.

### Bundled Languages

| Language | Extensions |
|----------|-----------|
| JavaScript | `.js`, `.mjs`, `.cjs` |
| TypeScript | `.ts` |
| TSX / JSX | `.tsx`, `.jsx` |
| Python | `.py`, `.pyw` |
| Go | `.go` |
| Rust | `.rs` |
| Ruby | `.rb` |
| Java | `.java` |
| C | `.c`, `.h` |
| C++ | `.cpp`, `.cc`, `.cxx`, `.hpp`, `.hh` |

Files with unrecognized extensions are parsed as plain text — `smart_search` still works (grep-style), but `smart_outline` and `smart_unfold` will not extract structured symbols.

### Custom Grammars (`.claude-mem.json`)

You can register additional tree-sitter grammars for file types not in the bundled list. Create or update `.claude-mem.json` in your project root:

```json
{
  "grammars": {
    ".sol": "tree-sitter-solidity",
    ".graphql": "tree-sitter-graphql"
  }
}
```

Each key is a file extension; each value is the npm package name of the tree-sitter grammar. The grammar must be installed locally (`npm install tree-sitter-solidity`). Once registered, `smart_outline` and `smart_unfold` will parse those extensions structurally instead of falling back to plain text.

### Markdown Special Support

Markdown files (`.md`, `.mdx`) receive special handling beyond the generic plain-text fallback:

- **`smart_outline`** — extracts headings (`#`, `##`, `###`) as the symbol tree. Use it to navigate long documents without reading the full file.
- **`smart_search`** — searches within code fences as well as prose, so queries for function names inside ` ```ts ` blocks work as expected.
- **`smart_unfold`** — expands heading sections rather than function bodies; each section up to the next same-level heading is returned as a chunk.
- **Frontmatter** — YAML frontmatter (lines between leading `---` delimiters) is included in `smart_outline` output under a synthetic `frontmatter` symbol so metadata like `title:` and `description:` is visible without reading the whole file.

skills/timeline-report/SKILL.md (new file, 211 lines)

---
name: timeline-report
description: Generate a "Journey Into [Project]" narrative report analyzing a project's entire development history from claude-mem's timeline. Use when asked for a timeline report, project history analysis, development journey, or full project report.
---

# Timeline Report

Generate a comprehensive narrative analysis of a project's entire development history using claude-mem's persistent memory timeline.

## When to Use

Use when users ask for:

- "Write a timeline report"
- "Journey into [project]"
- "Analyze my project history"
- "Full project report"
- "Summarize the entire development history"
- "What's the story of this project?"

## Prerequisites

The claude-mem worker must be running. The project must have claude-mem observations recorded.

**Resolve the worker port** (do this once at the start and reuse `$WORKER_PORT` in every curl call below):

```bash
WORKER_PORT="${CLAUDE_MEM_WORKER_PORT:-$(node -e "const fs=require('fs'),p=require('path'),os=require('os');const uid=(typeof process.getuid==='function'?process.getuid():77);const fallback=String(37700+(uid%100));try{const s=JSON.parse(fs.readFileSync(p.join(os.homedir(),'.claude-mem','settings.json'),'utf-8'));process.stdout.write(String(s.CLAUDE_MEM_WORKER_PORT||fallback));}catch{process.stdout.write(fallback);}" 2>/dev/null)}"
```

This honors the `CLAUDE_MEM_WORKER_PORT` env var, then `~/.claude-mem/settings.json`, then falls back to the per-UID default `37700 + (uid % 100)` — matching how the worker itself picks its port. Required for multi-account setups (#2101) and any user who has overridden the default port (#2103).
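
As a quick sanity check, you can hit the worker's search endpoint (the same one the Error Handling section below uses) on the resolved port:

```bash
# Verify the worker answers on the resolved port before fetching anything large
if curl -sf "http://localhost:${WORKER_PORT}/api/search?query=*&limit=1" >/dev/null; then
  echo "worker responding on port ${WORKER_PORT}"
else
  echo "worker not responding on port ${WORKER_PORT}" >&2
fi
```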

## Workflow

### Step 1: Determine the Project Name

Ask the user which project to analyze if not obvious from context. The project name is typically the directory name of the project (e.g., "tokyo", "my-app"). If the user says "this project", use the current working directory's basename.

**Worktree Detection:** Before using the directory basename, check if the current directory is a git worktree. In a worktree, the data source is the **parent project**, not the worktree directory itself. Run:

```bash
git_dir=$(git rev-parse --git-dir 2>/dev/null)
git_common_dir=$(git rev-parse --git-common-dir 2>/dev/null)
if [ "$git_dir" != "$git_common_dir" ]; then
  # We're in a worktree — resolve the parent project name
  parent_project=$(basename "$(dirname "$git_common_dir")")
  echo "Worktree detected. Parent project: $parent_project"
else
  parent_project=$(basename "$PWD")
fi
echo "$parent_project"
```

If a worktree is detected, use `$parent_project` (the basename of the parent repo) as the project name for all API calls. Inform the user: "Detected git worktree. Using parent project '[name]' as the data source."

### Step 2: Fetch the Full Timeline

Use Bash to fetch the complete timeline from the claude-mem worker API:

```bash
curl -s "http://localhost:${WORKER_PORT}/api/context/inject?project=PROJECT_NAME&full=true"
```

This returns the entire compressed timeline -- every observation, session boundary, and summary across the project's full history. The response is pre-formatted markdown optimized for LLM consumption.

**Token estimates:** The full timeline size depends on the project's history:
- Small project (< 1,000 observations): ~20-50K tokens
- Medium project (1,000-10,000 observations): ~50-300K tokens
- Large project (10,000-35,000 observations): ~300-750K tokens

If the response is empty or returns an error, the worker may not be running or the project name may be wrong. Try `curl -s "http://localhost:${WORKER_PORT}/api/search?query=*&limit=1"` to verify the worker is healthy.

### Step 3: Estimate Token Count

Before proceeding, estimate the token count of the fetched timeline (roughly 1 token per 4 characters). Report this to the user:

```
Timeline fetched: ~X observations, estimated ~Yk tokens.
This analysis will consume approximately Yk input tokens + ~5-10k output tokens.
Proceed? (y/n)
```
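
One way to produce that estimate, assuming you saved the Step 2 response to a file, is the 4-characters-per-token heuristic in shell arithmetic:

```bash
# Save the timeline once, then estimate tokens at ~4 chars/token
curl -s "http://localhost:${WORKER_PORT}/api/context/inject?project=PROJECT_NAME&full=true" -o /tmp/timeline.md
chars=$(wc -c < /tmp/timeline.md)
echo "~$((chars / 4)) tokens (${chars} characters)"
```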

Wait for user confirmation before continuing if the timeline exceeds 100K tokens.

### Step 4: Analyze with a Subagent

Deploy an Agent (using the Task tool) with the full timeline and the following analysis prompt. Pass the ENTIRE timeline as context to the agent. The agent should also be instructed to query the SQLite database at `~/.claude-mem/claude-mem.db` for the Token Economics section.

**Agent prompt:**

```
You are a technical historian analyzing a software project's complete development timeline from claude-mem's persistent memory system. The timeline below contains every observation, session boundary, and summary recorded across the project's entire history.

You also have access to the claude-mem SQLite database at ~/.claude-mem/claude-mem.db. Use it to run queries for the Token Economics & Memory ROI section. The database has an "observations" table with columns: id, memory_session_id, project, text, type, title, subtitle, facts, narrative, concepts, files_read, files_modified, prompt_number, discovery_tokens, created_at, created_at_epoch, source_tool, source_input_summary.

Write a comprehensive narrative report titled "Journey Into [PROJECT_NAME]" that covers:

## Required Sections

1. **Project Genesis** -- When and how the project started. What were the first commits, the initial vision, the founding technical decisions? What problem was being solved?

2. **Architectural Evolution** -- How did the architecture change over time? What were the major pivots? Why did they happen? Trace the evolution from initial design through each significant restructuring.

3. **Key Breakthroughs** -- Identify the "aha" moments: when a difficult problem was finally solved, when a new approach unlocked progress, when a prototype first worked. These are the observations where the tone shifts from investigation to resolution.

4. **Work Patterns** -- Analyze the rhythm of development. Identify debugging cycles (clusters of bug fixes), feature sprints (rapid observation sequences), refactoring phases (architectural changes without new features), and exploration phases (many discoveries without changes).

5. **Technical Debt** -- Track where shortcuts were taken and when they were paid back. Identify patterns of accumulation (rapid feature work) and resolution (dedicated refactoring sessions).

6. **Challenges and Debugging Sagas** -- The hardest problems encountered. Multi-session debugging efforts, architectural dead-ends that required backtracking, platform-specific issues that took days to resolve.

7. **Memory and Continuity** -- How did persistent memory (claude-mem itself, if applicable) affect the development process? Were there moments where recalled context from prior sessions saved significant time or prevented repeated mistakes?

8. **Token Economics & Memory ROI** -- Quantitative analysis of how memory recall saved work:
   - Query the database directly for these metrics using `sqlite3 ~/.claude-mem/claude-mem.db`
   - Count total discovery_tokens across all observations (the original cost of all work)
   - Count sessions that had context injection available (sessions after the first)
   - Calculate the compression ratio: average discovery_tokens vs average read_tokens per observation
   - Identify the highest-value observations (highest discovery_tokens -- these are the most expensive decisions, bugs, and discoveries that memory prevents re-doing)
   - Identify explicit recall events (observations where source_tool contains "search", "smart_search", "get_observations", "timeline", or where narrative mentions "recalled", "from memory", "previous session")
   - Estimate passive recall savings: each session with context injection receives ~50 observations. Use a 30% relevance factor (a conservative estimate that 30% of injected context prevents re-work). Savings = sessions_with_context × avg_discovery_value_of_50_obs_window × 0.30
   - Estimate explicit recall savings: ~10K tokens per explicit recall query
   - Calculate net ROI: total_savings / total_read_tokens_invested
   - Present as a table with monthly breakdown
   - Highlight the top 5 most expensive observations by discovery_tokens -- these represent the highest-value memories in the system (architecture decisions, hard bugs, implementation plans that cost 100K+ tokens to produce originally)

Use these SQL queries as a starting point:
```sql
-- Total discovery tokens
SELECT SUM(discovery_tokens) FROM observations WHERE project = 'PROJECT_NAME';

-- Sessions with context available (not the first session)
SELECT COUNT(DISTINCT memory_session_id) FROM observations WHERE project = 'PROJECT_NAME';

-- Average tokens per observation
SELECT AVG(discovery_tokens) as avg_discovery, AVG(LENGTH(title || COALESCE(subtitle,'') || COALESCE(narrative,'') || COALESCE(facts,'')) / 4) as avg_read FROM observations WHERE project = 'PROJECT_NAME' AND discovery_tokens > 0;

-- Top 5 most expensive observations (highest-value memories)
SELECT id, title, discovery_tokens FROM observations WHERE project = 'PROJECT_NAME' ORDER BY discovery_tokens DESC LIMIT 5;

-- Monthly breakdown
SELECT strftime('%Y-%m', created_at) as month, COUNT(*) as obs, SUM(discovery_tokens) as total_discovery, COUNT(DISTINCT memory_session_id) as sessions FROM observations WHERE project = 'PROJECT_NAME' GROUP BY month ORDER BY month;

-- Explicit recall events
SELECT COUNT(*) FROM observations WHERE project = 'PROJECT_NAME' AND (source_tool LIKE '%search%' OR source_tool LIKE '%timeline%' OR source_tool LIKE '%get_observations%' OR narrative LIKE '%recalled%' OR narrative LIKE '%from memory%' OR narrative LIKE '%previous session%');
```

9. **Timeline Statistics** -- Quantitative summary:
   - Date range (first observation to last)
   - Total observations and sessions
   - Breakdown by observation type (features, bug fixes, discoveries, decisions, changes)
   - Most active days/weeks
   - Longest debugging sessions

10. **Lessons and Meta-Observations** -- What patterns emerge from the full history? What would a new developer learn about this codebase from reading the timeline? What recurring themes or principles guided development?

## Writing Style

- Write as a technical narrative, not a list of bullet points
- Use specific observation IDs and timestamps when referencing events (e.g., "On Dec 14 (#26766), the root cause was finally identified...")
- Connect events across time -- show how early decisions created later consequences
- Be honest about struggles and dead ends, not just successes
- Target 3,000-6,000 words depending on project size
- Use markdown formatting with headers, emphasis, and code references where appropriate

## Important

- Analyze the ENTIRE timeline chronologically -- do not skip early history
- Look for narrative arcs: problem -> investigation -> solution
- Identify turning points where the project's direction fundamentally changed
- Note any observations about the development process itself (tooling, workflow, collaboration patterns)

Here is the complete project timeline:

[TIMELINE CONTENT GOES HERE]
```

### Step 5: Save the Report

Save the agent's output as a markdown file. Default location:

```
./journey-into-PROJECT_NAME.md
```

Or, if the user specified a different output path, use that instead.

### Step 6: Report Completion

Tell the user:
- Where the report was saved
- The approximate token cost (input timeline + output report)
- The date range covered
- Number of observations analyzed

## Error Handling

- **Empty timeline:** "No observations found for project 'X'. Check the project name with: `curl -s \"http://localhost:${WORKER_PORT}/api/search?query=*&limit=1\"`"
- **Worker not running:** "The claude-mem worker is not responding on port ${WORKER_PORT}. Start it with your usual method or check `ps aux | grep worker-service`."
- **Timeline too large:** For projects with 50,000+ observations, the timeline may exceed context limits. The current endpoint (`/api/context/inject?project=X&full=true`) returns all observations with no date filtering, so for extremely large projects suggest analyzing the history in time-windowed segments.

## Example

User: "Write a journey report for the tokyo project"

1. Fetch: `curl -s "http://localhost:${WORKER_PORT}/api/context/inject?project=tokyo&full=true"`
2. Estimate: "Timeline fetched: ~34,722 observations, estimated ~718K tokens. Proceed?"
3. User confirms
4. Deploy analysis agent with full timeline
5. Save to `./journey-into-tokyo.md`
6. Report: "Report saved. Analyzed 34,722 observations spanning Oct 2025 - Mar 2026 (~718K input tokens, ~8K output tokens)."

skills/version-bump/SKILL.md (new file, 63 lines)

---
name: claude-code-plugin-release
description: Automated semantic versioning and release workflow for Claude Code plugins. Handles version increments across package.json, marketplace.json, plugin.json manifests, npm publishing (so `npx claude-mem@X.Y.Z` resolves), build verification, git tagging, GitHub releases, and changelog generation.
---

# Version Bump & Release Workflow

**IMPORTANT:** Plan and write detailed release notes before starting.

**CRITICAL:** Commit EVERYTHING (including build artifacts). At the end of this workflow, NOTHING should be left uncommitted or unpushed. Run `git status` at the end to verify.

## Preparation

1. **Analyze**: Determine if the change is **PATCH** (bug fixes), **MINOR** (features), or **MAJOR** (breaking).
2. **Environment**: Identify the repository owner/name from `git remote -v`.
3. **Paths — every file that carries the version string**:
   - `package.json` — **the npm/npx-published version** (`npx claude-mem@X.Y.Z` resolves from this)
   - `plugin/package.json` — bundled plugin runtime deps
   - `.claude-plugin/marketplace.json` — version inside `plugins[0].version`
   - `.claude-plugin/plugin.json` — top-level Claude-plugin manifest
   - `plugin/.claude-plugin/plugin.json` — bundled Claude-plugin manifest
   - `.codex-plugin/plugin.json` — Codex-plugin manifest
   - `openclaw/openclaw.plugin.json` — OpenClaw plugin manifest

Verify coverage before editing: `git grep -l "\"version\": \"<OLD>\""` should list all seven. If a new manifest has been added since this doc was last updated, update this list.
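
A minimal bump sketch, under the assumption that all seven manifests store the version as a literal `"version": "X.Y.Z"` string (which the grep check above verifies), so a plain text substitution is safe:

```bash
# Example versions; substitute the real ones
OLD="12.6.0"
NEW="13.0.0"
for f in $(git grep -l "\"version\": \"${OLD}\""); do
  # GNU sed shown; on macOS use: sed -i '' ...
  sed -i "s/\"version\": \"${OLD}\"/\"version\": \"${NEW}\"/" "$f"
done
git grep -n "\"version\": \"${NEW}\""   # all seven manifests should now match
```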

## Workflow

1. **Update**: Increment the version string in every path above. Do NOT touch `CHANGELOG.md` — it's regenerated.
2. **Verify**: `git grep -n "\"version\": \"<NEW>\""` — confirm all seven files match. `git grep -n "\"version\": \"<OLD>\""` — should return zero hits.
3. **Build**: `npm run build` to regenerate artifacts.
4. **Commit**: `git add -A && git commit -m "chore: bump version to X.Y.Z"`.
5. **Tag**: `git tag -a vX.Y.Z -m "Version X.Y.Z"`.
6. **Push**: `git push origin main && git push origin vX.Y.Z`.
7. **Publish to npm** (this is what makes `npx claude-mem@X.Y.Z` work):
   ```bash
   npm publish
   ```
   The `prepublishOnly` script re-runs `npm run build` automatically. Confirm the publish succeeded:
   ```bash
   npm view claude-mem@X.Y.Z version   # should print X.Y.Z
   ```
   Alternative: `npm run release:patch` / `release:minor` / `release:major` invokes `np` and handles tag+push+publish in one shot — use it ONLY if you skipped steps 4–6; otherwise `np` will error on the existing tag.
8. **GitHub release**: `gh release create vX.Y.Z --title "vX.Y.Z" --notes "RELEASE_NOTES"`.
9. **Changelog**: Regenerate via the project's changelog script:
   ```bash
   npm run changelog:generate
   ```
   (Runs `node scripts/generate-changelog.js`, which pulls releases from the GitHub API and rewrites `CHANGELOG.md`.)
10. **Sync changelog**: Commit and push the updated `CHANGELOG.md`.
11. **Notify**: `npm run discord:notify vX.Y.Z` if applicable.
12. **Finalize**: `git status` — the working tree must be clean.

## Checklist

- [ ] All seven config files have matching versions
- [ ] `git grep` for the old version returns zero hits
- [ ] `npm run build` succeeded
- [ ] Git tag created and pushed
- [ ] **`npm publish` succeeded and `npm view claude-mem@X.Y.Z version` confirms it** (so `npx claude-mem@X.Y.Z` resolves)
- [ ] GitHub release created with notes
- [ ] `CHANGELOG.md` updated and pushed
- [ ] `git status` shows clean tree

skills/version-bump/scripts/generate_changelog.js (new executable file, 34 lines)

#!/usr/bin/env node
const fs = require('fs');

function generate() {
  try {
    // Read the full stdin stream (fd 0) as UTF-8
    const input = fs.readFileSync(0, 'utf8');
    if (!input || input.trim() === '') {
      process.stderr.write('No input received on stdin\n');
      process.exit(1);
    }

    const releases = JSON.parse(input);
    const lines = ['# Changelog', '', 'All notable changes to this project.', ''];

    releases.slice(0, 50).forEach(r => {
      const date = r.published_at.split('T')[0];
      lines.push(`## [${r.tag_name}] - ${date}`);
      lines.push('');
      if (r.body) lines.push(r.body.trim());
      lines.push('');
    });

    process.stdout.write(lines.join('\n') + '\n');
  } catch (err) {
    process.stderr.write(`Error generating changelog: ${err.message}\n`);
    process.exit(1);
  }
}

generate();
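
The script expects a JSON array of releases on stdin, with `tag_name`, `published_at`, and `body` fields. A plausible invocation, assuming the `gh` CLI is authenticated and `OWNER/REPO` stands in for the real repository:

```bash
# Feed the GitHub releases API output (a JSON array with tag_name,
# published_at, body) into the script and write CHANGELOG.md
gh api "repos/OWNER/REPO/releases?per_page=50" \
  | node skills/version-bump/scripts/generate_changelog.js > CHANGELOG.md
```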