feat: initial release of handover plugin for Claude Code

- /handover skill: creates living document preserving context across sessions
- /takeover skill: recovers from forgotten handovers by reading old transcripts
- Statusline: shows context usage % and topic segmentation
- Background daemon uses Haiku for cheap conversation analysis
- Tracks suppressed issues, learnings, dead ends, open questions
- Auto WIP commits to prevent accidental reverts
Commit cec8f1ca7d — 2026-02-05 11:10:54 +01:00
9 changed files with 841 additions and 0 deletions

.gitignore (vendored, new file, 10 lines)

# Cache files
segment_cache.json
debug_input.json
current_session*.txt
# Node
node_modules/
# OS
.DS_Store

README.md (new file, 235 lines)

# Handover
A Claude Code plugin for preserving context, learnings, and state across agent sessions. When you're running low on context (~80% used), invoke `/handover` to create a living document that the next agent can pick up seamlessly.
## Features
- **Living handover document** (`.claude/handover.md`) that accumulates learnings across generations
- **Context-aware statusline** showing usage percentage and topic segmentation
- **Suppressed issue tracking** — catches when agents hide problems instead of fixing them
- **Task preservation** — captures in-flight tasks before session ends
- **WIP commits** — automatically commits current state so next agent can't accidentally revert
- **Takeover skill** — for when you forgot to handover and need to recover from a closed session
## Screenshots
### Statusline with context segmentation
![Statusline showing context usage and topic segments](assets/statusbar1.png)
![Statusline with different segments](assets/statusbar2.png)
### Handover document
![Handover skill generating the living document](assets/handover.png)
## Installation
### 1. Copy the skill to your project
```bash
# In your project directory
mkdir -p .claude/skills/handover
curl -o .claude/skills/handover/SKILL.md https://raw.githubusercontent.com/marcj/handover/main/skills/handover/SKILL.md
# Optional: takeover skill for recovering from forgotten handovers
mkdir -p .claude/skills/takeover
curl -o .claude/skills/takeover/SKILL.md https://raw.githubusercontent.com/marcj/handover/main/skills/takeover/SKILL.md
```
Or for global installation (all projects):
```bash
mkdir -p ~/.claude/skills/handover ~/.claude/skills/takeover
curl -o ~/.claude/skills/handover/SKILL.md https://raw.githubusercontent.com/marcj/handover/main/skills/handover/SKILL.md
curl -o ~/.claude/skills/takeover/SKILL.md https://raw.githubusercontent.com/marcj/handover/main/skills/takeover/SKILL.md
```
### 2. Install the statusline (optional but recommended)
The statusline shows context usage and topic segmentation at a glance.
```bash
# Create statusline directory
mkdir -p ~/.claude/statusline
# Download the scripts
curl -o ~/.claude/statusline/ctx_monitor.js https://raw.githubusercontent.com/marcj/handover/main/statusline/ctx_monitor.js
curl -o ~/.claude/statusline/ctx_daemon.js https://raw.githubusercontent.com/marcj/handover/main/statusline/ctx_daemon.js
# Make executable
chmod +x ~/.claude/statusline/ctx_monitor.js ~/.claude/statusline/ctx_daemon.js
# Add to Claude Code settings
jq '. + {"statusLine": {"type": "command", "command": "~/.claude/statusline/ctx_monitor.js"}}' ~/.claude/settings.json > /tmp/settings.json && mv /tmp/settings.json ~/.claude/settings.json
```
Or manually add to `~/.claude/settings.json`:
```json
{
"statusLine": {
"type": "command",
"command": "~/.claude/statusline/ctx_monitor.js"
}
}
```
## Usage
### Handover (end of session)
When you're running low on context or ending a work session:
```
/handover
```
Or naturally mention it:
- "We need to handover"
- "Let's handover to the next agent"
- "You keep making the same mistake, we need to handover to a fresh agent"
The agent will:
1. Gather context from the conversation
2. Scan for suppressed issues (tests skipped, code commented out, etc.)
3. Capture active tasks
4. Propose learnings extracted from the session
5. Curate existing knowledge (verify what's still relevant)
6. Write/update `.claude/handover.md`
7. Create a WIP commit
### Takeover (new session, forgot to handover)
If you closed a session without running `/handover`:
```
/takeover
```
This will:
1. Find recent session transcripts
2. Let you pick which session to recover
3. Run the handover protocol on that closed session
4. Create the handover document retroactively
### Starting a new session
The next agent should begin with:
```
Read .claude/handover.md and continue the work
```
The handover document includes an init checklist the agent must follow.
## Handover Document Structure
`.claude/handover.md` contains:
- **Init Checklist** — steps the next agent must complete before starting
- **Architecture Snapshot** — high-level project structure (evolves slowly)
- **Current State** — branch, dirty files, failing tests, verification command
- **Next Steps** — prioritized task list
- **Alignment Check** — goal, scope boundary, plan file reference
- **Tasks** — captured from Claude's task list
- **Learnings** — accumulated knowledge (⚠️ unverified, ✅ verified)
- **Dead Ends** — approaches that didn't work
- **Suppressed Issues** — problems hidden instead of fixed
- **Open Questions** — unresolved technical decisions
- **Generation Log** — audit trail of all handovers
## Statusline
The statusline shows:
```
Opus 4.5 │ 82.0% used (164k) ▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒▒░░░░ ● handover skill ● statusline ● free
```
- Model name
- Context usage percentage and token count
- Visual bar with topic segments
- Legend showing what each segment represents
The segmentation is powered by a background daemon that calls **Claude Haiku** (the cheapest/fastest model) via `claude --print` to analyze the conversation topics. It only re-analyzes when context increases by 10%+ and at least 60 seconds have passed, keeping costs minimal.
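The re-analysis gate can be sketched as follows (the thresholds mirror those in `ctx_daemon.js`; the cache shape is illustrative):

```javascript
// Sketch of the daemon's throttle: re-analyze only when the context has grown
// by MIN_PCT_DELTA percentage points AND at least MIN_INTERVAL_MS has elapsed.
const MIN_INTERVAL_MS = 60 * 1000;
const MIN_PCT_DELTA = 10;

function shouldReanalyze(prev, pct, now) {
  if (!prev) return true; // no cached analysis for this session yet
  return pct - prev.lastPct >= MIN_PCT_DELTA &&
         now - prev.lastTime >= MIN_INTERVAL_MS;
}
```

Both conditions must hold, so a fast-growing conversation still triggers at most one analysis per minute.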
## How It Works
### Knowledge Preservation
The handover document is a **living document** — it's not just a snapshot, it accumulates knowledge across agent generations:
- **Learnings** are appended with each handover, marked as unverified (⚠️)
- Later agents can verify learnings against the codebase and mark them ✅
- Stale entries are pruned when the agent confirms they no longer apply
### Suppressed Issue Detection
Agents sometimes "solve" problems by hiding them:
- Removing tests from whitelists
- Adding `.skip` to failing tests
- Commenting out problematic code
- Adding workarounds with "doesn't support yet" comments
The handover skill scans for these patterns and tracks them in the **Suppressed Issues** section. These can only be removed when the underlying issue is actually fixed.
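As a rough illustration only (the skill itself relies on the agent reading the conversation, not on a regex pass), a mechanical scan for common suppression markers could look like:

```javascript
// Hypothetical helper: flag source lines containing common suppression markers.
const SUPPRESSION_PATTERNS = [
  /\.skip\s*\(/,                    // it.skip(...) / describe.skip(...)
  /\b(xit|xdescribe)\s*\(/,         // disabled jest/jasmine tests
  /@disabled\b/i,                   // annotation-style disables
  /(\/\/|#)\s*(TODO|FIXME|HACK)\b/, // commented-out problem markers
];

function findSuppressions(source) {
  return source.split("\n").flatMap((line, i) =>
    SUPPRESSION_PATTERNS.some((re) => re.test(line))
      ? [{ line: i + 1, text: line.trim() }]
      : []
  );
}
```

A pattern list like this catches the mechanical cases; the conversational cases ("let me remove this for now") still need the agent's judgment.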
### No Interruptions
The handover process is designed to be fast and non-intrusive:
- Agent extracts information from conversation context (no re-reading files)
- Agent proposes learnings instead of asking you to recall
- Agent curates existing entries by checking the codebase, not asking you
- Only asks for human judgment where needed (direction, scope)
## Configuration
### Segmentation Thresholds
In `ctx_daemon.js`:
```javascript
const MIN_INTERVAL_MS = 60 * 1000; // 60 seconds between analyses
const MIN_PCT_DELTA = 10; // 10% context increase before re-analyze
```
### Daemon Idle Timeout
The daemon automatically shuts down after 10 minutes of inactivity:
```javascript
const IDLE_TIMEOUT_MS = 10 * 60 * 1000;
```
## Troubleshooting
### Statusline shows wrong segments
The daemon caches segments by session ID. If you see segments from another session:
```bash
rm ~/.claude/statusline/segment_cache.json
pkill -f ctx_daemon.js
```
### Daemon not starting
Check if it's running:
```bash
ps aux | grep ctx_daemon
```
The statusline auto-starts the daemon on first refresh. Check for errors:
```bash
node ~/.claude/statusline/ctx_daemon.js
```
### Context percentage doesn't match /context
The statusline uses Claude's provided `used_percentage` from the input data. If there's a mismatch, check `~/.claude/statusline/debug_input.json` (if debug logging is enabled).
## License
MIT

assets/handover.png (binary file, 463 KiB, not shown)

assets/statusbar1.png (binary file, 74 KiB, not shown)

assets/statusbar2.png (binary file, 94 KiB, not shown)

skills/handover/SKILL.md (new file, 177 lines)

---
name: handover
description: Generate a handover prompt for the next worker when ending a work session. Use this when the user says "handover", "next worker", or wants to save context for continuing later.
user-invocable: true
---
# Handover Skill
Curate the living handover document (`.claude/handover.md`) and commit WIP state so the next agent generation can continue seamlessly. The handover document is institutional memory — it accumulates learnings across sessions and is pruned each handover to prevent rot.
## Phase 1: Gather Context (automated)
Do all of these silently, no user interaction yet. **Do NOT run git commands** — you have the full conversation context and know what happened this session. The next agent can run `git log` themselves.
1. Read `.claude/handover.md` if it exists (this is the living document from previous generations)
2. Check if a plan file is active in the current session. If so, note its path
3. Run `TaskList` to capture all active tasks (pending and in-progress). These are session-local and will be lost when the session ends — they MUST be preserved in the handover document.
4. **Scan for suppressed issues** in the conversation context. These are issues that were avoided rather than fixed. Look for:
- Tests removed from whitelists or skip-lists
- Tests marked with `.skip`, `xit`, `xdescribe`, `@disabled`, or similar
- Test files deleted or excluded
- Code commented out with TODO/FIXME/HACK annotations
- Error handling that catches and silences instead of fixing
- Workarounds with comments like "doesn't support this yet", "crashes the compiler"
- Features or cases explicitly excluded ("let me remove this", "skip this for now")
This is critical — suppressed issues are the easiest things to lose permanently.
5. From conversation context, identify:
- What was worked on this session
- Which files were modified and why
- Current branch name
- Any test failures encountered (do NOT re-run tests)
- Any discoveries, gotchas, or dead ends
- Any unresolved technical decisions
## Phase 2: Curate Knowledge (automated)
If `.claude/handover.md` exists with entries, curate them **yourself** — don't ask the user. You have access to the codebase; they don't remember every implementation detail.
For each section with entries, verify relevance by checking the code:
- **Learnings**: Check if the code/pattern described still exists. If refactored away or behavior changed, it's stale.
- **Dead Ends**: Check if the constraint that caused the dead end still applies. If the blocker was removed, it's stale.
- **Open Questions**: Check if the code now reflects a clear decision. If so, it's resolved.
- **Suppressed Issues**: Check if the suppression (skipped test, commented code, etc.) is still present. If the issue was properly fixed and the suppression removed, it can be retired.
When a learning is confirmed still true, upgrade it from ⚠️ to ✅.
**No user interaction.** Just do the curation and move on.
## Phase 3: Write the Handover Document
Write/update `.claude/handover.md` with this structure:
```markdown
# Handover
## Init Checklist
<!-- Next agent: complete these steps IN ORDER before doing anything else. -->
1. Read this entire handover document
2. Read `CLAUDE.md` for project rules
3. Read the active plan file (if listed below) — this is the original vision
4. Run the verification command (below) to confirm expected state
5. Recreate tasks from the Tasks section via `TaskCreate`
6. Verify any unverified learnings you plan to rely on (marked with ⚠️)
7. Check the Suppressed Issues section — do not re-suppress these without noting it
## Architecture Snapshot
<!-- Slowly evolving — only update when structure actually changes. Max 15 lines. -->
<!-- Format: indented tree for structure, arrow chains for data flow, one-line annotations for key constraints. -->
{agent generates this on Gen 1 from CLAUDE.md and codebase knowledge, then only updates when architecture changes}
## Current State
<!-- Overwritten each handover -->
- **Branch**: {current branch name}
- **Active plan**: {path to plan file, or "none"}
- **Working on**: {one-line summary}
- **Dirty files**:
- `path/to/file.ts` — {one-line annotation of what changed}
- ...
- **Failing tests**: {test names + failure reason, or "none known"}
- **Environment notes**: {docker state, rebuilt packages, etc., or "clean"}
- **Verification command**: {single command next agent should run first to confirm state}
## Next Steps
<!-- Overwritten each handover. This is your immediate roadmap — check back here if you feel yourself drifting. -->
1. {Highest priority item with file locations}
2. ...
3. ...
## Alignment Check
<!-- Read this when you're mid-session and unsure if you're on track. -->
- **Goal**: {one-sentence description of what this work is trying to achieve}
- **Scope boundary**: {what is NOT in scope — things to avoid getting pulled into}
- **Plan file**: {path, or "none"} — if you're deviating from the plan, stop and tell the user
## Tasks
<!-- Overwritten each handover. Next agent: recreate these via TaskCreate on init. -->
- [ ] {task subject} — {description} [status: pending/in_progress]
- ...
## Learnings
<!-- Append new, remove agent-verified stale. Each entry dated. -->
<!-- ⚠️ = unverified (written by previous agent, not yet confirmed by a later agent) -->
<!-- ✅ = verified (a later agent checked and confirmed this is still true) -->
- ⚠️ [{date}] {learning}
## Dead Ends
<!-- Append new, remove agent-verified stale. Each entry dated. -->
- [{date}] Tried {approach} for {goal} — failed because {reason}
## Suppressed Issues
<!-- Append new, remove when properly fixed. Each entry has file location. -->
- [{date}] `{file:line}` — {what was suppressed and why}
## Open Questions
<!-- Append new, remove when resolved. Each entry dated. -->
- [{date}] {question}
## Generation Log
<!-- One line per handover, never pruned -->
- [{date}] Gen {N}: {one-line summary of what this session did}
```
Rules:
- **Init Checklist**: static, never changes (it's instructions for the next agent)
- **Architecture Snapshot**: slowly evolving. Generate on Gen 1 from CLAUDE.md and what you know. On subsequent handovers, only update if architecture actually changed this session (new modules, renamed paths, new data flows). If nothing changed, leave it untouched. Max 15 lines. Use this format:
- Indented tree for directory/module structure (only top 2 levels, annotate each line)
- Arrow chains for key data flows (`input → transform → output`)
- One-line notes for key constraints or invariants
- **Current State**, **Next Steps**, **Alignment Check**, and **Tasks**: always overwritten completely
- **Learnings**: append new entries with ⚠️ (unverified). During curation (Phase 2), when the agent confirms a learning is still true, upgrade it to ✅. Soft cap ~15 items
- **Dead Ends**: append new, remove stale. Soft cap ~10 items
- **Suppressed Issues**: append new, remove ONLY when the issue is properly fixed (not just suppressed again). Soft cap ~10 items. Include `file:line` so the next agent can find it
- **Open Questions**: append new, remove resolved. Soft cap ~5 items
- **Generation Log**: append only, never prune. This is the audit trail
- Use date format `YYYY-MM-DD`
## Phase 4: WIP Commit
1. Stage all dirty working files AND `.claude/handover.md`
2. Create a commit with this format:
```
wip: handover — {one-line summary of current state}
Next steps:
- {step 1}
- {step 2}
- {step 3}
Handover document: .claude/handover.md
```
3. Do NOT push. The user decides when to push.
Important: Use `git add` with specific file paths. Do NOT use `git add -A` or `git add .`.
## Phase 5: Output Summary
After committing, output:
1. The full contents of the updated `.claude/handover.md`
2. The commit hash
3. A reminder: **"Next session: open `.claude/handover.md` and continue the work"**
4. If tasks were captured: **"The next agent should recreate the Tasks section via `TaskCreate` on init."**
## Size Management
If any section exceeds its soft cap, prune the oldest entries that are no longer relevant. Use your judgment — you can verify against the codebase.
The goal is to keep the document scannable. A 200-line handover document defeats the purpose.
## First-Time Use
If `.claude/handover.md` does not exist yet:
- Skip Phase 2 (no existing entries to curate)
- Create the file fresh with all sections
- Generation Log starts at "Gen 1"

skills/takeover/SKILL.md (new file, 94 lines)

---
name: takeover
description: Recover from a forgotten handover by analyzing a closed session's transcript. Use when you start a new session and realize the previous one didn't run /handover.
user-invocable: true
---
# Takeover Skill
Recover context from a previous session that ended without running `/handover`. This skill reads the transcript from a closed session and creates the handover document retroactively.
## When to Use
- You started a new session and realized the last one didn't handover
- You want to recover learnings from an old session
- The previous agent crashed or was terminated unexpectedly
## Instructions
When this skill is invoked:
### Phase 1: Find Recent Sessions
1. List recent session transcripts:
```bash
ls -lt ~/.claude/projects/*/*.jsonl | head -20
```
2. Show the user a list of recent sessions with:
- Session ID (first 8 chars)
- Project path
- Last modified time
- File size (rough indicator of session length)
3. Ask the user which session to recover:
- "Which session would you like to recover? (enter number or session ID prefix)"
- Default to the most recent one that's NOT the current session
### Phase 2: Analyze the Transcript
1. Read the selected transcript file (the .jsonl file)
2. Extract all messages (both user and assistant) and parse them:
```javascript
// Each line of the .jsonl transcript is one JSON object
const j = JSON.parse(line);
if (j.message?.role === "user" || j.message?.role === "assistant") {
  const c = j.message.content; // string, or array of {type: "text", text} blocks
  const text = typeof c === "string" ? c : c.filter(b => b.type === "text").map(b => b.text).join(" ");
}
```
3. From the transcript, identify:
- What was being worked on
- Which files were modified (look for Edit/Write tool calls)
- Any test failures mentioned
- Any learnings or discoveries
- Any suppressed issues (tests skipped, code commented out)
- Any unresolved questions or decisions
- Active tasks (look for TaskCreate/TaskUpdate calls)
### Phase 3: Generate Handover Document
Follow the same structure as the regular handover skill, but note that this is a **retroactive takeover**:
1. Read existing `.claude/handover.md` if it exists
2. Create/update the handover document with:
- All the standard sections (see handover skill)
- Mark this as a takeover in the Generation Log:
`[date] Gen N (takeover): recovered from session XXXXXXXX — {summary}`
3. **Do NOT create a WIP commit** — the files may have changed since that session
### Phase 4: Output Summary
Show the user:
1. What was recovered from the old session
2. The updated `.claude/handover.md` content
3. Any warnings (e.g., "Files may have changed since that session")
4. Reminder: "Review the handover document and verify the state is accurate"
## Notes
- This skill reads transcript files directly, it doesn't have the live conversation context
- The recovered information may be incomplete compared to a live handover
- Always verify the recovered state against the current codebase
- If the project has changed significantly since the old session, some learnings may be stale
## Transcript File Location
Session transcripts are stored at:
```
~/.claude/projects/{project-path-hash}/{session-id}.jsonl
```
The project path hash is the project directory with slashes replaced by dashes, e.g.:
`-Users-marc-bude-deepkit-framework`

statusline/ctx_daemon.js (new executable file, 178 lines)

#!/usr/bin/env node
"use strict";
const http = require("http");
const fs = require("fs");
const { spawn } = require("child_process");
const path = require("path");
const PORT = 47523;
const CACHE_FILE = path.join(__dirname, "segment_cache.json");
const IDLE_TIMEOUT_MS = 10 * 60 * 1000; // 10 minutes
const MIN_INTERVAL_MS = 60 * 1000; // 60 seconds between analyses
const MIN_PCT_DELTA = 10; // 10% context increase before re-analyze
let cache = loadCache();
let lastRequestTime = Date.now();
let analysisInFlight = false;
function loadCache() {
try {
return JSON.parse(fs.readFileSync(CACHE_FILE, "utf8"));
} catch {
return {};
}
}
function saveCache() {
try {
fs.writeFileSync(CACHE_FILE, JSON.stringify(cache, null, 2));
} catch (e) {
console.error("Failed to save cache:", e.message);
}
}
function shouldAnalyze(sessionId, pct) {
if (analysisInFlight) return false;
const c = cache[sessionId];
if (!c) return true;
const pctDelta = pct - (c.lastPct || 0);
const timeDelta = Date.now() - (c.lastTime || 0);
return pctDelta >= MIN_PCT_DELTA && timeDelta >= MIN_INTERVAL_MS;
}
function runAnalysis(sessionId, transcriptPath, pct) {
if (analysisInFlight) return;
analysisInFlight = true;
// Read transcript and extract conversation
let transcript = "";
try {
const lines = fs.readFileSync(transcriptPath, "utf8").split(/\r?\n/).filter(Boolean);
const messages = [];
for (const line of lines) {
try {
const j = JSON.parse(line);
const role = j.message?.role;
if (role === "user" || role === "assistant") {
const content = j.message?.content;
let text = "";
if (typeof content === "string") {
text = content;
} else if (Array.isArray(content)) {
text = content
.filter(c => c.type === "text")
.map(c => c.text)
.join(" ");
}
// Truncate very long messages but keep enough context
if (text.length > 500) {
text = text.slice(0, 500) + "...";
}
if (text.trim()) {
const prefix = role === "user" ? "USER" : "ASST";
messages.push(`[${prefix}] ${text.trim()}`);
}
}
} catch {}
}
transcript = messages.join("\n\n");
} catch (e) {
console.error("Failed to read transcript:", e.message);
analysisInFlight = false;
return;
}
const prompt = `Analyze this conversation and identify 3-6 distinct topic segments in chronological order. Be specific about what was discussed - don't merge different topics together.
Output ONLY a single line in this exact format (no explanation):
topic1 XX%|topic2 XX%|topic3 XX%|...|free XX%
Rules:
- 3-6 segments (more segments = more detail, but don't over-split)
- Each topic name is 1-4 words, be specific (e.g. "statusline daemon" not just "code")
- Percentages must sum to 100 and roughly reflect how much of the conversation each topic took
- "free" is remaining unused context = approximately ${100 - Math.round(pct)}%
- Order chronologically (first topic discussed first)
Conversation:
${transcript}`;
// `claude --print <prompt>` runs a one-shot, non-interactive query
const child = spawn("claude", ["--print", prompt], {
stdio: ["ignore", "pipe", "pipe"],
detached: true,
});
let stdout = "";
child.stdout.on("data", (d) => stdout += d.toString());
child.stderr.on("data", (d) => console.error("claude stderr:", d.toString()));
child.on("close", (code) => {
analysisInFlight = false;
if (code !== 0) {
console.error("claude exited with code", code);
return;
}
// Parse response: "topic1 XX%|topic2 XX%|free XX%"
const line = stdout.trim().split("\n").pop() || "";
const segments = line.split("|").map(s => {
const match = s.trim().match(/^(.+?)\s+(\d+)%$/);
if (match) return { name: match[1].trim(), pct: parseInt(match[2], 10) };
return null;
}).filter(Boolean);
if (segments.length > 0) {
cache[sessionId] = {
lastPct: pct,
lastTime: Date.now(),
segments,
};
saveCache();
}
});
}
const server = http.createServer((req, res) => {
lastRequestTime = Date.now();
const url = new URL(req.url, `http://localhost:${PORT}`);
if (url.pathname !== "/segments") {
res.writeHead(404);
res.end();
return;
}
const sessionId = url.searchParams.get("session") || "unknown";
const pct = parseFloat(url.searchParams.get("pct") || "0");
const transcriptPath = url.searchParams.get("transcript") || "";
// Check if we should start a new analysis
if (shouldAnalyze(sessionId, pct) && transcriptPath) {
runAnalysis(sessionId, transcriptPath, pct);
}
// Return cached segments (or empty if none yet)
const c = cache[sessionId];
res.writeHead(200, { "Content-Type": "application/json" });
res.end(JSON.stringify({
segments: c?.segments || [],
pending: analysisInFlight,
}));
});
server.listen(PORT, "127.0.0.1", () => {
console.log(`ctx_daemon listening on port ${PORT}`);
});
// Idle timeout check
setInterval(() => {
if (Date.now() - lastRequestTime > IDLE_TIMEOUT_MS) {
console.log("Idle timeout reached, shutting down");
process.exit(0);
}
}, 60000);
// Handle graceful shutdown
process.on("SIGTERM", () => process.exit(0));
process.on("SIGINT", () => process.exit(0));

statusline/ctx_monitor.js (new executable file, 147 lines)

#!/usr/bin/env node
"use strict";
const fs = require("fs");
const http = require("http");
const { spawn } = require("child_process");
const path = require("path");
const input = readJSON(0);
const transcript = input.transcript_path;
const sessionId = input.session_id || "unknown";
const model = input.model?.display_name ?? "Claude";
const CONTEXT_WINDOW = input.context_window?.context_window_size || 200_000;
// Claude provides these directly - use them if available
const providedUsedPct = input.context_window?.used_percentage;
const providedTokens = input.context_window?.total_input_tokens;
const DAEMON_PORT = 47523;
const DAEMON_SCRIPT = path.join(__dirname, "ctx_daemon.js");
function readJSON(fd) {
try { return JSON.parse(fs.readFileSync(fd, "utf8")); } catch { return {}; }
}
function color(p) {
if (p >= 90) return "\x1b[31m"; // red
if (p >= 70) return "\x1b[33m"; // yellow
return "\x1b[32m"; // green
}
function segmentColor(i, isFree) {
if (isFree) return "\x1b[90m"; // gray for free
const colors = ["\x1b[36m", "\x1b[35m", "\x1b[34m", "\x1b[33m"]; // cyan, magenta, blue, yellow
return colors[i % colors.length];
}
function usedTotal(u) {
return (u?.input_tokens ?? 0) + (u?.output_tokens ?? 0) +
(u?.cache_read_input_tokens ?? 0) + (u?.cache_creation_input_tokens ?? 0);
}
function getUsage() {
if (!transcript) return null;
let lines;
try { lines = fs.readFileSync(transcript, "utf8").split(/\r?\n/); } catch { return null; }
for (let i = lines.length - 1; i >= 0; i--) {
const line = lines[i].trim();
if (!line) continue;
let j;
try { j = JSON.parse(line); } catch { continue; }
const u = j.message?.usage;
if (j.isSidechain || j.isApiErrorMessage || !u || usedTotal(u) === 0) continue;
if (j.message?.role !== "assistant") continue;
const m = String(j.message?.model ?? "").toLowerCase();
if (m === "<synthetic>" || m.includes("synthetic")) continue;
return u;
}
return null;
}
function startDaemon() {
const child = spawn("node", [DAEMON_SCRIPT], {
detached: true,
stdio: "ignore",
});
child.unref();
}
function fetchSegments(pct, callback) {
const url = `http://127.0.0.1:${DAEMON_PORT}/segments?session=${encodeURIComponent(sessionId)}&pct=${pct}&transcript=${encodeURIComponent(transcript || "")}`;
const req = http.get(url, { timeout: 80 }, (res) => {
let data = "";
res.on("data", (chunk) => data += chunk);
res.on("end", () => {
try {
callback(null, JSON.parse(data));
} catch {
callback(null, { segments: [] });
}
});
});
req.on("error", (e) => {
if (e.code === "ECONNREFUSED") {
startDaemon();
}
callback(null, { segments: [] });
});
req.on("timeout", () => {
req.destroy();
callback(null, { segments: [] });
});
}
function formatSegments(segments) {
if (!segments || segments.length === 0) return "";
const BAR_WIDTH = 20;
// Build the colored bar
let bar = "";
let legend = [];
segments.forEach((s, i) => {
const isFree = s.name.toLowerCase() === "free";
const c = segmentColor(i, isFree);
const width = Math.max(1, Math.round((s.pct / 100) * BAR_WIDTH));
bar += `${c}${"▒".repeat(width)}\x1b[0m`;
// Legend: colored dot + name (truncate only if very long)
const shortName = s.name.length > 15 ? s.name.slice(0, 14) + "…" : s.name;
legend.push(`${c}${shortName}\x1b[0m`);
});
// Pad bar to fixed width if needed
const barLen = segments.reduce((sum, s) => sum + Math.max(1, Math.round((s.pct / 100) * BAR_WIDTH)), 0);
if (barLen < BAR_WIDTH) {
bar += "\x1b[90m░\x1b[0m".repeat(BAR_WIDTH - barLen);
}
return ` ${bar} ${legend.join(" ")}`;
}
// Main
const usage = getUsage();
if (!usage) {
// No usage yet - show 0% and note it's a fresh session
console.log(`\x1b[95m${model}\x1b[0m \x1b[90m│\x1b[0m \x1b[32m0% used\x1b[0m \x1b[90m(new session)\x1b[0m`);
process.exit(0);
}
// Prefer Claude's provided percentage, fall back to calculating from usage
const pct = providedUsedPct ?? Math.round((usedTotal(usage) * 1000) / CONTEXT_WINDOW) / 10;
// Derive the displayed token count from pct * context window so it matches /context
const usedTokens = Math.round((pct / 100) * CONTEXT_WINDOW);
const k = (n) => n >= 1000 ? (n / 1000).toFixed(0) + "k" : n;
// Try to get segments from daemon (with tiny timeout)
fetchSegments(pct, (err, result) => {
const segmentStr = formatSegments(result?.segments);
console.log(`\x1b[95m${model}\x1b[0m \x1b[90m│\x1b[0m ${color(pct)}${pct.toFixed(1)}% used\x1b[0m \x1b[90m(${k(usedTokens)})\x1b[0m${segmentStr}`);
});