Remove internal-only bundled skills and mock helpers (#376)

* Remove internal-only bundled skills and mock rate-limit behavior

This takes the next planned Phase C-lite slice by deleting bundled skills
that only ever registered for internal users and replacing the internal
mock rate-limit helper with a stable no-op external stub. The external
build keeps the same behavior while removing a concentrated block of
USER_TYPE-gated dead code.
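The gate-to-stub swap described above can be sketched as follows. This is a hypothetical, simplified illustration (names like `setMockHeaderInternal` are invented for this sketch, not the real module's API):

```typescript
// Hypothetical sketch of the pattern this PR removes (names illustrative).
// Before: each internal-only helper checked USER_TYPE at its entry point,
// so external builds silently skipped the mock path at runtime.
function setMockHeaderInternal(key: string, value: string): boolean {
  if (process.env.USER_TYPE !== 'ant') {
    return false // external builds fall through without side effects
  }
  // ... internal-only mock bookkeeping would go here ...
  return true
}

// After: the external build ships stable no-op stubs with the same
// signatures, so existing imports and call sites keep compiling unchanged.
export function setMockHeader(_key: string, _value: string): void {}

export function getMockHeaders(): Record<string, string> | null {
  return null // mocking is never active in the external build
}
```

The observable behavior is identical for external users (the gated version already did nothing for them); the stub version just makes that explicit and deletes the gated body.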

Constraint: Limit this PR to isolated internal-only helpers and avoid touching bridge, OAuth, or rebrand behavior
Rejected: Broad USER_TYPE cleanup across mixed runtime surfaces | too risky for the next medium-sized PR
Confidence: high
Scope-risk: moderate
Reversibility: clean
Directive: The next cleanup pass should continue with similarly isolated USER_TYPE helpers before touching main.tsx or protocol-heavy code
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy)

* Align internal-only helper removal with remaining user guidance

This follow-up fixes the mock billing stub to be a true no-op and, in the
same PR, removes stale user-facing references to /verify and /skillify.
It also leaves a clearer paper trail for review: the deleted verify skill
was explicitly ant-gated before removal, and the remaining mock helper
callers still resolve to safe no-op returns in the external build.
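The "true no-op" distinction matters because a stub can look inert while still mutating shared state. A minimal sketch of the fix, using hypothetical simplified names (the real module routes through `setMockBillingAccessOverride` in `utils/billing`):

```typescript
// Hypothetical sketch of the follow-up fix (names illustrative).
let billingOverride: boolean | null = null

function setMockBillingAccessOverride(hasAccess: boolean | null): void {
  billingOverride = hasAccess
}

// Before the follow-up: the "stub" still forwarded to the shared
// override setter, so it had a side effect on external builds.
export function setMockBillingAccessLeaky(hasAccess: boolean | null): void {
  setMockBillingAccessOverride(hasAccess)
}

// After the follow-up: a true no-op; billingOverride is never touched.
export function setMockBillingAccess(_hasAccess: boolean | null): void {}
```

Callers that later read the override see the unset default, which is what "safe no-op returns in the external build" refers to.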

Constraint: Keep the PR focused on consistency fixes and reviewer-requested evidence, not new cleanup scope
Rejected: Leave stale guidance for a later PR | would make this branch internally inconsistent after skill removal
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When deleting gated features, always sweep user guidance and coordinator prompts in the same pass
Tested: bun run build
Tested: bun run smoke
Tested: bun run verify:privacy
Tested: bun run test:provider
Tested: bun run test:provider-recommendation
Not-tested: Full repo typecheck (upstream baseline remains noisy; changed-file scan still shows only pre-existing tipRegistry errors outside edited lines)

* Clarify generic workflow wording after skill removal

This removes the last generic verification-skill wording that could still
be read as pointing at a deleted bundled command. The guidance now talks
about project workflows rather than a specific bundled verify skill.

Constraint: Keep the follow-up limited to reviewer-facing wording cleanup on the same PR
Rejected: Leave generic wording as-is | still too easy to misread after the explicit /verify references were removed
Confidence: high
Scope-risk: narrow
Reversibility: clean
Directive: When removing bundled commands, scrub both explicit and generic references in the same branch
Tested: bun run build
Tested: bun run smoke
Not-tested: Additional checks unchanged by wording-only follow-up

---------

Co-authored-by: anandh8x <test@example.com>
Author: Anandan
Date: 2026-04-05 10:14:21 +05:30
Committed by: GitHub
Parent: 5ff34283c4
Commit: daa3aa27a0
9 changed files with 35 additions and 1334 deletions

@@ -70,7 +70,7 @@ If the user chose personal CLAUDE.local.md or both: ask about them, not the code
- Only if Phase 2 found multiple git worktrees: ask whether their worktrees are nested inside the main repo (e.g., \`.claude/worktrees/<name>/\`) or siblings/external (e.g., \`../myrepo-feature/\`). If nested, the upward file walk finds the main repo's CLAUDE.local.md automatically — no special handling needed. If sibling/external, the personal content should live in a home-directory file (e.g., \`~/.claude/<project-name>-instructions.md\`) and each worktree gets a one-line CLAUDE.local.md stub that imports it: \`@~/.claude/<project-name>-instructions.md\`. Never put this import in the project CLAUDE.md — that would check a personal reference into the team-shared file.
- Any communication preferences? (e.g., "be terse", "always explain tradeoffs", "don't summarize at the end")
**Synthesize a proposal from Phase 2 findings** — e.g., format-on-edit if a formatter exists, a \`/verify\` skill if tests exist, a CLAUDE.md note for anything from the gap-fill answers that's a guideline rather than a workflow. For each, pick the artifact type that fits, **constrained by the Phase 1 skills+hooks choice**:
**Synthesize a proposal from Phase 2 findings** — e.g., format-on-edit if a formatter exists, a project verification workflow if tests exist, a CLAUDE.md note for anything from the gap-fill answers that's a guideline rather than a workflow. For each, pick the artifact type that fits, **constrained by the Phase 1 skills+hooks choice**:
- **Hook** (stricter) — deterministic shell command on a tool event; Claude can't skip it. Fits mechanical, fast, per-edit steps: formatting, linting, running a quick test on the changed file.
- **Skill** (on-demand) — you or Claude invoke \`/skill-name\` when you want it. Fits workflows that don't belong on every edit: deep verification, session reports, deploys.
@@ -85,7 +85,7 @@ If the user chose personal CLAUDE.local.md or both: ask about them, not the code
- **Keep previews compact — the preview box truncates with no scrolling.** One line per item, no blank lines between items, no header. Example preview content:
• **Format-on-edit hook** (automatic) — \`ruff format <file>\` via PostToolUse
• **/verify skill** (on-demand) — \`make lint && make typecheck && make test\`
• **Verification workflow** (on-demand) — \`make lint && make typecheck && make test\`
• **CLAUDE.md note** (guideline) — "run lint/typecheck/test before marking done"
- Option labels stay short ("Looks good", "Drop the hook", "Drop the skill") — the tool auto-adds an "Other" free-text option, so don't add your own catch-all.
@@ -157,7 +157,7 @@ Skills add capabilities Claude can use on demand without bloating every session.
**First, consume \`skill\` entries from the Phase 3 preference queue.** Each queued skill preference becomes a SKILL.md tailored to what the user described. For each:
- Name it from the preference (e.g., "verify-deep", "session-report", "deploy-sandbox")
- Write the body using the user's own words from the interview plus whatever Phase 2 found (test commands, report format, deploy target). If the preference maps to an existing bundled skill (e.g., \`/verify\`), write a project skill that adds the user's specific constraints on top — tell the user the bundled one still exists and theirs is additive.
- Write the body using the user's own words from the interview plus whatever Phase 2 found (test commands, report format, deploy target). If the preference maps to an existing project workflow, write a project skill that captures the user's specific constraints on top.
- Ask a quick follow-up if the preference is underspecified (e.g., "which test command should verify-deep run?")
**Then suggest additional skills** beyond the queue when you find:

@@ -111,7 +111,7 @@ export function getCoordinatorUserContext(
export function getCoordinatorSystemPrompt(): string {
const workerCapabilities = isEnvTruthy(process.env.CLAUDE_CODE_SIMPLE)
? 'Workers have access to Bash, Read, and Edit tools, plus MCP tools from configured MCP servers.'
: 'Workers have access to standard tools, MCP tools from configured MCP servers, and project skills via the Skill tool. Delegate skill invocations (e.g. /commit, /verify) to workers.'
: 'Workers have access to standard tools, MCP tools from configured MCP servers, and project skills via the Skill tool. Delegate skill invocations (e.g. /commit or project workflow skills) to workers.'
return `You are Claude Code, an AI assistant that orchestrates software engineering tasks across multiple workers.

@@ -1,14 +1,12 @@
// Mock rate limits for testing [internal-only]
// This allows testing various rate limit scenarios without hitting actual limits
//
// ⚠️ WARNING: This is for internal testing/demo purposes only!
// The mock headers may not exactly match the API specification or real-world behavior.
// Always validate against actual API responses before relying on this for production features.
// The external build keeps this module as a stable no-op surface so imports
// remain valid without exposing internal-only rate-limit simulation behavior.
import type { SubscriptionType } from '../services/oauth/types.js'
import { setMockBillingAccessOverride } from '../utils/billing.js'
import type { OverageDisabledReason } from './claudeAiLimits.js'
type SubscriptionType = string
type MockHeaders = {
'anthropic-ratelimit-unified-status'?:
| 'allowed'
@@ -29,7 +27,6 @@ type MockHeaders = {
'anthropic-ratelimit-unified-fallback'?: 'available'
'anthropic-ratelimit-unified-fallback-percentage'?: string
'retry-after'?: string
// Early warning utilization headers
'anthropic-ratelimit-unified-5h-utilization'?: string
'anthropic-ratelimit-unified-5h-reset'?: string
'anthropic-ratelimit-unified-5h-surpassed-threshold'?: string
@@ -79,679 +76,53 @@ export type MockScenario =
| 'extra-usage-required'
| 'clear'
let mockHeaders: MockHeaders = {}
let mockEnabled = false
let mockHeaderless429Message: string | null = null
let mockSubscriptionType: SubscriptionType | null = null
let mockFastModeRateLimitDurationMs: number | null = null
let mockFastModeRateLimitExpiresAt: number | null = null
// Default subscription type for mock testing
const DEFAULT_MOCK_SUBSCRIPTION: SubscriptionType = 'max'
// Track individual exceeded limits with their reset times
type ExceededLimit = {
type: 'five_hour' | 'seven_day' | 'seven_day_opus' | 'seven_day_sonnet'
resetsAt: number // Unix timestamp
}
let exceededLimits: ExceededLimit[] = []
// New approach: Toggle individual headers
export function setMockHeader(
key: MockHeaderKey,
value: string | undefined,
): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
_key: MockHeaderKey,
_value: string | undefined,
): void {}
mockEnabled = true
// Special case for retry-after which doesn't have the prefix
const fullKey = (
key === 'retry-after' ? 'retry-after' : `anthropic-ratelimit-unified-${key}`
) as keyof MockHeaders
if (value === undefined || value === 'clear') {
delete mockHeaders[fullKey]
if (key === 'claim') {
exceededLimits = []
}
// Update retry-after if status changed
if (key === 'status' || key === 'overage-status') {
updateRetryAfter()
}
return
} else {
// Handle special cases for reset times
if (key === 'reset' || key === 'overage-reset') {
// If user provides a number, treat it as hours from now
const hours = Number(value)
if (!isNaN(hours)) {
value = String(Math.floor(Date.now() / 1000) + hours * 3600)
}
}
// Handle claims - add to exceeded limits
if (key === 'claim') {
const validClaims = [
'five_hour',
'seven_day',
'seven_day_opus',
'seven_day_sonnet',
]
if (validClaims.includes(value)) {
// Determine reset time based on claim type
let resetsAt: number
if (value === 'five_hour') {
resetsAt = Math.floor(Date.now() / 1000) + 5 * 3600
} else if (
value === 'seven_day' ||
value === 'seven_day_opus' ||
value === 'seven_day_sonnet'
) {
resetsAt = Math.floor(Date.now() / 1000) + 7 * 24 * 3600
} else {
resetsAt = Math.floor(Date.now() / 1000) + 3600
}
// Add to exceeded limits (remove if already exists)
exceededLimits = exceededLimits.filter(l => l.type !== value)
exceededLimits.push({ type: value as ExceededLimit['type'], resetsAt })
// Set the representative claim (furthest reset time)
updateRepresentativeClaim()
return
}
}
// Widen to a string-valued record so dynamic key assignment is allowed.
// MockHeaders values are string-literal unions; assigning a raw user-input
// string requires widening, but this is mock/test code so it's acceptable.
const headers: Partial<Record<keyof MockHeaders, string>> = mockHeaders
headers[fullKey] = value
// Update retry-after if status changed
if (key === 'status' || key === 'overage-status') {
updateRetryAfter()
}
}
// If all headers are cleared, disable mocking
if (Object.keys(mockHeaders).length === 0) {
mockEnabled = false
}
}
// Helper to update retry-after based on current state
function updateRetryAfter(): void {
const status = mockHeaders['anthropic-ratelimit-unified-status']
const overageStatus =
mockHeaders['anthropic-ratelimit-unified-overage-status']
const reset = mockHeaders['anthropic-ratelimit-unified-reset']
if (
status === 'rejected' &&
(!overageStatus || overageStatus === 'rejected') &&
reset
) {
// Calculate seconds until reset
const resetTimestamp = Number(reset)
const secondsUntilReset = Math.max(
0,
resetTimestamp - Math.floor(Date.now() / 1000),
)
mockHeaders['retry-after'] = String(secondsUntilReset)
} else {
delete mockHeaders['retry-after']
}
}
// Update the representative claim based on exceeded limits
function updateRepresentativeClaim(): void {
if (exceededLimits.length === 0) {
delete mockHeaders['anthropic-ratelimit-unified-representative-claim']
delete mockHeaders['anthropic-ratelimit-unified-reset']
delete mockHeaders['retry-after']
return
}
// Find the limit with the furthest reset time
const furthest = exceededLimits.reduce((prev, curr) =>
curr.resetsAt > prev.resetsAt ? curr : prev,
)
// Set the representative claim (appears for both warning and rejected)
mockHeaders['anthropic-ratelimit-unified-representative-claim'] =
furthest.type
mockHeaders['anthropic-ratelimit-unified-reset'] = String(furthest.resetsAt)
// Add retry-after if rejected and no overage available
if (mockHeaders['anthropic-ratelimit-unified-status'] === 'rejected') {
const overageStatus =
mockHeaders['anthropic-ratelimit-unified-overage-status']
if (!overageStatus || overageStatus === 'rejected') {
// Calculate seconds until reset
const secondsUntilReset = Math.max(
0,
furthest.resetsAt - Math.floor(Date.now() / 1000),
)
mockHeaders['retry-after'] = String(secondsUntilReset)
} else {
// Overage is available, no retry-after
delete mockHeaders['retry-after']
}
} else {
delete mockHeaders['retry-after']
}
}
// Add function to add exceeded limit with custom reset time
export function addExceededLimit(
type: 'five_hour' | 'seven_day' | 'seven_day_opus' | 'seven_day_sonnet',
hoursFromNow: number,
): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
_type: 'five_hour' | 'seven_day' | 'seven_day_opus' | 'seven_day_sonnet',
_hoursFromNow: number,
): void {}
mockEnabled = true
const resetsAt = Math.floor(Date.now() / 1000) + hoursFromNow * 3600
// Remove existing limit of same type
exceededLimits = exceededLimits.filter(l => l.type !== type)
exceededLimits.push({ type, resetsAt })
// Update status to rejected if we have exceeded limits
if (exceededLimits.length > 0) {
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
}
updateRepresentativeClaim()
}
// Set mock early warning utilization for time-relative thresholds
// claimAbbrev: '5h' or '7d'
// utilization: 0-1 (e.g., 0.92 for 92% used)
// hoursFromNow: hours until reset (default: 4 for 5h, 120 for 7d)
export function setMockEarlyWarning(
claimAbbrev: '5h' | '7d' | 'overage',
utilization: number,
hoursFromNow?: number,
): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
_claimAbbrev: '5h' | '7d' | 'overage',
_utilization: number,
_hoursFromNow?: number,
): void {}
mockEnabled = true
export function clearMockEarlyWarning(): void {}
// Clear ALL early warning headers first (5h is checked before 7d, so we need
// to clear 5h headers when testing 7d to avoid 5h taking priority)
clearMockEarlyWarning()
// Default hours based on claim type (early in window to trigger warning)
const defaultHours = claimAbbrev === '5h' ? 4 : 5 * 24
const hours = hoursFromNow ?? defaultHours
const resetsAt = Math.floor(Date.now() / 1000) + hours * 3600
mockHeaders[`anthropic-ratelimit-unified-${claimAbbrev}-utilization`] =
String(utilization)
mockHeaders[`anthropic-ratelimit-unified-${claimAbbrev}-reset`] =
String(resetsAt)
// Set the surpassed-threshold header to trigger early warning
mockHeaders[
`anthropic-ratelimit-unified-${claimAbbrev}-surpassed-threshold`
] = String(utilization)
// Set status to allowed so early warning logic can upgrade it
if (!mockHeaders['anthropic-ratelimit-unified-status']) {
mockHeaders['anthropic-ratelimit-unified-status'] = 'allowed'
}
}
// Clear mock early warning headers
export function clearMockEarlyWarning(): void {
delete mockHeaders['anthropic-ratelimit-unified-5h-utilization']
delete mockHeaders['anthropic-ratelimit-unified-5h-reset']
delete mockHeaders['anthropic-ratelimit-unified-5h-surpassed-threshold']
delete mockHeaders['anthropic-ratelimit-unified-7d-utilization']
delete mockHeaders['anthropic-ratelimit-unified-7d-reset']
delete mockHeaders['anthropic-ratelimit-unified-7d-surpassed-threshold']
}
export function setMockRateLimitScenario(scenario: MockScenario): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
if (scenario === 'clear') {
mockHeaders = {}
mockHeaderless429Message = null
mockEnabled = false
return
}
mockEnabled = true
// Set reset times for demos
const fiveHoursFromNow = Math.floor(Date.now() / 1000) + 5 * 3600
const sevenDaysFromNow = Math.floor(Date.now() / 1000) + 7 * 24 * 3600
// Clear existing headers
mockHeaders = {}
mockHeaderless429Message = null
// Only clear exceeded limits for scenarios that explicitly set them
// Overage scenarios should preserve existing exceeded limits
const preserveExceededLimits = [
'overage-active',
'overage-warning',
'overage-exhausted',
].includes(scenario)
if (!preserveExceededLimits) {
exceededLimits = []
}
switch (scenario) {
case 'normal':
mockHeaders = {
'anthropic-ratelimit-unified-status': 'allowed',
'anthropic-ratelimit-unified-reset': String(fiveHoursFromNow),
}
break
case 'session-limit-reached':
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
break
case 'approaching-weekly-limit':
mockHeaders = {
'anthropic-ratelimit-unified-status': 'allowed_warning',
'anthropic-ratelimit-unified-reset': String(sevenDaysFromNow),
'anthropic-ratelimit-unified-representative-claim': 'seven_day',
}
break
case 'weekly-limit-reached':
exceededLimits = [{ type: 'seven_day', resetsAt: sevenDaysFromNow }]
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
break
case 'overage-active': {
// If no limits have been exceeded yet, default to 5-hour
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'allowed'
// Set overage reset time (monthly)
const endOfMonthActive = new Date()
endOfMonthActive.setMonth(endOfMonthActive.getMonth() + 1, 1)
endOfMonthActive.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthActive.getTime() / 1000),
)
break
}
case 'overage-warning': {
// If no limits have been exceeded yet, default to 5-hour
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] =
'allowed_warning'
// Overage typically resets monthly, but for demo let's say end of month
const endOfMonth = new Date()
endOfMonth.setMonth(endOfMonth.getMonth() + 1, 1)
endOfMonth.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonth.getTime() / 1000),
)
break
}
case 'overage-exhausted': {
// If no limits have been exceeded yet, default to 5-hour
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
// Both subscription and overage are exhausted
// Subscription resets based on the exceeded limit, overage resets monthly
const endOfMonthExhausted = new Date()
endOfMonthExhausted.setMonth(endOfMonthExhausted.getMonth() + 1, 1)
endOfMonthExhausted.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthExhausted.getTime() / 1000),
)
break
}
case 'out-of-credits': {
// Out of credits - subscription limit hit, overage rejected due to insufficient credits
// (wallet is empty)
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-disabled-reason'] =
'out_of_credits'
const endOfMonth = new Date()
endOfMonth.setMonth(endOfMonth.getMonth() + 1, 1)
endOfMonth.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonth.getTime() / 1000),
)
break
}
case 'org-zero-credit-limit': {
// Org service has zero credit limit - admin set org-level spend cap to $0
// Non-admin Team/Enterprise users should not see "Request extra usage" option
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-disabled-reason'] =
'org_service_zero_credit_limit'
const endOfMonthZero = new Date()
endOfMonthZero.setMonth(endOfMonthZero.getMonth() + 1, 1)
endOfMonthZero.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthZero.getTime() / 1000),
)
break
}
case 'org-spend-cap-hit': {
// Org spend cap hit for the month - org overages temporarily disabled
// Non-admin Team/Enterprise users should not see "Request extra usage" option
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-disabled-reason'] =
'org_level_disabled_until'
const endOfMonthHit = new Date()
endOfMonthHit.setMonth(endOfMonthHit.getMonth() + 1, 1)
endOfMonthHit.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthHit.getTime() / 1000),
)
break
}
case 'member-zero-credit-limit': {
// Member has zero credit limit - admin set this user's individual limit to $0
// Non-admin Team/Enterprise users SHOULD see "Request extra usage" (admin can allocate more)
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-disabled-reason'] =
'member_zero_credit_limit'
const endOfMonthMember = new Date()
endOfMonthMember.setMonth(endOfMonthMember.getMonth() + 1, 1)
endOfMonthMember.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthMember.getTime() / 1000),
)
break
}
case 'seat-tier-zero-credit-limit': {
// Seat tier has zero credit limit - admin set this seat tier's limit to $0
// Non-admin Team/Enterprise users SHOULD see "Request extra usage" (admin can allocate more)
if (exceededLimits.length === 0) {
exceededLimits = [{ type: 'five_hour', resetsAt: fiveHoursFromNow }]
}
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-status'] = 'rejected'
mockHeaders['anthropic-ratelimit-unified-overage-disabled-reason'] =
'seat_tier_zero_credit_limit'
const endOfMonthSeatTier = new Date()
endOfMonthSeatTier.setMonth(endOfMonthSeatTier.getMonth() + 1, 1)
endOfMonthSeatTier.setHours(0, 0, 0, 0)
mockHeaders['anthropic-ratelimit-unified-overage-reset'] = String(
Math.floor(endOfMonthSeatTier.getTime() / 1000),
)
break
}
case 'opus-limit': {
exceededLimits = [{ type: 'seven_day_opus', resetsAt: sevenDaysFromNow }]
updateRepresentativeClaim()
// Always send 429 rejected status - the error handler will decide whether
// to show an error or return NO_RESPONSE_REQUESTED based on fallback eligibility
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
break
}
case 'opus-warning': {
mockHeaders = {
'anthropic-ratelimit-unified-status': 'allowed_warning',
'anthropic-ratelimit-unified-reset': String(sevenDaysFromNow),
'anthropic-ratelimit-unified-representative-claim': 'seven_day_opus',
}
break
}
case 'sonnet-limit': {
exceededLimits = [
{ type: 'seven_day_sonnet', resetsAt: sevenDaysFromNow },
]
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
break
}
case 'sonnet-warning': {
mockHeaders = {
'anthropic-ratelimit-unified-status': 'allowed_warning',
'anthropic-ratelimit-unified-reset': String(sevenDaysFromNow),
'anthropic-ratelimit-unified-representative-claim': 'seven_day_sonnet',
}
break
}
case 'fast-mode-limit': {
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
// Duration in ms (> 20s threshold to trigger cooldown)
mockFastModeRateLimitDurationMs = 10 * 60 * 1000
break
}
case 'fast-mode-short-limit': {
updateRepresentativeClaim()
mockHeaders['anthropic-ratelimit-unified-status'] = 'rejected'
// Duration in ms (< 20s threshold, won't trigger cooldown)
mockFastModeRateLimitDurationMs = 10 * 1000
break
}
case 'extra-usage-required': {
// Headerless 429 — exercises the entitlement-rejection path in errors.ts
mockHeaderless429Message =
'Extra usage is required for long context requests.'
break
}
default:
break
}
}
export function setMockRateLimitScenario(_scenario: MockScenario): void {}
export function getMockHeaderless429Message(): string | null {
if (process.env.USER_TYPE !== 'ant') {
return null
}
// Env var path for -p / SDK testing where slash commands aren't available
if (process.env.CLAUDE_MOCK_HEADERLESS_429) {
return process.env.CLAUDE_MOCK_HEADERLESS_429
}
if (!mockEnabled) {
return null
}
return mockHeaderless429Message
return null
}
export function getMockHeaders(): MockHeaders | null {
if (
!mockEnabled ||
process.env.USER_TYPE !== 'ant' ||
Object.keys(mockHeaders).length === 0
) {
return null
}
return mockHeaders
return null
}
export function getMockStatus(): string {
if (
!mockEnabled ||
(Object.keys(mockHeaders).length === 0 && !mockSubscriptionType)
) {
return 'No mock headers active (using real limits)'
}
const lines: string[] = []
lines.push('Active mock headers:')
// Show subscription type - either explicitly set or default
const effectiveSubscription =
mockSubscriptionType || DEFAULT_MOCK_SUBSCRIPTION
if (mockSubscriptionType) {
lines.push(` Subscription Type: ${mockSubscriptionType} (explicitly set)`)
} else {
lines.push(` Subscription Type: ${effectiveSubscription} (default)`)
}
Object.entries(mockHeaders).forEach(([key, value]) => {
if (value !== undefined) {
// Format the header name nicely
const formattedKey = key
.replace('anthropic-ratelimit-unified-', '')
.replace(/-/g, ' ')
.replace(/\b\w/g, c => c.toUpperCase())
// Format timestamps as human-readable
if (key.includes('reset') && value) {
const timestamp = Number(value)
const date = new Date(timestamp * 1000)
lines.push(` ${formattedKey}: ${value} (${date.toLocaleString()})`)
} else {
lines.push(` ${formattedKey}: ${value}`)
}
}
})
// Show exceeded limits if any
if (exceededLimits.length > 0) {
lines.push('\nExceeded limits (contributing to representative claim):')
exceededLimits.forEach(limit => {
const date = new Date(limit.resetsAt * 1000)
lines.push(` ${limit.type}: resets at ${date.toLocaleString()}`)
})
}
return lines.join('\n')
return 'No mock headers active (using real limits)'
}
export function clearMockHeaders(): void {
mockHeaders = {}
exceededLimits = []
mockSubscriptionType = null
mockFastModeRateLimitDurationMs = null
mockFastModeRateLimitExpiresAt = null
mockHeaderless429Message = null
setMockBillingAccessOverride(null)
mockEnabled = false
}
export function applyMockHeaders(
headers: globalThis.Headers,
): globalThis.Headers {
const mock = getMockHeaders()
if (!mock) {
return headers
}
// Create a new Headers object with original headers
// eslint-disable-next-line eslint-plugin-n/no-unsupported-features/node-builtins
const newHeaders = new globalThis.Headers(headers)
// Apply mock headers (overwriting originals)
Object.entries(mock).forEach(([key, value]) => {
if (value !== undefined) {
newHeaders.set(key, value)
}
})
return newHeaders
return headers
}
// Check if we should process rate limits even without subscription
// This is for Ant employees testing with mocks
export function shouldProcessMockLimits(): boolean {
if (process.env.USER_TYPE !== 'ant') {
return false
}
return mockEnabled || Boolean(process.env.CLAUDE_MOCK_HEADERLESS_429)
return false
}
export function getCurrentMockScenario(): MockScenario | null {
if (!mockEnabled) {
return null
}
// Reverse lookup the scenario from current headers
if (!mockHeaders) return null
const status = mockHeaders['anthropic-ratelimit-unified-status']
const overage = mockHeaders['anthropic-ratelimit-unified-overage-status']
const claim = mockHeaders['anthropic-ratelimit-unified-representative-claim']
if (claim === 'seven_day_opus') {
return status === 'rejected' ? 'opus-limit' : 'opus-warning'
}
if (claim === 'seven_day_sonnet') {
return status === 'rejected' ? 'sonnet-limit' : 'sonnet-warning'
}
if (overage === 'rejected') return 'overage-exhausted'
if (overage === 'allowed_warning') return 'overage-warning'
if (overage === 'allowed') return 'overage-active'
if (status === 'rejected') {
if (claim === 'five_hour') return 'session-limit-reached'
if (claim === 'seven_day') return 'weekly-limit-reached'
}
if (status === 'allowed_warning') {
if (claim === 'seven_day') return 'approaching-weekly-limit'
}
if (status === 'allowed') return 'normal'
return null
}
@@ -802,81 +173,28 @@ export function getScenarioDescription(scenario: MockScenario): string {
}
}
// Mock subscription type management
export function setMockSubscriptionType(
subscriptionType: SubscriptionType | null,
): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
mockEnabled = true
mockSubscriptionType = subscriptionType
}
_subscriptionType: SubscriptionType | null,
): void {}
export function getMockSubscriptionType(): SubscriptionType | null {
if (!mockEnabled || process.env.USER_TYPE !== 'ant') {
return null
}
// Return the explicitly set subscription type, or default to 'max'
return mockSubscriptionType || DEFAULT_MOCK_SUBSCRIPTION
return null
}
// Export a function that checks if we should use mock subscription
export function shouldUseMockSubscription(): boolean {
return (
mockEnabled &&
mockSubscriptionType !== null &&
process.env.USER_TYPE === 'ant'
)
return false
}
// Mock billing access (admin vs non-admin)
export function setMockBillingAccess(hasAccess: boolean | null): void {
if (process.env.USER_TYPE !== 'ant') {
return
}
mockEnabled = true
setMockBillingAccessOverride(hasAccess)
export function setMockBillingAccess(_hasAccess: boolean | null): void {
// External build: internal mock billing access overrides are disabled.
}
// Mock fast mode rate limit handling
export function isMockFastModeRateLimitScenario(): boolean {
return mockFastModeRateLimitDurationMs !== null
return false
}
 export function checkMockFastModeRateLimit(
-  isFastModeActive?: boolean,
+  _isFastModeActive?: boolean,
 ): MockHeaders | null {
-  if (mockFastModeRateLimitDurationMs === null) {
-    return null
-  }
-  // Only throw when fast mode is active
-  if (!isFastModeActive) {
-    return null
-  }
-  // Check if the rate limit has expired
-  if (
-    mockFastModeRateLimitExpiresAt !== null &&
-    Date.now() >= mockFastModeRateLimitExpiresAt
-  ) {
-    clearMockHeaders()
-    return null
-  }
-  // Set expiry on first error (not when scenario is configured)
-  if (mockFastModeRateLimitExpiresAt === null) {
-    mockFastModeRateLimitExpiresAt =
-      Date.now() + mockFastModeRateLimitDurationMs
-  }
-  // Compute dynamic retry-after based on remaining time
-  const remainingMs = mockFastModeRateLimitExpiresAt - Date.now()
-  const headersToSend = { ...mockHeaders }
-  headersToSend['retry-after'] = String(
-    Math.max(1, Math.ceil(remainingMs / 1000)),
-  )
-  return headersToSend
+  return null
 }
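The removed mock derived its `retry-after` header from the remaining window, and the rounding rule is the one non-obvious piece of arithmetic in the deleted block: remaining milliseconds round up to whole seconds, with a floor of 1 so a nearly expired limit never advertises `retry-after: 0`. A minimal standalone sketch of just that derivation:

```typescript
// Round remaining milliseconds UP to whole seconds, never below 1,
// mirroring the Math.max(1, Math.ceil(remainingMs / 1000)) in the
// deleted mock's header computation.
function retryAfterSeconds(remainingMs: number): number {
  return Math.max(1, Math.ceil(remainingMs / 1000))
}

console.log(retryAfterSeconds(2500)) // 3: 2.5s rounds up
console.log(retryAfterSeconds(10)) // 1: the floor keeps it positive
```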


@@ -645,7 +645,7 @@ const internalOnlyTips: Tip[] =
   {
     id: 'skillify',
     content: async () =>
-      '[internal] Use /skillify at the end of a workflow to turn it into a reusable skill',
+      '[internal] Turn repeatable workflows into reusable project skills when they keep recurring',
     cooldownSessions: 15,
     isRelevant: async () => true,
   },


@@ -4,12 +4,8 @@ import { registerBatchSkill } from './batch.js'
 import { registerClaudeInChromeSkill } from './claudeInChrome.js'
 import { registerDebugSkill } from './debug.js'
 import { registerKeybindingsSkill } from './keybindings.js'
-import { registerLoremIpsumSkill } from './loremIpsum.js'
-import { registerRememberSkill } from './remember.js'
 import { registerSimplifySkill } from './simplify.js'
-import { registerSkillifySkill } from './skillify.js'
 import { registerUpdateConfigSkill } from './updateConfig.js'
-import { registerVerifySkill } from './verify.js'
/**
* Initialize all bundled skills.
@@ -23,11 +19,7 @@ import { registerVerifySkill } from './verify.js'
 export function initBundledSkills(): void {
   registerUpdateConfigSkill()
   registerKeybindingsSkill()
-  registerVerifySkill()
   registerDebugSkill()
-  registerLoremIpsumSkill()
-  registerSkillifySkill()
-  registerRememberSkill()
   registerSimplifySkill()
   registerBatchSkill()
   if (feature('KAIROS') || feature('KAIROS_DREAM')) {


@@ -1,282 +0,0 @@
import { registerBundledSkill } from '../bundledSkills.js'
// Verified 1-token words (tested via API token counting)
// All common English words confirmed to tokenize as single tokens
const ONE_TOKEN_WORDS = [
// Articles & pronouns
'the',
'a',
'an',
'I',
'you',
'he',
'she',
'it',
'we',
'they',
'me',
'him',
'her',
'us',
'them',
'my',
'your',
'his',
'its',
'our',
'this',
'that',
'what',
'who',
// Common verbs
'is',
'are',
'was',
'were',
'be',
'been',
'have',
'has',
'had',
'do',
'does',
'did',
'will',
'would',
'can',
'could',
'may',
'might',
'must',
'shall',
'should',
'make',
'made',
'get',
'got',
'go',
'went',
'come',
'came',
'see',
'saw',
'know',
'take',
'think',
'look',
'want',
'use',
'find',
'give',
'tell',
'work',
'call',
'try',
'ask',
'need',
'feel',
'seem',
'leave',
'put',
// Common nouns & adjectives
'time',
'year',
'day',
'way',
'man',
'thing',
'life',
'hand',
'part',
'place',
'case',
'point',
'fact',
'good',
'new',
'first',
'last',
'long',
'great',
'little',
'own',
'other',
'old',
'right',
'big',
'high',
'small',
'large',
'next',
'early',
'young',
'few',
'public',
'bad',
'same',
'able',
// Prepositions & conjunctions
'in',
'on',
'at',
'to',
'for',
'of',
'with',
'from',
'by',
'about',
'like',
'through',
'over',
'before',
'between',
'under',
'since',
'without',
'and',
'or',
'but',
'if',
'than',
'because',
'as',
'until',
'while',
'so',
'though',
'both',
'each',
'when',
'where',
'why',
'how',
// Common adverbs
'not',
'now',
'just',
'more',
'also',
'here',
'there',
'then',
'only',
'very',
'well',
'back',
'still',
'even',
'much',
'too',
'such',
'never',
'again',
'most',
'once',
'off',
'away',
'down',
'out',
'up',
// Tech/common words
'test',
'code',
'data',
'file',
'line',
'text',
'word',
'number',
'system',
'program',
'set',
'run',
'value',
'name',
'type',
'state',
'end',
'start',
]
function generateLoremIpsum(targetTokens: number): string {
  let tokens = 0
  let result = ''
  while (tokens < targetTokens) {
    // Sentence: 10-20 words
    const sentenceLength = 10 + Math.floor(Math.random() * 11)
    let wordsInSentence = 0
    for (let i = 0; i < sentenceLength && tokens < targetTokens; i++) {
      const word =
        ONE_TOKEN_WORDS[Math.floor(Math.random() * ONE_TOKEN_WORDS.length)]
      result += word
      tokens++
      wordsInSentence++
      if (i === sentenceLength - 1 || tokens >= targetTokens) {
        result += '. '
      } else {
        result += ' '
      }
    }
    // Paragraph break every 5-8 sentences (roughly 20% chance per sentence)
    if (wordsInSentence > 0 && Math.random() < 0.2 && tokens < targetTokens) {
      result += '\n\n'
    }
  }
  return result.trim()
}
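The deleted generator's key invariant is that every loop iteration emits exactly one word and counts it as one token, so the output always contains exactly `targetTokens` whitespace-separated words. A minimal standalone sketch of that invariant, using a tiny hypothetical stand-in word list rather than the full `ONE_TOKEN_WORDS` table:

```typescript
// Tiny stand-in for the verified one-token word table above.
const WORDS = ['the', 'code', 'run', 'test', 'value']

// Same loop shape as the deleted generator: one word per token, sentences
// of 10-20 words, truncated when the token budget is exhausted.
function lorem(targetTokens: number): string {
  let tokens = 0
  let result = ''
  while (tokens < targetTokens) {
    const sentenceLength = 10 + Math.floor(Math.random() * 11)
    for (let i = 0; i < sentenceLength && tokens < targetTokens; i++) {
      result += WORDS[Math.floor(Math.random() * WORDS.length)]
      tokens++
      result += i === sentenceLength - 1 || tokens >= targetTokens ? '. ' : ' '
    }
  }
  return result.trim()
}

console.log(lorem(50).split(/\s+/).length) // 50, regardless of randomness
```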
export function registerLoremIpsumSkill(): void {
  if (process.env.USER_TYPE !== 'ant') {
    return
  }
  registerBundledSkill({
    name: 'lorem-ipsum',
    description:
      'Generate filler text for long context testing. Specify token count as argument (e.g., /lorem-ipsum 50000). Outputs approximately the requested number of tokens. Ant-only.',
    argumentHint: '[token_count]',
    userInvocable: true,
    async getPromptForCommand(args) {
      const parsed = parseInt(args)
      if (args && (isNaN(parsed) || parsed <= 0)) {
        return [
          {
            type: 'text',
            text: 'Invalid token count. Please provide a positive number (e.g., /lorem-ipsum 10000).',
          },
        ]
      }
      const targetTokens = parsed || 10000
      // Cap at 500k tokens for safety
      const cappedTokens = Math.min(targetTokens, 500_000)
      if (cappedTokens < targetTokens) {
        return [
          {
            type: 'text',
            text: `Requested ${targetTokens} tokens, but capped at 500,000 for safety.\n\n${generateLoremIpsum(cappedTokens)}`,
          },
        ]
      }
      const loremText = generateLoremIpsum(cappedTokens)
      // Just dump the lorem ipsum text into the conversation
      return [
        {
          type: 'text',
          text: loremText,
        },
      ]
    },
  })
}
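The argument handling in `getPromptForCommand` reduces to a small pure function: reject non-positive or non-numeric input, default an empty argument to 10000, and cap at 500,000. This sketch (with a hypothetical `resolveTokenCount` helper that is not part of the codebase) captures those three rules:

```typescript
// null means "invalid input"; otherwise the resolved, capped token count.
function resolveTokenCount(args: string): number | null {
  const parsed = parseInt(args)
  if (args && (isNaN(parsed) || parsed <= 0)) return null // reject bad input
  return Math.min(parsed || 10000, 500_000) // default, then cap
}

console.log(resolveTokenCount('')) // 10000: empty argument defaults
console.log(resolveTokenCount('900000')) // 500000: capped for safety
console.log(resolveTokenCount('abc')) // null: rejected
```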


@@ -1,82 +0,0 @@
import { isAutoMemoryEnabled } from '../../memdir/paths.js'
import { registerBundledSkill } from '../bundledSkills.js'
export function registerRememberSkill(): void {
  if (process.env.USER_TYPE !== 'ant') {
    return
  }
  const SKILL_PROMPT = `# Memory Review
## Goal
Review the user's memory landscape and produce a clear report of proposed changes, grouped by action type. Do NOT apply changes — present proposals for user approval.
## Steps
### 1. Gather all memory layers
Read CLAUDE.md and CLAUDE.local.md from the project root (if they exist). Your auto-memory content is already in your system prompt — review it there. Note which team memory sections exist, if any.
**Success criteria**: You have the contents of all memory layers and can compare them.
### 2. Classify each auto-memory entry
For each substantive entry in auto-memory, determine the best destination:
| Destination | What belongs there | Examples |
|---|---|---|
| **CLAUDE.md** | Project conventions and instructions for Claude that all contributors should follow | "use bun not npm", "API routes use kebab-case", "test command is bun test", "prefer functional style" |
| **CLAUDE.local.md** | Personal instructions for Claude specific to this user, not applicable to other contributors | "I prefer concise responses", "always explain trade-offs", "don't auto-commit", "run tests before committing" |
| **Team memory** | Org-wide knowledge that applies across repositories (only if team memory is configured) | "deploy PRs go through #deploy-queue", "staging is at staging.internal", "platform team owns infra" |
| **Stay in auto-memory** | Working notes, temporary context, or entries that don't clearly fit elsewhere | Session-specific observations, uncertain patterns |
**Important distinctions:**
- CLAUDE.md and CLAUDE.local.md contain instructions for Claude, not user preferences for external tools (editor theme, IDE keybindings, etc. don't belong in either)
- Workflow practices (PR conventions, merge strategies, branch naming) are ambiguous — ask the user whether they're personal or team-wide
- When unsure, ask rather than guess
**Success criteria**: Each entry has a proposed destination or is flagged as ambiguous.
### 3. Identify cleanup opportunities
Scan across all layers for:
- **Duplicates**: Auto-memory entries already captured in CLAUDE.md or CLAUDE.local.md → propose removing from auto-memory
- **Outdated**: CLAUDE.md or CLAUDE.local.md entries contradicted by newer auto-memory entries → propose updating the older layer
- **Conflicts**: Contradictions between any two layers → propose resolution, noting which is more recent
**Success criteria**: All cross-layer issues identified.
### 4. Present the report
Output a structured report grouped by action type:
1. **Promotions** — entries to move, with destination and rationale
2. **Cleanup** — duplicates, outdated entries, conflicts to resolve
3. **Ambiguous** — entries where you need the user's input on destination
4. **No action needed** — brief note on entries that should stay put
If auto-memory is empty, say so and offer to review CLAUDE.md for cleanup.
**Success criteria**: User can review and approve/reject each proposal individually.
## Rules
- Present ALL proposals before making any changes
- Do NOT modify files without explicit user approval
- Do NOT create new files unless the target doesn't exist yet
- Ask about ambiguous entries — don't guess
`
  registerBundledSkill({
    name: 'remember',
    description:
      'Review auto-memory entries and propose promotions to CLAUDE.md, CLAUDE.local.md, or shared memory. Also detects outdated, conflicting, and duplicate entries across memory layers.',
    whenToUse:
      'Use when the user wants to review, organize, or promote their auto-memory entries. Also useful for cleaning up outdated or conflicting entries across CLAUDE.md, CLAUDE.local.md, and auto-memory.',
    userInvocable: true,
    isEnabled: () => isAutoMemoryEnabled(),
    async getPromptForCommand(args) {
      let prompt = SKILL_PROMPT
      if (args) {
        prompt += `\n## Additional context from user\n\n${args}`
      }
      return [{ type: 'text', text: prompt }]
    },
  })
}


@@ -1,197 +0,0 @@
import { getSessionMemoryContent } from '../../services/SessionMemory/sessionMemoryUtils.js'
import type { Message } from '../../types/message.js'
import { getMessagesAfterCompactBoundary } from '../../utils/messages.js'
import { registerBundledSkill } from '../bundledSkills.js'
function extractUserMessages(messages: Message[]): string[] {
  return messages
    .filter((m): m is Extract<typeof m, { type: 'user' }> => m.type === 'user')
    .map(m => {
      const content = m.message.content
      if (typeof content === 'string') return content
      return content
        .filter(
          (b): b is Extract<typeof b, { type: 'text' }> => b.type === 'text',
        )
        .map(b => b.text)
        .join('\n')
    })
    .filter(text => text.trim().length > 0)
}
const SKILLIFY_PROMPT = `# Skillify {{userDescriptionBlock}}
You are capturing this session's repeatable process as a reusable skill.
## Your Session Context
Here is the session memory summary:
<session_memory>
{{sessionMemory}}
</session_memory>
Here are the user's messages during this session. Pay attention to how they steered the process, to help capture their detailed preferences in the skill:
<user_messages>
{{userMessages}}
</user_messages>
## Your Task
### Step 1: Analyze the Session
Before asking any questions, analyze the session to identify:
- What repeatable process was performed
- What the inputs/parameters were
- The distinct steps (in order)
- The success artifacts/criteria (e.g. not just "writing code," but "an open PR with CI fully passing") for each step
- Where the user corrected or steered you
- What tools and permissions were needed
- What agents were used
- What the goals and success artifacts were
### Step 2: Interview the User
You will use the AskUserQuestion to understand what the user wants to automate. Important notes:
- Use AskUserQuestion for ALL questions! Never ask questions via plain text.
- For each round, iterate as much as needed until the user is happy.
- The user always has a freeform "Other" option to type edits or feedback -- do NOT add your own "Needs tweaking" or "I'll provide edits" option. Just offer the substantive choices.
**Round 1: High level confirmation**
- Suggest a name and description for the skill based on your analysis. Ask the user to confirm or rename.
- Suggest high-level goal(s) and specific success criteria for the skill.
**Round 2: More details**
- Present the high-level steps you identified as a numbered list. Tell the user you will dig into the detail in the next round.
- If you think the skill will require arguments, suggest arguments based on what you observed. Make sure you understand what someone would need to provide.
- If it's not clear, ask if this skill should run inline (in the current conversation) or forked (as a sub-agent with its own context). Forked is better for self-contained tasks that don't need mid-process user input; inline is better when the user wants to steer mid-process.
- Ask where the skill should be saved. Suggest a default based on context (repo-specific workflows → repo, cross-repo personal workflows → user). Options:
- **This repo** (\`.claude/skills/<name>/SKILL.md\`) — for workflows specific to this project
- **Personal** (\`~/.claude/skills/<name>/SKILL.md\`) — follows you across all repos
**Round 3: Breaking down each step**
For each major step, if it's not glaringly obvious, ask:
- What does this step produce that later steps need? (data, artifacts, IDs)
- What proves that this step succeeded, and that we can move on?
- Should the user be asked to confirm before proceeding? (especially for irreversible actions like merging, sending messages, or destructive operations)
- Are any steps independent and could run in parallel? (e.g., posting to Slack and monitoring CI at the same time)
- How should the skill be executed? (e.g. always use a Task agent to conduct code review, or invoke an agent team for a set of concurrent steps)
- What are the hard constraints or hard preferences? Things that must or must not happen?
You may do multiple rounds of AskUserQuestion here, one round per step, especially if there are more than 3 steps or many clarification questions. Iterate as much as needed.
IMPORTANT: Pay special attention to places where the user corrected you during the session, to help inform your design.
**Round 4: Final questions**
- Confirm when this skill should be invoked, and suggest/confirm trigger phrases too. (e.g. For a cherrypick workflow you could say: Use when the user wants to cherry-pick a PR to a release branch. Examples: 'cherry-pick to release', 'CP this PR', 'hotfix.')
- You can also ask for any other gotchas or things to watch out for, if it's still unclear.
Stop interviewing once you have enough information. IMPORTANT: Don't over-ask for simple processes!
### Step 3: Write the SKILL.md
Create the skill directory and file at the location the user chose in Round 2.
Use this format:
\`\`\`markdown
---
name: {{skill-name}}
description: {{one-line description}}
allowed-tools:
{{list of tool permission patterns observed during session}}
when_to_use: {{detailed description of when Claude should automatically invoke this skill, including trigger phrases and example user messages}}
argument-hint: "{{hint showing argument placeholders}}"
arguments:
{{list of argument names}}
context: {{inline or fork -- omit for inline}}
---
# {{Skill Title}}
Description of skill
## Inputs
- \`$arg_name\`: Description of this input
## Goal
Clearly stated goal for this workflow. Best if you have clearly defined artifacts or criteria for completion.
## Steps
### 1. Step Name
What to do in this step. Be specific and actionable. Include commands when appropriate.
**Success criteria**: ALWAYS include this! This shows that the step is done and we can move on. Can be a list.
IMPORTANT: see the next section below for the per-step annotations you can optionally include for each step.
...
\`\`\`
**Per-step annotations**:
- **Success criteria** is REQUIRED on every step. This helps the model understand what the user expects from their workflow, and when it should have the confidence to move on.
- **Execution**: \`Direct\` (default), \`Task agent\` (straightforward subagents), \`Teammate\` (agent with true parallelism and inter-agent communication), or \`[human]\` (user does it). Only needs specifying if not Direct.
- **Artifacts**: Data this step produces that later steps need (e.g., PR number, commit SHA). Only include if later steps depend on it.
- **Human checkpoint**: When to pause and ask the user before proceeding. Include for irreversible actions (merging, sending messages), error judgment (merge conflicts), or output review.
- **Rules**: Hard rules for the workflow. User corrections during the reference session can be especially useful here.
**Step structure tips:**
- Steps that can run concurrently use sub-numbers: 3a, 3b
- Steps requiring the user to act get \`[human]\` in the title
- Keep simple skills simple -- a 2-step skill doesn't need annotations on every step
**Frontmatter rules:**
- \`allowed-tools\`: Minimum permissions needed (use patterns like \`Bash(gh:*)\` not \`Bash\`)
- \`context\`: Only set \`context: fork\` for self-contained skills that don't need mid-process user input.
- \`when_to_use\` is CRITICAL -- tells the model when to auto-invoke. Start with "Use when..." and include trigger phrases. Example: "Use when the user wants to cherry-pick a PR to a release branch. Examples: 'cherry-pick to release', 'CP this PR', 'hotfix'."
- \`arguments\` and \`argument-hint\`: Only include if the skill takes parameters. Use \`$name\` in the body for substitution.
### Step 4: Confirm and Save
Before writing the file, output the complete SKILL.md content as a yaml code block in your response so the user can review it with proper syntax highlighting. Then ask for confirmation using AskUserQuestion with a simple question like "Does this SKILL.md look good to save?" — do NOT use the body field, keep the question concise.
After writing, tell the user:
- Where the skill was saved
- How to invoke it: \`/{{skill-name}} [arguments]\`
- That they can edit the SKILL.md directly to refine it
`
export function registerSkillifySkill(): void {
  if (process.env.USER_TYPE !== 'ant') {
    return
  }
  registerBundledSkill({
    name: 'skillify',
    description:
      "Capture this session's repeatable process into a skill. Call at end of the process you want to capture with an optional description.",
    allowedTools: [
      'Read',
      'Write',
      'Edit',
      'Glob',
      'Grep',
      'AskUserQuestion',
      'Bash(mkdir:*)',
    ],
    userInvocable: true,
    disableModelInvocation: true,
    argumentHint: '[description of the process you want to capture]',
    async getPromptForCommand(args, context) {
      const sessionMemory =
        (await getSessionMemoryContent()) ?? 'No session memory available.'
      const userMessages = extractUserMessages(
        getMessagesAfterCompactBoundary(context.messages),
      )
      const userDescriptionBlock = args
        ? `The user described this process as: "${args}"`
        : ''
      const prompt = SKILLIFY_PROMPT.replace('{{sessionMemory}}', sessionMemory)
        .replace('{{userMessages}}', userMessages.join('\n\n---\n\n'))
        .replace('{{userDescriptionBlock}}', userDescriptionBlock)
      return [{ type: 'text', text: prompt }]
    },
  })
}


@@ -1,48 +0,0 @@
import { parseFrontmatter } from '../../utils/frontmatterParser.js'
import { registerBundledSkill } from '../bundledSkills.js'
function loadVerifyContent(): { skillMd: string; skillFiles: Record<string, string> } {
  try {
    /* eslint-disable @typescript-eslint/no-require-imports */
    const { SKILL_FILES, SKILL_MD } = require('./verifyContent.js') as {
      SKILL_FILES: Record<string, string>
      SKILL_MD: string
    }
    /* eslint-enable @typescript-eslint/no-require-imports */
    return { skillMd: SKILL_MD, skillFiles: SKILL_FILES }
  } catch {
    return {
      skillMd:
        '# Verify\n\nVerify a code change does what it should by running the app.',
      skillFiles: {},
    }
  }
}
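The deleted verify skill loaded its optional bundled content through a try/catch that falls back to a safe default when the generated module is absent. The pattern generalizes; `loadOrFallback` below is a hypothetical helper, not part of the codebase:

```typescript
// Try an optional loader; on any throw, return the safe default instead.
function loadOrFallback<T>(load: () => T, fallback: T): T {
  try {
    return load()
  } catch {
    return fallback
  }
}

// Simulate the missing-module case loadVerifyContent guarded against.
const skillMd = loadOrFallback(
  (): string => {
    throw new Error('bundled content missing')
  },
  '# Verify\n\nFallback body.',
)
console.log(skillMd)
```

This keeps the external build working even when an internal-only generated file is stripped from the bundle.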
export function registerVerifySkill(): void {
  if (process.env.USER_TYPE !== 'ant') {
    return
  }
  const { skillMd, skillFiles } = loadVerifyContent()
  const { frontmatter, content: skillBody } = parseFrontmatter(skillMd)
  const description =
    typeof frontmatter.description === 'string'
      ? frontmatter.description
      : 'Verify a code change does what it should by running the app.'
  registerBundledSkill({
    name: 'verify',
    description,
    userInvocable: true,
    files: skillFiles,
    async getPromptForCommand(args) {
      const parts: string[] = [skillBody.trimStart()]
      if (args) {
        parts.push(`## User Request\n\n${args}`)
      }
      return [{ type: 'text', text: parts.join('\n\n') }]
    },
  })
}