Add the MCP doctor subcommand with text and JSON output, config-only mode, and scope filtering so users can diagnose MCP issues from the CLI.
Co-Authored-By: Claude <noreply@anthropic.com>
Add the diagnostics core and report model for MCP health, scope, and config analysis. This creates the structured report used by both text and JSON doctor output.
Co-Authored-By: Claude <noreply@anthropic.com>
When using non-Anthropic providers (Ollama, Gemini, Codex), the
underlying call to queryHaiku for session title generation fails.
Previously, this caused the catch block to return null, leaving the
terminal tab permanently stuck on 'Claude Code'.
Now, when the API call fails, we gracefully derive a title locally from
the user's first message (first 7 words, sentence-cased), ensuring
users still see a meaningful session title in their terminal tab.
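The fallback can be sketched roughly as follows (function name and exact casing rules are illustrative, not the actual implementation):

```typescript
// Hypothetical sketch of the local title fallback: take the first 7 words
// of the user's first message and sentence-case the result.
function deriveLocalTitle(firstUserMessage: string): string {
  const words = firstUserMessage.trim().split(/\s+/).slice(0, 7);
  const title = words.join(" ");
  // Sentence-case: capitalize only the first character, keep the rest as typed.
  return title.charAt(0).toUpperCase() + title.slice(1);
}
```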
The Azure OpenAI API rejects the max_tokens parameter and requires
max_completion_tokens instead. This change ensures the conversion
is robust by validating that max_tokens is a positive number before
using it, preventing edge cases like null or "null" string values
from being incorrectly sent.
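A minimal sketch of the guarded conversion (names are illustrative, not the actual code):

```typescript
// Forward max_tokens as max_completion_tokens only when it is a real
// positive number; this filters out null, undefined, NaN, and accidental
// string values such as "null".
function toAzureTokenParams(params: Record<string, unknown>): Record<string, unknown> {
  const { max_tokens, ...rest } = params;
  if (typeof max_tokens === "number" && Number.isFinite(max_tokens) && max_tokens > 0) {
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return rest;
}
```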
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Apply the existing ACCENT colour (rgb 240 148 100) to the version
string so it stands out against the dim label, matching the warm
orange used throughout the startup screen for stars and status text.
Requested in #95.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
StartupScreen.ts was reading the version via globalThis['MACRO_DISPLAY_VERSION']
which is never populated — the Bun bundler inlines it as MACRO.DISPLAY_VERSION
(dot notation), not as a globalThis key.
Result: startup screen always showed the hardcoded fallback 'v0.1.4' regardless
of the installed version.
Fix: use MACRO.DISPLAY_VERSION ?? MACRO.VERSION directly, consistent with
cli.tsx, main.tsx, and logoV2Utils.ts.
Fixes #95
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Two bugs in convertTools() caused Gemini's OpenAI-compatible endpoint
to reject tool schemas with 400 "schema requires unspecified property":
1. The Agent tool patch unconditionally pushed 'message' into required[]
even though 'message' is not a property of the Agent schema. Gemini
strictly validates that every key in required[] exists in properties.
2. normalizeSchemaForOpenAI() added all property keys to required[] for
OpenAI strict mode, but this conflicts with Gemini's stricter schema
validation which rejects required keys absent from properties.
Fix:
- Agent tool patch now only adds a key to required[] if it exists in
schema.properties (fixes the 'message' 400 error on Gemini)
- normalizeSchemaForOpenAI() accepts a strict flag: true for OpenAI
(promotes all property keys into required[]), false for Gemini
(filters required[] to only keys present in properties)
- convertTools() detects CLAUDE_CODE_USE_GEMINI and passes strict=false
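The required[] handling above can be sketched like this (type and helper names are illustrative, not the real signatures):

```typescript
type JsonSchema = {
  properties?: Record<string, unknown>;
  required?: string[];
  [key: string]: unknown;
};

// strict=true (OpenAI strict mode): promote every property key into required[].
// strict=false (Gemini): filter required[] down to keys that actually exist
// in properties, since Gemini rejects required keys absent from properties.
function normalizeRequired(schema: JsonSchema, strict: boolean): JsonSchema {
  const propertyKeys = Object.keys(schema.properties ?? {});
  const required = strict
    ? propertyKeys
    : (schema.required ?? []).filter((key) => propertyKeys.includes(key));
  return { ...schema, required };
}
```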
Fixes #82
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Update the stale test expectation to match current behavior where
normalizeSchemaForOpenAI() promotes all properties into required[]
and marks the schema as strict: true.
Same fix as PR #72 — included here so PR #80 passes CI independently.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Azure OpenAI and newer OpenAI models (o1, o3, o4...) reject `max_tokens`
with a 400 error and require `max_completion_tokens` instead.
Maps `params.max_tokens` → `max_completion_tokens` in the request body,
which is the current standard across OpenAI-compatible providers.
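A minimal sketch of the rename (illustrative names; the actual conversion lives in the request-building code):

```typescript
// Rewrite max_tokens to max_completion_tokens in an outgoing request body,
// leaving bodies without max_tokens untouched.
function renameMaxTokens(body: { max_tokens?: number; [k: string]: unknown }) {
  if (body.max_tokens !== undefined) {
    const { max_tokens, ...rest } = body;
    return { ...rest, max_completion_tokens: max_tokens };
  }
  return body;
}
```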
Adds a new startup screen with filled-block text logo and sunset
gradient, printed to stdout before the Ink UI loads. Removes the
old OPEN box logo from the chat UI since the new screen replaces it.
Changes:
- src/components/StartupScreen.ts (NEW) — gradient OPEN CLAUDE logo
with provider info box (Provider, Model, Endpoint). Auto-detects
active provider from env vars (OpenAI, Gemini, DeepSeek, Ollama,
Groq, Mistral, Azure, LM Studio, Anthropic). Skipped in CI and
non-TTY environments.
- src/entrypoints/cli.tsx — calls printStartupScreen() at startup
before Ink renders
- src/components/Messages.tsx — removes <LogoV2 /> from LogoHeader
so the old OPEN box logo no longer appears in the chat UI
Addresses #55.
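The provider auto-detection could look roughly like this (a sketch only: the real module may check different env vars or a different precedence order):

```typescript
// Pick a display name for the active provider based on which env vars are set.
function detectProvider(env: Record<string, string | undefined>): string {
  if (env.OLLAMA_HOST) return "Ollama";
  if (env.GEMINI_API_KEY) return "Gemini";
  if (env.AZURE_OPENAI_ENDPOINT) return "Azure";
  if (env.OPENAI_API_KEY) return "OpenAI";
  return "Anthropic"; // default when no third-party provider is configured
}
```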
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit addresses strict schema validation limitations when running subagents under OpenAI backend shims.
- Drops empty `properties` objects from payloads (e.g. schemas generated from Record<string, string>) that break OpenAI's Structured Outputs validation.
- Handles edge cases for automatically created initial teams whose subagents bypass the standard creation routines.
- Stops sending unsupported experimental backend parameters such as temperature and top_p to GPT-5 derivatives.
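The empty-properties cleanup can be sketched as follows (illustrative; the real code may recurse into nested schemas):

```typescript
// An object schema with zero declared properties (e.g. one generated from
// Record<string, string>) fails Structured Outputs validation, so omit the
// empty map entirely rather than sending "properties": {}.
function dropEmptyProperties(schema: Record<string, unknown>): Record<string, unknown> {
  const props = schema.properties as Record<string, unknown> | undefined;
  if (props && Object.keys(props).length === 0) {
    const { properties, ...rest } = schema;
    return rest;
  }
  return schema;
}
```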
Load nested SKILL.md files from .claude/skills and namespace them with colons so category-based skill layouts work in Claude Code clients.
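The colon namespacing might look roughly like this (a hypothetical sketch of the path-to-name mapping, not the actual loader):

```typescript
// Map a skill file path relative to .claude/skills to a colon-namespaced
// skill name, e.g. "writing/blog-posts/SKILL.md" -> "writing:blog-posts".
function skillNameFromPath(relativePath: string): string {
  return relativePath
    .split("/")
    .filter((part) => part !== "SKILL.md")
    .join(":");
}
```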
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The /login platform_setup screen only listed Amazon Bedrock,
Microsoft Foundry, and Vertex AI — OpenAI-compatible providers
and Gemini were completely absent, leaving users with no guidance
on how to use OpenClaude's main feature.
Changes:
- Selector label: "Amazon Bedrock, Microsoft Foundry, or Vertex AI"
→ "OpenAI, Gemini, Bedrock, Ollama, and more"
- Description updated to mention OpenAI-compatible providers and Gemini
- Added OpenAI and Gemini env var instructions to the docs list
Fixes #43 (login screen confusion for Gemini users).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>