H.E.R.A.L.D
Human Executive Reasoning Agent Locally Deployed
Download for macOS
Always-on. Just speak. No wake word, no key press, no setup ritual.
Files, terminal, browser — Claude executes real tasks, not just text.
Extend with any MCP server. All accessible by voice.
TTS begins before the model finishes. Near-zero perceived latency.
Semantic search across past sessions. Ask about a decision from three weeks ago — Herald finds it.
Speak over Herald and it stops mid-sentence. Like a real conversation.
Charts and holographic panels rendered on command. Voice triggers real-time data display.
HTML panels for tables, dashboards, and comparisons. Transparent overlay, JARVIS-style.
Portfolio data in the demo is powered by firma — an open-source MCP server for personal finance.
Out of the box, Claude forgets everything and knows nothing about you. These two fix that.
Portfolio tracking, net worth, and cash flow — stored in local SQLite. Ask by voice: "How's my portfolio doing this month?"
npm i -g @evan-moon/firma && firma mcp install

Persistent second brain with semantic search. Herald saves and recalls context across sessions — no context window limits.

npm i -g @evan-moon/memex && memex mcp install

Open-source · MIT · Works standalone with Claude Code, Claude Desktop, or any MCP client
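Semantic recall of this kind is typically embedding-based: each saved snippet is stored alongside a vector, and at query time stored vectors are ranked by cosine similarity to the query. A minimal sketch of the technique (memex's actual implementation is not public; the `Memory` type, `recall` function, and embeddings here are illustrative):

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Illustrative shape: a saved snippet plus its embedding vector.
type Memory = { text: string; embedding: number[] };

// Return the stored snippets most similar to the query embedding.
function recall(query: number[], memories: Memory[], topK = 3): Memory[] {
  return [...memories]
    .sort((m1, m2) => cosine(query, m2.embedding) - cosine(query, m1.embedding))
    .slice(0, topK);
}
```

In practice the embeddings come from an embedding model and the store is persistent (e.g. SQLite), but the ranking step is the same idea.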
Herald introducing itself. Always listening — just speak.
How much does it cost to run?
Herald itself is free. You pay OpenAI for transcription/TTS (roughly $0.02–0.10 per active hour) and Anthropic via Claude Code for reasoning. Idle listening costs nothing — only active utterances are sent.
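As a back-of-envelope check (the rates below are placeholder assumptions for illustration, not quoted OpenAI prices):

```typescript
// Rough hourly voice cost: only active speech is billed, idle listening is free.
// All rates here are assumed values for illustration.
function hourlyVoiceCost(
  activeMinutes: number,        // minutes of actual speech per hour
  sttPerMinute: number,         // transcription rate, $/minute of audio
  ttsPerThousandChars: number,  // synthesis rate, $/1k characters
  charsSpokenPerMinute: number  // how much text Herald speaks back per active minute
): number {
  const stt = activeMinutes * sttPerMinute;
  const tts = (activeMinutes * charsSpokenPerMinute / 1000) * ttsPerThousandChars;
  return stt + tts;
}

// ~5 active minutes/hour at these assumed rates lands in the cents range.
const cost = hourlyVoiceCost(5, 0.006, 0.015, 600);
```

Claude reasoning cost comes on top via your existing Claude Code subscription or API usage.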
Is my voice recorded or stored?
No. Audio stays on your Mac until you actually speak. Each utterance is sent to OpenAI for transcription, then discarded. Nothing is logged or stored.
Does it work offline?
No. STT (OpenAI), reasoning (Claude), and TTS (OpenAI) all need internet. The VAD and orb run locally.
Do I need Claude Code?
Yes — Herald uses the Claude Code agent SDK for reasoning and tool execution. Your existing Claude subscription powers it.
What languages does it support?
Korean and English out of the box. Whisper handles transcription in either; Herald responds in the language you set.
Why is it always listening?
Voice activity detection (VAD) gates everything. The mic stream stays local — nothing leaves your machine until VAD detects actual speech, then the clip goes to STT.
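A minimal sketch of that gating logic (Herald's real pipeline is not public; the energy-threshold VAD and frame shape here are illustrative stand-ins):

```typescript
// One short chunk of local microphone audio.
type Frame = { samples: number[] };

// Hypothetical energy-based VAD: a frame counts as speech if its RMS
// level exceeds a threshold. Real VADs are more sophisticated, but the
// gating role is the same.
function isSpeech(frame: Frame, threshold = 0.1): boolean {
  const rms = Math.sqrt(
    frame.samples.reduce((acc, s) => acc + s * s, 0) / frame.samples.length
  );
  return rms > threshold;
}

// Gate the stream: buffer frames only while speech is detected. Each
// completed buffer is an "utterance" — the only audio that would ever
// leave the machine for STT.
function gate(frames: Frame[]): Frame[][] {
  const utterances: Frame[][] = [];
  let current: Frame[] = [];
  for (const frame of frames) {
    if (isSpeech(frame)) {
      current.push(frame);
    } else if (current.length > 0) {
      utterances.push(current); // silence after speech: utterance complete
      current = [];
    }
  }
  if (current.length > 0) utterances.push(current);
  return utterances;
}
```

Silence produces no output at all, which is why idle listening sends nothing and costs nothing.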
PRIVACY: Audio stays on your Mac until you speak. Each utterance goes to OpenAI for transcription, then is discarded. Nothing logged, nothing stored.
Free · Requires OpenAI API key & Claude Code