I've led engineering at healthcare-IT companies, held CTO roles at Alopex and BettrLife, and now build open-source AI tooling — SpecCritic, PlanCritic, Prism — for teams trying to make LLMs actually reliable.
Every non-deterministic step in an agent workflow deserves a deterministic gate — and context selection is the gate nobody's building.
Why the command line — not the chat window — is where AI-native development is actually happening.
A practical case for bounded context: what to hide from the model, what to show it, and why the default of "paste the whole repo" is the wrong one.
Evaluates specifications as formal contracts. Finds contradictions, hidden assumptions, and missing prerequisites before a line of code is written.
Reviews implementation plans and returns structured critique: contradictions, ambiguities, missing prerequisites, and concrete patches.
Local-first AI code review CLI. Reviews diffs, commits, and PRs across Anthropic, OpenAI, Gemini, and Ollama — with SARIF output and CI-friendly exit codes.
Intent enforcement for agentic systems. Verifies that the code an agent produced matches the spec and plan it was given — not just that it compiles.
Currently available for consulting — systems architecture, developer tooling, LLM infrastructure, and fractional CTO engagements.