Simon Willison is expanding the developer tooling around his LLM and Datasette ecosystems, with a focus on observability and operational control. New releases of the llm-echo debugging plugin (0.3 and 0.4) add better test doubles for tool-calling and raw output inspection, a model that simulates "API key required" flows, and token usage reporting via input_tokens/output_tokens fields for measuring consumption and diagnosing prompt behavior. In parallel, datasette-llm 0.1a4 introduces per-purpose API key configuration, enabling teams to isolate billing, rate limits, and permissions across different model tasks.
Simon Willison released datasette-llm 0.1a5 on April 1, 2026 — a new alpha version of his Datasette plugin that provides an LLM integration layer other plugins can depend on. The announcement appears as a brief release post on his weblog, positioning datasette-llm as a foundational plugin to enable LLM-powered features within the Datasette ecosystem. Key players include Simon Willison and the Datasette project; the release matters because it standardizes how plugins can leverage large language models inside a data exploration and publishing tool, potentially accelerating developer workflows and third-party integrations. The post is a concise release note without technical details or changelog items.
Simon Willison released llm-echo 0.3, a debugging plugin for large language models that provides an echo model useful for testing tool calls and raw responses. The update adds mechanisms for validating tool-call behavior and inspecting unmodified model outputs, plus a new echo-needs-key model designed to test logic around model API keys. This release helps developers reproduce and debug LLM integration issues by echoing inputs, simulating key-requirement flows, and verifying how models interact with external tools. It matters because reliable, inspectable test doubles simplify development and troubleshooting of LLM-based systems, improving robustness for apps that orchestrate models and external toolchains.
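The value of an echo-style test double is easiest to see in miniature. The sketch below is not llm-echo's actual API; it is a hypothetical plain-Python stand-in (`EchoModel` and its `needs_key` flag are illustrative names) showing the two ideas the release describes: echoing the request back so tests can assert on it, and gating on a missing API key:

```python
import json

class EchoModel:
    """A minimal stand-in for an echo-style debug model: instead of
    generating text, it returns a JSON description of exactly what it
    was asked, so tests can assert on the request that reached it."""

    def __init__(self, needs_key=False):
        # needs_key mimics an "echo-needs-key" style model that refuses
        # to respond until an API key has been supplied.
        self.needs_key = needs_key
        self.key = None

    def prompt(self, text, tools=None):
        if self.needs_key and self.key is None:
            raise RuntimeError("API key required")
        return json.dumps({"prompt": text, "tools": tools or []})

# A test can verify tool-call plumbing without touching a real LLM:
model = EchoModel()
echoed = json.loads(model.prompt("lookup weather", tools=["get_weather"]))
assert echoed["tools"] == ["get_weather"]

# And key-requirement logic can be exercised deterministically:
gated = EchoModel(needs_key=True)
try:
    gated.prompt("hi")
except RuntimeError as err:
    print(err)  # API key required
gated.key = "sk-test"  # placeholder key; any value unblocks the double
```

The point of the pattern is that every assertion runs offline and deterministically, which is what makes such doubles useful in CI for apps that orchestrate models and tools.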
Simon Willison released llm-echo 0.4, a debug plugin for large language models that provides an echo model and now populates response objects with input_tokens and output_tokens fields. The update, posted March 31, 2026, makes token-level accounting of prompts available to developers diagnosing model behavior and prompt engineering issues. By exposing both input and output token counts, llm-echo 0.4 helps teams measure token usage, inspect generation boundaries, and reproduce problematic outputs more easily. This is useful for debugging, cost analysis, and building reliable LLM integrations in applications. The plugin strengthens developer tooling around observability for LLM deployments.
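To see why exposing token counts on a response object matters, here is a hedged sketch (not the plugin's real implementation: `EchoResponse`, `run_echo`, and the per-1K rates are all illustrative) of how a test harness can turn input_tokens/output_tokens fields into usage and cost assertions:

```python
from dataclasses import dataclass

@dataclass
class EchoResponse:
    text: str
    input_tokens: int   # tokens consumed by the prompt
    output_tokens: int  # tokens produced in the response

def run_echo(prompt: str) -> EchoResponse:
    # Crude whitespace tokenization stands in for a real tokenizer;
    # an echo model simply returns the prompt it received, so the
    # input and output counts match.
    n = len(prompt.split())
    return EchoResponse(text=prompt, input_tokens=n, output_tokens=n)

def cost_estimate(r: EchoResponse, in_rate=0.003, out_rate=0.015):
    # Hypothetical per-1K-token prices for a back-of-envelope check;
    # real rates vary by provider and model.
    return (r.input_tokens * in_rate + r.output_tokens * out_rate) / 1000

resp = run_echo("summarize the release notes for llm-echo 0.4")
print(resp.input_tokens, resp.output_tokens, cost_estimate(resp))
```

Because the counts are deterministic, tests can pin exact expected usage for a given prompt, catching regressions where a prompt template silently grows.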
Simon Willison announced datasette-llm 0.1a4, a new pre-release of his Datasette plugin that provides LLM integration for other plugins to depend on. The key new capability is configuring different API keys per model purpose (e.g., routing enrichment calls to gpt-5.4-mini with a dedicated key). Willison also released llm-echo 0.3 as an API-key testing utility to support tests for this feature. This matters for developers building extensible Datasette-based tooling that needs to isolate billing, rate limits, or permissions across model uses, improving operational control and safer integration of multiple LLMs within the Datasette ecosystem. The posts are dated 31 March 2026.
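The operational idea behind per-purpose keys can be sketched in a few lines. This is not datasette-llm's actual configuration schema; the `PURPOSES` mapping, purpose names, and `resolve` helper are hypothetical, illustrating how routing each task to its own model/key pair isolates billing and permissions:

```python
# Hypothetical configuration: each "purpose" (enrichment, summaries,
# ...) gets its own model id and API key, so billing, rate limits,
# and permissions stay isolated per task instead of sharing one
# account-wide key.
PURPOSES = {
    "enrichment": {"model": "gpt-5.4-mini", "key": "sk-enrich-placeholder"},
    "summaries": {"model": "gpt-5.4-mini", "key": "sk-summary-placeholder"},
}

def resolve(purpose: str):
    """Look up the model/key pair for a purpose, failing loudly on an
    unknown purpose rather than silently falling back to a shared key."""
    try:
        cfg = PURPOSES[purpose]
    except KeyError:
        raise ValueError(f"no model configured for purpose {purpose!r}")
    return cfg["model"], cfg["key"]

model, key = resolve("enrichment")
print(model)
```

Failing loudly on unconfigured purposes is the safety property that matters here: a plugin can never accidentally bill its calls to another task's key.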