Harry Foster at Siemens EDA published a technical argument that deserves attention: the productivity wall in RTL verification is not engine-limited anymore. Faster simulators, scalable formal tools, and bigger solvers have delivered most of their gains. What remains is the coordination overhead -- interpreting results, refining test intent, adjusting coverage strategies across multiple tools and iterations. That is a workflow intelligence problem, not a raw compute problem.
The agentic framing here is specific and useful. Unlike bolted-on AI that parses log files or generates scripts in isolation, the model Foster describes requires engine-native interfaces -- controlled entry points that let an agent invoke simulations, query coverage state, and analyze failures directly. That is the important architectural distinction. An agent that interacts with tool semantics (not just text output) can reason across verification iterations with continuity, which is exactly what coverage closure requires.
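To make that distinction concrete, here is a minimal Python sketch of the difference between scraping log text and calling structured, engine-native entry points: the agent invokes runs, gets coverage back as data, and carries that state across iterations. Every name here (SimulationEngine, CoverageState, closure_loop) is hypothetical and stands in for whatever a real simulator would expose -- this is not any vendor's actual API, just the shape of the interface Foster's framing implies.

```python
# Hypothetical engine-native interface -- illustrative names only,
# not a real EDA tool's API.
from dataclasses import dataclass, field


@dataclass
class CoverageState:
    """Structured coverage data the agent can reason over directly."""
    bins: dict[str, bool] = field(default_factory=dict)

    def holes(self) -> list[str]:
        """Coverage bins that are still unhit."""
        return [name for name, hit in self.bins.items() if not hit]


@dataclass
class SimulationEngine:
    """Stand-in for a simulator that exposes controlled entry points
    (invoke a run, query coverage) rather than raw log text."""
    coverage: CoverageState = field(default_factory=CoverageState)

    def run(self, test: str, seed: int) -> CoverageState:
        # A real engine would execute the test; this placeholder just
        # records deterministic coverage so the loop below is runnable.
        self.coverage.bins[f"{test}::cross_{seed % 3}"] = True
        self.coverage.bins.setdefault("fifo_overflow", False)
        return self.coverage


def closure_loop(engine: SimulationEngine, tests: list[str], max_iters: int = 5) -> list[str]:
    """Agent-style loop: invoke runs, inspect coverage state, decide what
    to do next -- the cross-iteration continuity coverage closure needs."""
    for i in range(max_iters):
        for test in tests:
            coverage = engine.run(test, seed=i)
        remaining = coverage.holes()
        if not remaining:
            break
        # An agent would refine test intent here based on the holes;
        # this sketch simply surfaces them.
        print(f"iteration {i}: unhit bins -> {remaining}")
    return engine.coverage.holes()


if __name__ == "__main__":
    leftover = closure_loop(SimulationEngine(), tests=["smoke", "random_stress"])
    print("still open after loop:", leftover)
```

The point of the sketch is the return types: the agent gets coverage as queryable state, not as text to re-parse, which is what lets it reason over what remains open rather than re-deriving it every iteration.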
The human-in-the-loop design choice is also notable. Foster is explicit that full autonomy is neither the goal nor desirable: sign-off authority stays with the engineer, while the AI accelerates execution and surfaces insights. This is the right call for a field where "good enough" is rarely binary and specifications are often incomplete. It also maps to how liability works at tapeout -- someone has to own the decision.
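As a rough illustration of that split, the sketch below routes any action with sign-off weight through an explicit approval gate: the agent can propose and attach evidence, but only a named engineer can change the verdict. The classes (Proposal, SignoffGate) are invented for illustration, assuming nothing about how Foster's or Siemens' implementation actually structures this.

```python
# Hypothetical human-in-the-loop gate -- illustrative names only.
from dataclasses import dataclass
from enum import Enum, auto


class Verdict(Enum):
    APPROVED = auto()
    REJECTED = auto()
    PENDING = auto()


@dataclass
class Proposal:
    """Something the agent wants that carries sign-off weight,
    e.g. waiving a coverage hole or declaring a block closed."""
    summary: str
    evidence: list[str]
    verdict: Verdict = Verdict.PENDING


class SignoffGate:
    """Agent actions route through here; only a named engineer decides."""

    def __init__(self, engineer: str):
        self.engineer = engineer
        self.log: list[tuple[str, str]] = []  # (who, what) audit trail

    def submit(self, proposal: Proposal) -> Proposal:
        # The agent can only queue the proposal and surface its evidence.
        self.log.append(("agent", f"proposed: {proposal.summary}"))
        return proposal

    def decide(self, proposal: Proposal, approve: bool) -> Proposal:
        # The decision -- and the liability -- stays with the engineer.
        proposal.verdict = Verdict.APPROVED if approve else Verdict.REJECTED
        self.log.append((self.engineer, f"{proposal.verdict.name}: {proposal.summary}"))
        return proposal


if __name__ == "__main__":
    gate = SignoffGate(engineer="j.doe")
    p = gate.submit(Proposal(
        summary="waive unhit bin fifo_overflow",
        evidence=["formal analysis reports the bin unreachable"],
    ))
    gate.decide(p, approve=True)
    for who, what in gate.log:
        print(who, "->", what)
```

The design choice the gate encodes is the one the article argues for: acceleration happens upstream of the decision, and the audit trail shows a person, not a model, owning it.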
What this piece does not address is the integration moat. Siemens EDA has the advantage of owning both the tools and the agentic layer, which means they can expose the engine-native interfaces Foster describes. An independent agentic layer sitting on top of a competitor's toolchain faces a very different integration problem. This is why the EDA vendors will likely own the agentic verification layer -- not because of AI capability, but because of tool access.