System-level SoC verification has a test content problem. Simulation capacity and emulation throughput are both expanding. The bottleneck is not compute; it is that verification engineers spend the majority of their time hand-writing scenario models for system-level tests, and manual test composition reliably misses corner cases that only emerge from the interaction of multiple subsystems under load. Breker and Moores Lab AI have built a flow that attacks this bottleneck directly: Moores Lab's VerifAgent reads a specification and generates scenario models, then Breker's Trek Test Suite synthesis converts those models into C and SystemVerilog tests targeting complex SoC corner cases on simulation and emulation platforms.
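Neither company publishes its internal formats, but the shape of the two-stage flow can be sketched. Everything below is hypothetical illustration, not actual VerifAgent or Trek output: the `model_from_spec` spec reader, the `ScenarioModel` structure, and the emitted C test are all invented for this sketch.

```python
# Hypothetical sketch of a spec -> scenario model -> C test flow.
# None of these names come from Breker or Moores Lab AI tooling.
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str        # e.g. "dma_op"
    resource: str    # subsystem the action exercises

@dataclass
class ScenarioModel:
    """Graph of actions derived from a specification."""
    actions: list = field(default_factory=list)

def model_from_spec(spec_lines):
    """Toy 'spec reader': derive one action per 'shall' requirement."""
    model = ScenarioModel()
    for line in spec_lines:
        if "shall" in line:
            subsystem = line.split()[0].lower()
            model.actions.append(Action(f"{subsystem}_op", subsystem))
    return model

def synthesize_c_test(model):
    """Emit a C test that drives the modeled actions in sequence."""
    body = "\n".join(f"    run_{a.name}();" for a in model.actions)
    return f"int main(void) {{\n{body}\n    return check_results();\n}}\n"

spec = ["DMA engine shall copy buffers between memories.",
        "Cache controller shall flush dirty lines on request."]
model = model_from_spec(spec)
print(synthesize_c_test(model))
```

The point of the sketch is the division of labor: the agentic layer turns natural-language requirements into a machine-readable scenario model, and the synthesis layer turns that model into executable tests, so neither stage is hand-written.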
The mechanism is synthesis-based, not template-based. Breker's existing technology generates test scenarios from formal models rather than hand-coded scripts, which is why, by the company's account, its tools already cover more than half of all RISC-V processor core verification globally. The VerifAgent layer extends that from individual cores to full SoC systems, where the design space is an order of magnitude larger. The partnership launched at DVCon US earlier this year. Customer adoption is reportedly accelerating in 2026, with Breker adding two more Magnificent 7 companies to its portfolio and doubling its total customer base.
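The design-space blowup is easy to make concrete. A template library ships a fixed set of scenarios; a model-based synthesizer can enumerate every legal interleaving of per-subsystem operation sequences, and that count grows combinatorially as subsystems are added. The sketch below is a toy illustration of that counting argument, not Breker's algorithm; the subsystem sequences and the brute-force filter are invented for scale-model purposes.

```python
# Hypothetical illustration of why synthesized scenarios outgrow templates:
# enumerate all orderings of per-subsystem ops that preserve each
# subsystem's internal order. Brute-force permutation filter -- only
# viable at toy scale, but the counts it produces are exact.
from itertools import permutations

def interleavings(seqs):
    """All orderings of the ops in seqs that keep each sequence in order."""
    ops = [(i, op) for i, seq in enumerate(seqs) for op in seq]
    results = set()
    for perm in permutations(ops):
        per_seq = {}
        for i, op in perm:
            per_seq.setdefault(i, []).append(op)
        # keep only permutations that respect every subsystem's order
        if all(per_seq.get(i, []) == list(seq) for i, seq in enumerate(seqs)):
            results.add(tuple(op for _, op in perm))
    return results

cpu = ["cpu_load", "cpu_store"]
dma = ["dma_start", "dma_done"]
irq = ["irq_raise"]

print(len(interleavings([cpu, dma])))       # 4!/(2!*2!) = 6
print(len(interleavings([cpu, dma, irq])))  # 5!/(2!*2!*1!) = 30
```

Two two-op subsystems already yield six distinct system-level orderings; adding a single one-op interrupt source multiplies that to thirty. A hand-maintained template library has to name each of those interleavings explicitly; a synthesizer gets them for free from the model, which is the core of the template-vs-synthesis distinction.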
The loser here is homebrew verification infrastructure. Large semiconductor teams have maintained custom test generation frameworks for system scenarios for years because commercial tools didn't reach that layer. If Breker's synthesis plus agentic spec-reading can generate the scenarios those frameworks produce, the case for maintaining the homebrew stack weakens fast. Teams still investing in custom test generation tooling should evaluate this flow before committing another engineer-year to infrastructure that is now commercially contested.