Two months ago Design Conductor built a 5-stage RISC-V CPU from scratch in 12 hours. The follow-up paper posted Thursday shows version 2.0 building VerTQ, a complete LLM inference accelerator, in 80 hours, fully autonomously, starting from the TurboQuant arXiv paper as the only spec. VerTQ has 5,129 FP16/32 processing elements in a 240-cycle pipeline, maps to FPGA at 125 MHz, and places and routes at 5.7 mm^2 in TSMC 16FF. In two months of model iteration, the system scaled to a task roughly 80x more complex than its predecessor's.
The architecture change in Design Conductor 2.0 is the mechanism behind the jump: a multi-agent harness that partitions the design into submodules, assigns each to a parallel sub-agent, and runs a coordinator that reconciles interfaces and triggers design rule checks (DRC). Prior agentic chip-design systems serialized RTL generation through a single context, so context length was the ceiling on task complexity. A partitioned multi-agent approach removes that ceiling and maps naturally onto how human design teams actually work: parallel sub-teams bound by interface contracts. The FPGA validation step is not a shortcut: it runs gate-level simulation and catches timing violations. VerTQ was verified on hardware, not just synthesized.
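The partition-dispatch-reconcile loop can be sketched in a few lines. This is a minimal illustration of the pattern, not the paper's implementation: the names (`Submodule`, `generate_rtl`, `reconcile`, `run_harness`) and the width-mismatch check standing in for DRC are all hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)
class Submodule:
    name: str
    ports: tuple  # interface contract: (port_name, bit_width) pairs

def generate_rtl(sub: Submodule) -> str:
    # Stand-in for a sub-agent writing RTL against its interface contract.
    decls = ", ".join(f"input [{w - 1}:0] {p}" for p, w in sub.ports)
    return f"module {sub.name}({decls}); endmodule"

def reconcile(subs) -> list:
    # Coordinator pass: flag ports whose widths disagree across submodules,
    # a toy proxy for the interface/DRC reconciliation step.
    seen, mismatches = {}, []
    for sub in subs:
        for port, width in sub.ports:
            if seen.setdefault(port, width) != width:
                mismatches.append(port)
    return mismatches

def run_harness(subs):
    # Sub-agents run in parallel; the coordinator reconciles afterward.
    with ThreadPoolExecutor() as pool:
        rtl = dict(zip([s.name for s in subs], pool.map(generate_rtl, subs)))
    return rtl, reconcile(subs)

subs = [Submodule("pe_array", (("clk", 1), ("data_in", 16))),
        Submodule("ctrl", (("clk", 1), ("data_in", 32)))]
rtl, errs = run_harness(subs)
# errs == ["data_in"]: the interface mismatch is caught before synthesis
```

The point of the structure is that no single agent ever holds the whole design in context; each sub-agent sees only its contract, and correctness across boundaries is the coordinator's job.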
The prior Design Conductor post noted the 12-hour RISC-V result as a calibration point, not a plateau (see the April 24 Signal). The 80x jump in complexity in two months bears that out. The constraint being removed is not RTL typing speed; it is the human bottleneck in translating a numerical algorithm into a synthesizable hardware description. If this trajectory holds (and there is no obvious structural reason it should not), the class of custom accelerator designs that requires a senior RTL engineer for the full loop shrinks significantly within 12-18 months. Fabless startups with an idea and a model are about to have a much shorter path to a tapeout-ready design.