The Accidental Comeback of Verilog
How Generative AI is ending the HDL vs HLS debate
In 2022, a graduate-level chip design course at UCLA gave me a glimpse of what felt like the future. Instead of writing RTL in Verilog, we were required to use a Python framework that generated synthesizable Verilog under the hood. I had already been using Verilog for a few years by this point, so this new workflow felt strange at first. But after a few weeks, something clicked. I started to enjoy it. So much so that I made my final project a head-to-head comparison: plain Verilog versus this Python-based approach across several types of RTL IP. The results seemed clear. Yes, the Verilog implementations had slightly better power, performance, and area (PPA) metrics. But the Python framework won on almost everything else that mattered to an RTL designer: it scaled better, made reuse trivial, and produced RTL that was far easier to read and reason about.
I walked away convinced that this was the direction chip design was headed. Higher-level frameworks would replace handwritten Verilog. That’s where I was ready to place my bets.
Lucky for me, I didn’t have the money to bet - because I was completely wrong.
Verilog Is Fine. What Next?
If you have been reading Chip Insights long enough, you’ll know about my two-part HDL saga documenting the story of how Verilog became what it is today. I ended this story in the mid-2000s, when EDA companies started to push for the widespread adoption of SystemVerilog. From this point forward, the Verilog family (plain Verilog and SystemVerilog) emerged as clear winners in the HDL wars. However, there was a parallel storyline which I did not cover in that post.
The story of HDLs was closely related to the emergence of a new simulation technique called behavioral simulation - as designs got bigger, gate-level simulations became slow and unusable for routine functional checks. Compiled behavioral simulation arrived with the Verilog Compiled Simulator (VCS), built by Chronologic Simulation and later acquired by Synopsys - Verilog was first compiled into C, and the resulting C program was then used to run simulations. Great story. The bedrock of modern logic simulation. But that’s not the point here. If you can convert an HDL like Verilog into a high-level language like C, why can’t we start with a high-level language directly?
In 1998, Forte Design Systems came up with a tool called Cynthesizer (a pretty clever name, I must add), which allowed a designer to build synthesizable logic using SystemC. In 2001, Sony became the first company to tape out a chip using this approach, which would later be known as High-Level Synthesis (HLS). As transistor scaling continued in the 2000s, design cost and complexity grew rapidly. Even the most modern HDL of the time, SystemVerilog, could not keep designer productivity on pace with Moore’s law. This made HLS a compelling candidate.
There are two primary reasons why HLS was so attractive. The first is obvious: high-level languages provide constructs that HDLs lack, which boosts the productivity of chip designers. The second reason was even more compelling. HLS represented a fundamental “shift left” strategy to move hardware design earlier in the development cycle and closer to software design. In other words, anyone who can code should be able to design the hardware to run their code. Believers in HLS shared the same optimism that I did when I used Python for RTL design. In fact, in the mid-2000s, Professor Jason Cong built the xPilot HLS system in the same UCLA building. xPilot pioneered a lot of algorithmic innovations that made HLS more than just a convenience - the PPA metrics started to improve as well. xPilot was commercialized as AutoESL Design Technologies, which was acquired by Xilinx (now AMD) in 2011.
So far, this sounds more like a success story of HLS than what my title suggests. So what happened? HLS definitely proved to be useful in certain cases - like rapid prototyping and deployment on FPGAs. (The core technology of xPilot powers Vivado HLS, which is still widely used today.) However, HLS could never fully replace Verilog, because its promise was too good to be true. Even today, HLS-generated designs often consume more hardware resources and achieve significantly worse clock frequencies than handwritten RTL. More importantly, HLS never delivered the “anyone who can code” vision. The fundamental mismatch between software-oriented C/C++ and hardware structure means that HLS tools still can’t reliably produce synthesizable designs that integrate well with existing EDA flows, which remain overwhelmingly Verilog-centric.
As a compromise, the idea of a Hardware Construction Language (HCL) was born. While HLS tools were intended to automatically infer the best microarchitecture from high-level algorithms, HCLs are high-level language extensions that allow designers to express hardware structure explicitly while leveraging powerful software constructs. Chisel, embedded in Scala, was developed at UC Berkeley in 2012 and is one of the most successful examples of an HCL. By the way, the Python framework I mentioned at the start of this post was also an HCL.
While HLS and HCLs were not perfect, they seemed like early versions of what the future of chip design would look like. HLS algorithmic innovations continue to improve PPA metrics, and a lot of new processor designs are built using frameworks like Chisel. All this while, the Verilog family has not made any major strides. If I had written this post a few years ago, this would be the end of the HDL story. But almost overnight, a new playbook has emerged.
You Can Have Your Cake and Eat It Too
The original promise of HLS and HCLs was never about replacing Verilog for the sake of it. It was about building chips faster: by increasing designer productivity, reuse, and scalability. These benefits came with tradeoffs like worse PPA and painful EDA integration. As they say, there are no free lunches.
Generative AI changes this equation in a fundamental way.
With GenAI-assisted Verilog, you get many of the same benefits that HLS and HCLs aimed to provide, without abandoning the Verilog ecosystem. The productivity gains are the most obvious. Writing boilerplate RTL, parameterizing modules, refactoring interfaces, or instantiating complex microarchitectural patterns are all tasks that LLMs handle remarkably well. What once justified a higher-level language now often collapses into a single prompt. You still end up with Verilog, but you get there faster, with fewer errors, and with much less cognitive overhead.
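To make that concrete, here is a rough sketch of the kind of thing I mean. The module and the prompt are invented for illustration (this is not the output of any particular tool), but a one-line request like “write a valid/ready pipeline register with a configurable data width” gets you boilerplate along these lines:

    // Hypothetical illustration: parameterized register slice of the sort an
    // LLM can produce from a one-line prompt. Names and widths are made up.
    module pipe_reg #(
        parameter WIDTH = 32
    ) (
        input  wire             clk,
        input  wire             rst_n,
        // upstream interface
        input  wire             in_valid,
        output wire             in_ready,
        input  wire [WIDTH-1:0] in_data,
        // downstream interface
        output reg              out_valid,
        input  wire             out_ready,
        output reg  [WIDTH-1:0] out_data
    );
        // accept a new word whenever the output register is empty or draining
        assign in_ready = !out_valid || out_ready;

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                out_valid <= 1'b0;
                out_data  <= {WIDTH{1'b0}};
            end else if (in_ready) begin
                out_valid <= in_valid;
                out_data  <= in_data;
            end
        end
    endmodule

Nothing here is hard, which is exactly the point: this is the code that used to eat up a designer’s afternoon in boilerplate, and now it barely costs a prompt.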
More interestingly, GenAI quietly revives one of the most compelling arguments for HLS: the “shift left.” While this shift was long promised, HLS never reliably delivered on it. PPA was difficult to estimate accurately at the HLS abstraction, and those estimation errors were often more costly than starting from RTL in the first place.
Generative AI flips this dynamic entirely. Instead of postponing RTL, it accelerates its creation. High-level specifications, performance targets, and even informal design intent can now be translated directly into functional RTL models early in the design cycle. Hardware and software can evolve in parallel, not because RTL is avoided, but because it is cheap to generate, modify, and discard. In other words, GenAI enables a shift left, without a shift away from Verilog.
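As a toy example (entirely hypothetical, and deliberately not tuned for PPA), suppose the informal intent is “raise an interrupt whenever the sum of the last four samples exceeds a programmable threshold.” A quick behavioral module like the one below is enough for software and verification teams to start working against, and cheap enough to throw away once the real microarchitecture is settled:

    // Hypothetical sketch of spec-to-RTL translation for early bring-up.
    // Intent: interrupt when the sum of the last four samples exceeds a
    // programmable threshold. Functional, not optimized.
    module thresh_irq #(
        parameter W = 12
    ) (
        input  wire           clk,
        input  wire           rst_n,
        input  wire           sample_valid,
        input  wire [W-1:0]   sample,
        input  wire [W+1:0]   threshold,
        output reg            irq
    );
        reg [W-1:0] window [0:3];   // last four samples
        reg [W+1:0] sum;            // wide enough for 4x the max sample
        integer i;

        always @(posedge clk or negedge rst_n) begin
            if (!rst_n) begin
                for (i = 0; i < 4; i = i + 1) window[i] <= {W{1'b0}};
                sum <= {(W+2){1'b0}};
                irq <= 1'b0;
            end else if (sample_valid) begin
                // shift in the new sample and update the running sum
                for (i = 3; i > 0; i = i - 1) window[i] <= window[i-1];
                window[0] <= sample;
                sum <= sum - window[3] + sample;
                irq <= (sum - window[3] + sample) > threshold;
            end
        end
    endmodule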
Why Verilog Was Ready for This Moment
Verilog didn’t reinvent itself for the GenAI era. The GenAI era quietly reinvented itself around Verilog.
First, Verilog sits at the center of the EDA ecosystem. Decades of tool development have gone into taking Verilog as input and squeezing out the best possible PPA. Synthesis, place-and-route, timing closure, power analysis: these flows are deeply optimized for RTL written in Verilog and SystemVerilog. The switching costs are enormous.
Second, in the already limited chip design data available to train large language models, Verilog dominates. This creates a powerful flywheel effect. Verilog is used to train LLMs, which then generate more Verilog, further reinforcing its position as the language of digital design.
Today, we are seeing a combination of these two factors in the form of closed-loop RTL design agents. Verilog code is both a cycle-accurate and a formally checkable artifact, which can be used to generate accurate reward signals to improve the AI. (I’m not going into the details of these systems here, but all signs point toward such RL environments becoming the future.) The implication is that GenAI won’t just lead to more Verilog, it will lead to better Verilog. Over time, this creates a compounding effect - Verilog will become the most optimized way to design chips, even if that Verilog is generated by an AI agent.
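To give a flavor of what such a reward signal could look like (again a hypothetical sketch, written against the register-slice example from earlier in this post), a few SystemVerilog assertions are enough to turn “the generated RTL is correct” into a pass/fail result that a simulator or formal tool can hand back to the agent:

    // Hypothetical checker for the pipe_reg sketch above (assumes the default
    // 32-bit width). Each failing assertion is a concrete negative signal.
    module pipe_reg_checks (
        input wire        clk,
        input wire        rst_n,
        input wire        in_valid,
        input wire        in_ready,
        input wire        out_valid,
        input wire        out_ready,
        input wire [31:0] in_data,
        input wire [31:0] out_data
    );
        // Once valid, output data must hold steady until it is accepted.
        property no_drop;
            @(posedge clk) disable iff (!rst_n)
                out_valid && !out_ready |=> out_valid && $stable(out_data);
        endproperty
        assert property (no_drop);

        // Every accepted input word must appear on the output one cycle later.
        property pass_through;
            @(posedge clk) disable iff (!rst_n)
                in_valid && in_ready |=> out_valid && (out_data == $past(in_data));
        endproperty
        assert property (pass_through);
    endmodule

    // Attached without touching the design itself, e.g.:
    //   bind pipe_reg pipe_reg_checks u_checks (.*);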
In the world I just described, HCLs and HLS tools seem irrelevant. If Verilog is cheap to generate, easy to iterate on, and continuously optimized by feedback, the incentive to move away from it fades. Verilog survived long enough to see the light at the end of the tunnel. Now, the future looks brighter than ever.


Ok perfect, I never learned HLS and I guess I never will. Long live Verilog!