ENIAC and the Workload Problem - Part 1
Why Studying ENIAC Matters - Especially Now
Disclaimer: Opinions shared in this and all my posts are mine, and mine alone. They do not reflect the views of my employer(s) and are not investment advice.
This is the first post in a series aimed at understanding the ENIAC architecture and the lessons that remain relevant in today’s computing landscape. In this post, I introduce the ENIAC story and explain why it still matters.
The First Workload Problem
On December 7, 1941, the attack on Pearl Harbor became a key moment in computing history. The US, having entered World War II, tasked the Army's Ballistic Research Laboratory (BRL) with a project: improve the accuracy of artillery strikes. This effort led to the creation of artillery firing tables, one of the first large computing workloads.
When I was in school, some of our mathematics and physics exams did not allow the use of calculators. Instead, when complex arithmetic problems were involved, we were allowed to bring in something called log tables: collections of precalculated values used in place of a calculator. (I never understood the point then, but maybe it was all for this moment.) An artillery firing table was essentially the Army's version of a log table: instead of telling you what 10.14 ÷ 2.38 equals, it turned complex ballistic calculations into simple lookups that a gun crew could use to quickly determine the correct elevation and angle before firing at a target.
Unlike arithmetic, which does not change with time, firing tables varied with gun type, barrel wear, and environmental conditions. To generate the data for each of these situations, ballistic trajectories had to be solved numerically. Human computers could not produce these tables with the speed and accuracy the war demanded. BRL used a Differential Analyzer, the state-of-the-art computing machine of the time, to run these calculations, bringing the computation time down from 3 days to just 15 minutes per trajectory. This was still too slow, and the results were often inaccurate, requiring human verification. The most important computational workload of the time was still waiting for a faster machine.
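To make "solved numerically" concrete, here is a minimal sketch in Python. This is my own illustration, not ENIAC's actual method: the real tables used far more detailed drag models and corrections. It steps a shell's trajectory forward in small time increments, which is essentially the calculation a human computer carried out by hand, thousands of times over:

```python
import math

def trajectory(v0, angle_deg, drag_coeff=0.00005, dt=0.01, g=9.81):
    """Integrate a 2D ballistic trajectory with a crude quadratic drag
    term, stepping forward in time. Returns (range_m, flight_time_s)."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    t = 0.0
    while y >= 0.0:  # step until the shell returns to ground level
        v = math.hypot(vx, vy)
        # Drag decelerates the shell opposite to its velocity.
        ax = -drag_coeff * v * vx
        ay = -g - drag_coeff * v * vy
        x += vx * dt
        y += vy * dt
        vx += ax * dt
        vy += ay * dt
        t += dt
    return x, t

# A firing table is this computation repeated over a grid of elevations:
table = {ang: round(trajectory(450.0, ang)[0]) for ang in range(15, 60, 5)}
```

Every entry in a firing table required one such integration, and every change in gun, shell, or weather required a new grid of entries. It is easy to see why this workload overwhelmed human computers.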
In 1943, the Army approved a proposal from J. Presper Eckert and John Mauchly at the University of Pennsylvania to build a machine with a completely new architecture. This computer, called the Electronic Numerical Integrator and Computer (ENIAC), had one goal: calculate firing tables dramatically faster to support the US war effort. On September 2, 1945, just two years after the ENIAC project was approved, Japan surrendered, officially marking the end of World War II. ENIAC served its purpose and earned its place in history as one of the most influential computers.
Well… that’s not quite how the story goes. Sure, ENIAC was commissioned to calculate firing tables, and its architecture genuinely improved the speed of trajectory calculations, reducing the time from 15 minutes on the Differential Analyzer to about 30 seconds. ENIAC also made giant strides in setup time and accuracy when running ballistic calculations. But one detail surprised me: ENIAC was announced complete in February 1946. This means ENIAC was never used to compute a single entry in the artillery firing tables used during World War II. From 1943 to 1945, ENIAC experienced major delays and went significantly over its initial budget. It also consumed on the order of 150-200 kW of power and, reportedly, sometimes strained the power grid in the Philadelphia area. When the war ended, ENIAC was just an expensive infrastructure investment whose primary workload was no longer important.
Why ENIAC Still Matters
Despite a lack of demand for ballistic calculations, the ENIAC project continued after the war. In fact, this is where the ENIAC story really starts, and it is what motivated me to write this post.
ENIAC lived through what I would call an “interesting architectural life.” It was rushed into existence to handle one specific workload. But that’s not why we remember ENIAC today. ENIAC is widely regarded as the first general-purpose electronic computer, and one of the first machines that could be reprogrammed to run different problems. It went on to run weather models, nuclear simulations, and scientific computations effectively - workloads well outside the military applications it was designed for. At one point, demand for ENIAC was so high that renting it for two days cost as much as buying a new car.
This period of immense success also brought financial and political pressure on ENIAC to live up to the tag of a “general purpose computer.” That’s when the wheels started to come off, and the architectural deficiencies became evident. Eventually, it became clear that the ENIAC architecture could no longer be repurposed to handle emerging workloads, and it had to be retired. The computing industry is less than 100 years old. That may sound like a long time, but a new architecture takes many years to be adopted, and even longer to be completely abandoned. The ENIAC story is one of the few that has come full circle, and it is also well documented. This presents a rare opportunity to analyze decisions that worked and, more importantly, decisions that failed.
There is another reason why ENIAC matters. Fundamentally, ENIAC looks nothing like modern computers. Many aspects of ENIAC are no longer directly relevant today, starting with the fact that its fundamental building block is a vacuum tube, not a transistor. However, because ENIAC essentially gave birth to modern computing, and because it is simple enough by today’s standards, we can try to understand every aspect of its design: the hardware architecture, software decisions, infrastructure challenges, supply chain issues, and interpersonal dynamics involved. These details, in addition to making for a great story, are important for explaining both the good and bad decisions that shaped ENIAC. In my opinion, attempting the same exercise with a modern computer, while more relevant, is almost impossible due to the sheer complexity involved.
This is why ENIAC is worth your time. But why now?
The New Infrastructure Buildout
Studying ENIAC is not just an academic exercise. Today, we are racing to build advanced computers that fill warehouses. They consume gigawatts of power, cost tens of billions of dollars to build, and are optimized for a single dominant workload: large-scale LLM training. This is not very different from how ENIAC began.
There are a lot of debates happening around the world about whether this investment is justified. I don’t see a lot of value in these debates because I think we are well past that point - this infrastructure buildout is now inevitable. But if there is one thing the ENIAC story tells us, it is this: the workload won’t remain the same. When a project attracts large investment, especially from traditionally “non-tech” sources like sovereign funds, it is expected to outlive its first workload. Cities do not build large bridges merely to connect two houses on opposite sides of a river; these projects are funded to enable large-scale economic growth in many different ways.
Language models are the first dominant workload of today’s compute infrastructure, and they are expected to keep it fully utilized in the near term. But I am certain we will see new workloads soon. It could be as simple as a different approach to advancing AI. Or it could be something well beyond AI - astrophysics, quantum algorithms, genomics. Regardless of which buzzword you choose to believe in, two important questions emerge:
Which compute infrastructure can last the longest before we need a clean reset?
How effectively can we map future workloads onto the infrastructure we are building today?
By studying ENIAC, I hope to build a mental model to help answer these questions. As Nvidia CEO Jensen Huang put it, even if you are not the first one to catch the metaphorical apple that falls from the tree, this mental model can help you become the first person to pick up the apple from the ground.
That concludes Part 1 of this series. If you enjoyed this post, follow along for the upcoming posts in this series, starting with a deep dive into the ENIAC architecture.
Sources:
My primary reference for this study is the book “ENIAC in Action”, which goes into an impressive level of detail about many aspects of this computer.
If you know any other resources to help with this project, leave a comment!