Three Rules for AI Integration
From Buzzword to Combat Power
In my last post, I argued that the Army’s problems with AI start with the plumbing: radios, networks, and systems that can’t reliably move information at the speed of relevance. This time I want to shift from problems to principles. Here are the rules I use to judge whether an AI project is worth pursuing. They’re not exhaustive, but they help separate the projects with a fighting chance from those destined to burn time and money.
Build for the Boots, Not the Brass — design tools that junior Soldiers can actually use.
Spin the Flywheel — ship a minimally useful product, collect data, then improve it.
Borrow Boldly — adapt proven tools instead of trying to outspend (or outsmart) Big Tech.
These rules come from both study and practice. At Carnegie Mellon, I studied how to take AI out of the lab and into production. At the Army’s AI Integration Center, I applied those lessons while building CamoGPT, a tool that grew from a one-Soldier side project to tens of thousands of daily users. Along the way, I saw plenty of “AI ideas” that sounded promising but proved to be dead ends — and others that quietly took root and scaled.
Rule One: Build for the Boots, Not the Brass
Software is an information good: expensive to produce the first time, cheap to reproduce a million times. Once built, a tool can be put in the hands of every Soldier at essentially no extra cost. Even better, the more people who use it, the more valuable it becomes. A phone is useless alone; indispensable when everyone has one. The same is true for battlefield software.
The Army often violates this logic. We spend huge sums building bespoke systems aimed at flag officers, while the people actually producing and using the data limp along with spreadsheets and workarounds. Call it Dashboard Disease.
A 2019 article on the “Army Leader Dashboard” bragged about unifying 700 databases to instantly answer questions like “how many tanks do we have?” In explaining the challenge of such a seemingly simple query, the authors almost asked the right question: “Sometimes the single, authoritative source for one type of data is a spreadsheet on a supply sergeant’s desk. So how might senior leaders access that information when it is needed?”
But notice the pivot from the sergeant’s struggle to the general’s snapshot. Why is the sergeant hand-curating a local file in the first place? Because the official system, GCSS-Army, is so clunky and over-gated that only two people in my company can even log in. Of course she uses Excel — the “enterprise” system practically forces her to.¹
The lesson is simple: solving the general’s problem doesn’t solve the sergeant’s. But solve the sergeant’s problem with a system she can actually use, and the data she enters will flow upward — the general’s problem solves itself.
Rule Two: Spin the Flywheel
In tactics, we’re taught to “make contact with the smallest element first.” The point is to probe, learn, and retain freedom to maneuver before committing the full force. AI integration should work the same way.
Machine learning products improve through use. Industry calls this the flywheel effect. A minimally useful product generates data. That data trains better models. Better models attract more use. And more use generates more data.
You’ve seen this before. Early Facebook photo tagging was manual — you clicked a face, typed a name. That created a dataset. The next version suggested names, which you confirmed or denied. That created a bigger dataset. Eventually, the system tagged faces automatically. What began as a crude tool spun into something seamless. The takeaway is clear: don’t just innovate — iterate.
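
The loop itself is simple enough to sketch. The snippet below is a toy, not a real tagger: the dict-backed “model,” the retrain step, and all the names are my own stand-ins. What matters is the shape of the loop: every interaction appends a labeled example, and the next model version trains on the larger set.

```python
# Toy flywheel: each user interaction yields a labeled example, and the
# "model" is rebuilt from the growing dataset. The dict is a stand-in
# for a real model; the loop structure is the point.
labeled_data: list[tuple[str, str]] = []   # (input, confirmed_label) pairs

def suggest(model: dict[str, str], x: str) -> str | None:
    return model.get(x)                    # stand-in for inference

def confirm_or_correct(suggestion: str | None, truth: str) -> str:
    return truth                           # the user confirms or retypes

def retrain(data: list[tuple[str, str]]) -> dict[str, str]:
    return dict(data)                      # stand-in for training

model: dict[str, str] = {}
for x, truth in [("face_1", "Alice"), ("face_2", "Bob"), ("face_1", "Alice")]:
    label = confirm_or_correct(suggest(model, x), truth)
    labeled_data.append((x, label))        # use generates data...
    model = retrain(labeled_data)          # ...and data improves the model

print(suggest(model, "face_1"))            # 'Alice': the wheel is spinning
```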

The military rarely follows this pattern. Instead, we try to build massive, “finished” systems in a vacuum and then push them onto users. Adoption is low. Data is bad. The wheel never spins.
There’s another way. In my last post, I described an AI agent inside Maven that scraped chat logs for grid coordinates and plotted them on a map — a bit of modern fieldcraft, built to solve a local problem.
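
To make that concrete, here is roughly what the extraction step could look like. This is a minimal sketch, not the actual Maven agent; the loose MGRS regex and the function name are my own illustration.

```python
import re

# Loose MGRS pattern: grid zone (e.g., 18S), 100 km square (e.g., UJ),
# then an even run of digits split between easting and northing.
MGRS_PATTERN = re.compile(
    r"\b(\d{1,2}[C-HJ-NP-X])\s?"     # grid zone designator (I and O excluded)
    r"([A-HJ-NP-Z]{2})\s?"           # 100 km square identifier
    r"(\d{2,10})\b"                  # easting + northing digits
)

def extract_grids(message: str) -> list[str]:
    """Return candidate MGRS strings found in a chat message."""
    hits = []
    for zone, square, digits in MGRS_PATTERN.findall(message):
        if len(digits) % 2 == 0:     # easting and northing are equal length
            hits.append(f"{zone}{square}{digits}")
    return hits

print(extract_grids("enemy BMP spotted vic 18SUJ23370651, moving north"))
# ['18SUJ23370651']
```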
Now imagine scaling that idea across the enterprise. Start quietly: the AI makes guesses in the background, invisible to users. Those guesses are compared against what staff actually plot, producing a dataset. The first visible version suggests plots, which staff accept, reject, or correct. Each interaction becomes more data, and the wheel accelerates. Over time, the system gets better — because Soldiers are using it.
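
The machinery behind that silent phase is mostly disciplined logging. A sketch, assuming the model’s guesses and the humans’ plots both arrive as lists of MGRS strings; the file name and record fields are illustrative, not any fielded system’s schema.

```python
import json
import time

def shadow_log(message: str, model_guesses: list[str], human_plots: list[str],
               path: str = "shadow_eval.jsonl") -> None:
    """Append one comparison record: the model's silent guesses beside
    what staff actually plotted. The running agreement rate becomes the
    go/no-go metric for turning visible suggestions on."""
    record = {
        "ts": time.time(),
        "message": message,
        "model": sorted(model_guesses),
        "human": sorted(human_plots),
        "match": sorted(model_guesses) == sorted(human_plots),
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

shadow_log("enemy BMP vic 18SUJ23370651", ["18SUJ23370651"], ["18SUJ23370651"])
```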
This is how industry ships software. The military should do the same.
Rule Three: Borrow Boldly
The Department of War is not going to outspend Silicon Valley on AI. Training a frontier model costs hundreds of millions of dollars. Top researchers are signing deals that look more like NBA contracts than government GS pay scales. Even outside the elite tier, the military struggles to recruit and retain technical talent; tens of thousands of cyber billets sit unfilled.
This doesn’t mean we’re doomed. It means our lane is different. Instead of chasing moonshots, we should excel at integration: taking existing models, frameworks, and software, and adapting them to military problems. The opportunity is not invention, but recombination.
Training a new foundation model from scratch? Out of reach. But using lightweight fine-tuning to ground a commercial model in Army jargon and workflows? Achievable. Building a retrieval system so that an off-the-shelf model can answer questions with authoritative Army data? My team did that last year. It wasn’t always glamorous, but it worked. And that’s the kind of approach the Army can actually scale: cheap, fast, and adapted from what already exists.
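
For readers who want “retrieval system” made concrete, here is the pattern in miniature. This is a sketch under stated assumptions, not our production code: corpus stands in for chunked Army publications, scoring is naive word overlap where a real system would use embeddings, and llm is whatever commercial model you already have behind a callable.

```python
# Minimal retrieval-augmented generation, under toy assumptions:
# `corpus` is a list of passages from authoritative Army publications,
# scoring is naive word overlap (real systems use embeddings), and
# `llm` is any callable that maps a prompt string to an answer string.
def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Rank passages by how many words they share with the query."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: len(q & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query: str, corpus: list[str], llm) -> str:
    """Ground an off-the-shelf model in retrieved doctrine excerpts."""
    context = "\n---\n".join(retrieve(query, corpus))
    prompt = ("Answer using only the excerpts below.\n"
              f"{context}\n\nQuestion: {query}")
    return llm(prompt)
```

The design choice worth noticing: every piece here already exists. The model is rented, the retrieval pattern is published, and the only genuinely new work is pointing both at Army data.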
The Prescription
The three rules add up to a simple mandate: stop reinventing the wheel, and start turning the flywheel.
The clearest yardstick is simple: use. If Soldiers aren’t logging in daily, the system has already failed. Low adoption usually means one of three things: it’s not useful to the average Soldier in the first place (Rule 1), it’s not getting better as you go (Rule 2), or you’re so busy trying to invent (or reinvent) the wheel that you don’t have users to begin with (Rule 3).
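
Counting use is cheap. A toy example, assuming each login is recorded as a (user, date) pair; the record format is illustrative.

```python
# Toy adoption metric: daily active users from (user_id, date) login records.
from collections import defaultdict

def daily_active_users(logins: list[tuple[str, str]]) -> dict[str, int]:
    users_by_day: defaultdict[str, set[str]] = defaultdict(set)
    for user_id, date in logins:
        users_by_day[date].add(user_id)
    return {day: len(users) for day, users in sorted(users_by_day.items())}

print(daily_active_users([("sgt_a", "2025-01-06"), ("sgt_b", "2025-01-06"),
                          ("sgt_a", "2025-01-07")]))
# {'2025-01-06': 2, '2025-01-07': 1}
```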
There is one caveat. Some systems see use only because Soldiers are forced to use them. In those cases, the real metric should be feedback: frequent, automated, public. Without it, vendors have no incentive to improve, and program offices have no incentive to admit failure. Soldiers pay the price.
The Army doesn’t need more systems designed from the top down. It needs tools Soldiers want to use every day — systems that earn adoption because they make life easier, improve because they’re used, and spread because their value multiplies across the force. That’s how “AI-enabled decision advantage” becomes more than a buzzword — and starts to look like combat power.
¹ This story is, of course, more complicated. The work by the Program Executive Office for Enterprise Information Systems (PEO EIS) eventually produced the Army’s Vantage data platform, which has been a real step forward for data visibility — useful to leaders from company commanders like me up to the Chief of Staff. But Vantage still draws its data feeds from the same clunky systems my supply sergeant wrestles with. The general may now get data faster, but not more accurately, because the sergeant is still stuck with a system that’s unusable at worst, cumbersome at best.