Elon Musk’s so-called Department of Government Efficiency (DOGE) operates on a core underlying assumption: The United States should be run like a startup. So far, that has largely meant chaotic firings and an eagerness to steamroll laws. But no pitch deck in 2025 is complete without an overdose of artificial intelligence, and DOGE is no different.
AI itself doesn’t reflexively deserve pitchforks. It has genuine uses and can create real efficiencies. It’s not inherently untoward to introduce AI into a workflow, especially if you’re aware of and able to manage around its limitations. It’s not clear, though, that DOGE has embraced any of that nuance. If you have a hammer, everything looks like a nail; if you have the most access to the most sensitive data in the country, everything looks like an input.
Wherever DOGE has gone, AI has been in tow. Given the opacity of the organization, much remains unknown about how exactly it’s being used and where. But two revelations this week show just how extensive, and potentially misguided, DOGE’s AI aspirations are.
At the Department of Housing and Urban Development, a college undergrad has been tasked with using AI to find where HUD regulations may go beyond the strictest interpretation of underlying laws. (Agencies have traditionally had broad interpretive authority when legislation is vague, although the Supreme Court recently shifted that power to the judicial branch.) This is a task that actually makes some sense for AI, which can synthesize information from large documents far faster than a human could. There’s some risk of hallucination, more specifically of the model spitting out citations that don’t actually exist, but a human needs to approve these recommendations regardless. This is, on one level, what generative AI is actually quite good at right now: doing tedious work in a systematic way.
There’s something pernicious, though, in asking an AI model to help dismantle the administrative state. (Beyond the fact of it; your mileage will vary there depending on whether you think low-income housing is a societal good or you’re more of a Not in Any Backyard type.) AI doesn’t actually “know” anything about regulations or whether they comport with the strictest possible reading of statutes, something that even highly experienced lawyers will disagree on. It needs to be fed a prompt detailing what to look for, which means you can not only work the refs but also write the rulebook for them. It is also exceptionally eager to please, to the point that it will confidently make things up rather than decline to answer.
If nothing else, it’s the shortest path to a maximalist gutting of a major agency’s authority, with the prospect of scattered bullshit thrown in for good measure.
At least it’s an understandable use case. The same can’t be said for another AI effort associated with DOGE. As WIRED reported Friday, an early DOGE recruiter is once again looking for engineers, this time to “design benchmarks and deploy AI agents across live workflows in federal agencies.” His goal is to eliminate tens of thousands of government positions, replacing them with agentic AI and “freeing up” workers for ostensibly “higher-impact” tasks.
















































