
Preface. You may remember Code for America, a movement founded in 2009 that used technology to improve government services and make them more efficient and accessible. This post is a review of Pahlka’s 2023 book “Recoding America,” about how they made government agencies more efficient and why it is so hard to do.
If only Trump had sent in Code for America to make government more efficient. DOGE did the exact opposite: a corrupt chainsaw, or rather an atomic bomb, blowing up agencies by firing employees and dismantling them, making them far less efficient.
So far they have damaged:
- United States Agency for International Development (USAID)
- Consumer Financial Protection Bureau (CFPB)
- Department of Education
- Department of Health and Human Services (HHS), with plans to cut 20,000 positions (about 25% of the agency)
- Department of Agriculture (USDA)
- Social Security Administration (SSA)
- Internal Revenue Service (IRS), which issued reductions in force (RIFs), starting by significantly cutting the Office of Civil Rights and Compliance
- General Services Administration (GSA), which is to close hundreds of federal buildings and office spaces
- Department of Homeland Security (DHS)
- Department of Housing and Urban Development (HUD), with many contracts and employees ended
- National Oceanic and Atmospheric Administration
- Federal Deposit Insurance Corporation
- AmeriCorps
- Environmental Protection Agency
- Department of Defense
- CIA
- Department of Energy
- FEMA
- Office for Civil Rights
- Institute of Museum and Library Services
- Department of Justice
- NASA
- Ombudsmen
- Department of the Interior
- Department of Labor
- Transportation Security Administration
- National Science Foundation
- Small Business Administration
- Department of Transportation
- National Endowment for the Humanities
- National Security Council
- Veterans Affairs
(Shao E 2025 The federal work force cuts so far, agency by agency. New York Times)
And DOGE has harmed you too. The Consumer Financial Protection Bureau (CFPB) has returned $21 billion to 205 million Americans, plus much more by curbing illegal and predatory practices by banks, mortgage lenders, credit card companies, debt collectors, and more. Who knows how much more you stand to lose, perhaps even your life, with cuts to health agencies.
Alice Friedemann, www.energyskeptic.com. Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy,” “When Trucks Stop Running: Energy and the Future of Transportation,” “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.” Women in Ecology. Podcasts: WGBH, UCSC, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts
***

Pahlka J (2023) Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better
Thanks to Proposition 64, also known as the Adult Use of Marijuana Act, marijuana use would no longer mean prison, as it had meant for so many Californians for so long. Drug-related arrests had been a major source of the growth in the state’s prison population, which now numbered around 130,000. Reducing that number was a big hope of the law’s proponents, but current incarcerations were only part of California’s problem. There were also a staggering eight million Californians living with criminal records from prior offenses. Many of those people now carried felony convictions for an act the state no longer even considered illegal.
Having a felony on your record can make it very hard to get a job. Most employers won’t even consider you, and fields that require any kind of occupational licensing, from medical assistance to cosmetology, become off-limits. Former felons also can’t join the military. And it’s not just employment. With a felony record, it’s hard to rent an apartment or get a home loan. Veterans with felony convictions are denied certain retirement benefits. Students with drug felonies can’t deduct their tuition from their taxes, the way others can.
Even minor pleasures like volunteering at your kid’s school are off the table. Researchers have identified thousands of “collateral consequences” of having a criminal record. That’s a lot of barriers to building a stable life, and it seems especially unfair if you’re someone who did time for something that’s no longer even a crime.
REMOVING CONVICTIONS FOR MARIJUANA
Prop 64 made past marijuana offenses eligible for expungement from people’s criminal records. Once expunged, those convictions wouldn’t show up on the background checks that kept these folks out of most jobs and limited their activities. But while this sounds like a simple matter of changing a field in a database, it was far more complicated in practice. The information needed for expungement could span court records, records from the arresting agency (which might be local, state, or federal police), jail and prison files, prosecutors’ records, and probation records.
If you had a conviction in more than one county, you would need to get all of the relevant materials in all the applicable jurisdictions. From those documents, you’d need to identify the docket numbers from your cases, then figure out which forms to complete and where to find them. You’d need to draft motions for each case and attach copies of the verified criminal history to each motion. You’d need to find out how many copies to make of these documents, where to send them, and how to pay the associated fees.
Prop 64 made hundreds of thousands of people statewide eligible to have their felony convictions expunged. But given the marathon of logistics, hardly anyone even tried to start the process.
A year after the passage of the law, the San Francisco district attorney’s office had a whopping 23 petitions to seal marijuana-related records. Not even 23 expungements: 23 people who’d gotten as far as filing the initial paperwork.
HEALTH CARE
The US spends more than twice as much per capita on health care as most other developed countries, and Medicare is a big part of that discrepancy. It would make sense to instead pay doctors more for better patient care and better outcomes, an approach known as value-based care. In 2015, Congress passed a major overhaul of Medicare meant to move the program in that direction. But for the program to succeed, the people who provided that care, from doctors and nurses to medical office administrators, would need to trust it. And as the CMS team found, among the many feelings that providers had about the new law, trust was not one. Fear, anxiety, and even anger, yes. But not trust.
Another doctor put it this way: “The federal government wants me to spend over $50,000 for an electronic health records system that doesn’t work, takes more time for me and my staff, [involves] voluminous incoherent regulations.… It seems like the best course for solo practitioners that are over fifty-five is to either retire, close the office … or just collapse the practice and lay off most of the staff and only see private pay patients.”
Instead of improving care, the law risked driving doctors to stop taking Medicare patients entirely, thereby restricting patients’ options and access.
COVID RELIEF
Millions of Americans were grateful for Congress’s willingness to act in this time of crisis. But many of those same millions heard the news that relief was coming their way and then waited. And waited. And waited. By April 2020, about half of households had received their stimulus checks. By May, a survey of food-assistance recipients found that only 15 percent had received the pandemic supplement. By June, only half the states had started delivering the food benefits for school-age children.
By July, the IRS had caught up significantly on stimulus checks, but an estimated five to ten million households were still waiting, and as many as six million remained blocked from filing taxes (and claiming critical tax credits) due to erratic behavior of the IRS’s online tools. By August, with a moratorium on evictions about to be struck down, only 11 percent of the $46 billion authorized for emergency rental assistance had been distributed.
COVID UNEMPLOYMENT INSURANCE
But the program that garnered the most ire was unemployment insurance, perhaps because it was well established before the pandemic and therefore expected to have the benefit of existing operations. By September 2020, millions of people who had applied for unemployment benefits in March, when the first shelter-in-place orders came down, were still waiting to hear back. Some didn’t hear anything until well into the following year, while others who did get a response got stuck in a bureaucratic nightmare of confusing messages, requests for further documentation, and call centers that seemed never to actually answer calls. Suddenly, lawmakers in many states were reminded that their state-run systems for administering unemployment benefits were sorely out of date and called for immediate modernization. But twenty-two states had already “modernized”—meaning they had moved from decades-old mainframe computers to the cloud—and they fared no better.
The US Department of Labor is still trying to sort out today exactly what went wrong and what might be done to avoid these meltdowns in the future.
CODE FOR AMERICA
Haunted by the sense that government’s struggles to participate in the digital revolution spelled real trouble for an already weakened institution and threatened already declining faith in both government and democracy, I had founded an organization called Code for America, which enlisted technologists to work with local governments. We’d had some successes as a fledgling nonprofit, enough to be growing fast. But my colleagues and I also spent five years realizing how much bigger the problem was than we had known.
I left DC having laid the foundation for a new unit within the White House called the United States Digital Service (USDS), which helps federal agencies make better use of technology. When I returned to Code for America, one of the goals we set was to bring the magic words of decriminalization laws to life through a project called Clear My Record. By helping dozens of state and local governments translate legislation into changes in databases, it has helped millions of people shed the burden of their outdated convictions.
BILLS ARE MADE INTO LAW DESPITE GRIDLOCK
Despite increasing partisan gridlock, plenty of those bills have made their way into law. The 116th US Congress, which met during the last two years of Donald Trump’s presidency—when Democrats held the House and Republicans held the Senate—enacted 344 pieces of legislation (1,229, if you count those enacted by incorporation into other bills), including the largest economic stimulus package in US history. Add to that the legislative output of fifty states and over thirty-five thousand local governments, plus propositions and other measures that in twenty-four states can become law through the collection of signatures and passage at the ballot box. Add to that executive orders and their state and local government equivalents, plus countless policies, regulations, and rules enacted by administrative agencies and other bodies under the authority of Congress in accordance with existing law. Even when Congress is supposedly incapable of anything, there’s quite a lot of lawmaking going on.
Each of these documents is transformed the moment it is enacted, becoming more than just words on a page: backed by the weight of our representative democracy, it now compels action. Someone (actually, lots of someones) must figure out how to implement and enforce it. Once regular words become magic words, someone has to write more words about how this new rule will work, and someone else has to make or amend forms that people can use to interact with the new rule. These forms become records, which have their own magic. It says here that you are married.
In the past, this all happened on paper. Stacks of paper with words and numbers and official seals got you things, like money or food stamps or even freedom. Now, though, those records are rarely magic without a digital element. Today, if someone tells you, “It says here you are a felon,” it’s usually a screen they’re looking at. Conversely, if the law changes and the act that made you a felon is no longer a crime, nothing is really different for you until the database is updated. The magic of law is now inextricably tied to the bits and bytes of computer code.
The implementation of laws has become anything but easier. The famous slowness of bureaucracy is a key reason, but all too frequently, what now widens the gap between policy intentions and actual outcomes is the messy task of implementation through digital technology, and the ways government makes working with that technology uniquely complex. It has gotten so difficult to deliver on the promise of legislation that the magic words are losing their magic.
THERE IS PLENTY of blame to go around when policies fail. Intentional sabotage by opponents of a policy does happen, and there is a rich history of interest groups lobbying for administrative burdens that purposely make government services harder to use.
Seeing the effects of this gap, public- and private-sector leaders, advocates, and pundits have come up with a host of solutions. “Government needs to spend more on technology,” they say. Or “government technology needs to be modernized.” Or “government should leave technology and service delivery to private-sector vendors.” Or “government technology should be subject to stricter oversight.” But the reality of how and why government is failing to deliver on policy promises in the digital age is much more nuanced than these quick fixes suggest.
More money can be useful, of course, and is often necessary—but having big budgets from the start can be deadly, since they often require an entire megaproject to be planned up front, reducing the ability of the team to learn as it goes.
More outsourcing can help with some aspects of service delivery, and contractors are a valuable piece of the implementation puzzle—but government can work well with these critical players only when it can bring its own basic digital competency to bear as well.
Of course neutral third parties should serve as a check on any project paid for with taxpayer dollars. But it’s already common for government technology teams to report to six, seven, eight, or even more separate oversight bodies, and that’s before they get flagged for an investigation by an agency inspector general, audited by the Government Accountability Office, or called before a congressional committee (or the state or local equivalent of any of these).
But the bigger problem is that all that oversight hijacks the time and attention of the teams supposedly delivering the product or service. When all your time is spent answering questions and writing reports for other people inside government, it’s mighty hard to be focused on the people outside government you’re supposed to serve.
These dysfunctions derive from core issues that are human rather than technological, complex rather than just complicated. Chief among them is a structure and way of thinking deeply rooted in American culture: hierarchy. In today’s world, making our laws and policies into reality will almost always involve some sort of digital technology, but what’s valued in government isn’t the nuts and bolts of implementation but the rarefied work of policymaking. Digital work, which in our larger society commands so much attention (whether it’s lionized or vilified), in government is reduced to an afterthought. It’s not what important people do, and important people don’t do it. They hand it off to people many rungs down the ladder, or to companies hired to do it for them. At times it almost seems that status in government is dependent on how distant one can be from the implementation of policy.
In government, countless bureaucratic processes and procedures—most notably, lengthy and burdensome procurement requirements—have had the opposite effect, putting ever more layers between the people creating the services and those who use them.
The temporal, organizational, structural, and cultural gaps between policy and tech teams, and between tech teams and the users of that tech, make it hard to try out strategies, learn what works, resolve ambiguities, and readjust. Instead of active collaboration and co-learning, implementing government policy through digital technology resembles a game of telephone, in which each party in sequence fumbles the translation a bit until, many stakeholders later, the message is mangled beyond recognition.
Even the process of getting a construction permit, registering a vehicle, or just filing taxes can erode faith in our system of government. We can’t afford this downward spiral of poor service leading to alienation and decreased political participation, which in turn lead to poorer service. The implementation crisis threatens our democracy.
Because each successive leadership at an agency usually gets the budget or the mandate to deal only with the most pressing technology crises at hand, and because tech investments must always be pitched as adding some new capability to the system (rarely just renovating what already exists), each piece of the system gets built in different technology paradigms from different eras. But every new piece depends on everything that came before, so each successive layer is constrained by the limitations of the earlier technologies. The system is not so much updated as it is tacked on to. Over time, new functionality is added, but the system never sheds the core limitations of the foundational technologies. At the same time, it becomes enormously complex and fragile.
Updates require caution, as any change in one layer can have unforeseen consequences in the others. It becomes harder and harder to support the technologies in the lower, older layers, while the more recent layers require constant updates and patches.
Several of us had spent time at the US Department of Veterans Affairs, where the technology layers date back to the sixties and even earlier. The architecture diagram for just one benefit system at the VA was so complex that it was displayed on a wall twenty feet long and eight feet tall. Even at that size, many of the elements were printed in a font so small you had to be right up next to the wall to read them. When I first saw it, I found myself paralyzed. Moments like that served to remind me that when people talk about government technology and ask “how hard could it be?” the answer is almost always: really, really hard. We think that the technology that runs our government has been designed to perform specific functions. In fact, it has merely accreted.
EDD
There were decades’ worth of layers at the EDD, too, and they had started cracking even before the stresses of the pandemic. As we dug into them, we started to think of the hodgepodge of technology not only as layers of paint but as the layers of sediment you might excavate in an archaeological dig.
The oldest layer of technology at the EDD is something called the Single Client Database, running on an IBM mainframe. Its exact origins are murky. Some of this layer is written in COBOL, a programming language invented in 1959, but the EDD probably began using the system in the 1980s. The Single Client Database was designed for green-screen terminals, the monochrome displays with green text on a black background.
Today, it’s mostly operated with green-screen emulators, which reproduce the same display on the kinds of PC you can pick up at your local Office Depot.
In the 1990s, resourceful claims processors at the EDD took advantage of those Windows desktop PCs running the green-screen emulators. They started writing scripts to automate some of the most repetitive routines they had to do every day. These scripts became known as macros.
If a claims processor frequently needed to reassign a work item to a new queue, for instance, and it took twenty different small actions to accomplish that on the green-screen interface, a macro would let them do it with just a few keystrokes.
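The mechanics of such a macro can be sketched in miniature. This is a hypothetical illustration, not EDD code: the action names and the idea of replaying them through an emulator callback are invented, but it shows how one command can stand in for twenty discrete terminal steps.

```python
# Hypothetical sketch of what a green-screen macro does: replay a fixed,
# recorded sequence of terminal actions as a single command.
# Every action name here is invented for illustration.

REASSIGN_SEQUENCE = [
    "open_work_item", "press_f3", "enter_claim_id", "press_enter",
    "navigate_queue_menu", "select_target_queue", "confirm", "press_f12",
    # ...a real reassignment might involve twenty or so discrete actions
]

def run_macro(sequence, send_keystroke):
    """Replay each recorded action through the terminal emulator."""
    for action in sequence:
        send_keystroke(action)
    return len(sequence)

# A claims processor triggers one macro instead of typing every step.
# Here the "emulator" is just a list that records what was sent.
log = []
steps_replayed = run_macro(REASSIGN_SEQUENCE, log.append)
```

The fragility the book describes follows directly from this design: the macro only works if the screens still appear in exactly the recorded order, so any change to the underlying mainframe flow silently breaks it.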
Outside of writing new COBOL code, macros are the only way that the EDD can automate significant new workloads in this layer. Since there are fewer and fewer COBOL programmers around (and they are consequently more and more expensive to hire, if you can even get them), the EDD’s own staff wrote a lot of these macros, to the delight of their fellow overworked claims processors, who appreciated any shortcut.
Eventually, a few of the macro writers learned new computer languages like Visual Basic, C#, and the .NET programming environment, which gave them the ability to automate more complex tasks and run larger batches. Today, the green-screen emulator used by an EDD claims processor has several rows of brightly colored buttons that provide access to dozens of these macros. Without them, claims processing would take even more time.
But the addition of these many macros also means that to maintain and update the EDD system, you need people who know not only Visual Basic and C# and so on but also how all these particular macros operate.
The only place someone can learn that is by working at the EDD. And given how chaotically this layer of technology developed over several decades—with new macros written every time the department needed to implement a new workflow or respond to some change in the rules or regulations—it can take many years on the job to get up to speed.
Seeking to “modernize,” the EDD added three major new systems, largely through contracting with the consulting firm Deloitte. The first of them, the California Unemployment Benefits System (CUBS), provides another way of accessing the Single Client Database, the one that runs on the IBM mainframe from the 1980s. CUBS allows claims processors to do many of the same maintenance tasks that were done through macros or directly on the mainframe in a web browser window instead. The second system, the Benefit Claim Information System (BCIS), has a very similar web interface, but it was contracted for separately and uses its own database. It was developed to manage “recomps,” or recomputations of benefits, which aren’t quite what they sound like: claims usually wind up in there because they got flagged for manual identity verification when they first came in and therefore never made it through to the system that would compute a benefit in the first place.
The third new system, UIOnline, is a public-facing website that replaced eApply4UI and is now what most people use to apply for benefits.
With this grab bag of somewhat connected, somewhat separate systems in mind, let’s return to our challenge of counting backlogged applications. Since CUBS and BCIS have separate data stores, a single applicant might have a record in both. Each of the two systems also has its own numerous “work queues” that list the tasks, or “work items,” requiring human action to process an application. (CUBS alone has 153 different work queues.) Specific groups of workers at the EDD are granted the ability to handle specific kinds of work items, based on their levels of training. But because the work queues and types of tasks are not the same in the two applications, understanding how CUBS works doesn’t necessarily help you master BCIS, and the definition of a backlogged application will be different in CUBS than it is in BCIS.
One of the effects of having IT systems built in archaeological layers is that workers can often perform the same action via different layers, meaning that there are multiple ways to accomplish a given task. Over the years, as different teams take different approaches, knowledge and habits become fragmented.
If one team encounters a problem and uses some particular piece of the system—the browser-based CUBS, or the macros on the green-screen emulator, or direct access to the mainframe—to work around it, another team may not know how to run a report that accounts for that work-around and will inadvertently include the wrong records and exclude the right ones in its accounting.
In order to count how many backlogged unemployment applications there were, then, we would need to look not just at one system but separately at quite a few. We would need to make a count in each one individually and then figure out how they overlapped in order to deduplicate the combined list. We would need very specialized knowledge of the meaning of hundreds of work queues to even start discussing how to define a backlogged application, never mind running the queries that would pull every record that met that definition from multiple incompatible databases.
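The counting problem described above can be sketched in code. This is a hedged illustration, not the EDD's actual schema: the field names, queue names, and matching rule are all invented, but the shape of the task is the same—query each system by its own definition of "backlogged," normalize records to a shared claimant key, and let a set union do the deduplication.

```python
# Hypothetical sketch of deduplicating backlog counts across two systems
# (standing in for CUBS and BCIS) that keep separate data stores.
# All field names, queue names, and statuses are invented for illustration.

# In reality, deciding which of CUBS's 153 work queues count as "backlog"
# requires years of specialized knowledge; two are invented here.
CUBS_BACKLOG_QUEUES = {"manual_id_verify", "wage_mismatch"}

def normalize(record):
    """Reduce a system-specific record to a comparable claimant key."""
    return (record["ssn"].replace("-", ""), record["claim_year"])

def count_backlog(cubs_records, bcis_records):
    backlogged = set()
    for rec in cubs_records:
        if rec["queue"] in CUBS_BACKLOG_QUEUES:
            backlogged.add(normalize(rec))
    for rec in bcis_records:
        if rec["status"] == "pending_recomp":  # BCIS defines backlog differently
            backlogged.add(normalize(rec))
    # The set union deduplicates claimants who appear in both systems.
    return len(backlogged)

# The same person, recorded differently in each system, counts once:
cubs = [{"ssn": "123-45-6789", "claim_year": 2020, "queue": "manual_id_verify"}]
bcis = [{"ssn": "123456789", "claim_year": 2020, "status": "pending_recomp"}]
count = count_backlog(cubs, bcis)
```

Even this toy version shows why the real task is hard: the deduplication only works if you already know how each system formats its identifiers and which of its queues and statuses actually mean "backlogged"—exactly the specialized knowledge the book says is scarce.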
The number of people with such specialized knowledge is very limited. In the midst of a pandemic that has forced millions out of work, spurring Congress to overhaul existing unemployment programs and add entirely new ones, all of which need to be programmed into these fragile and complex systems, the demands on these employees’ time are enormous.
There is no easy way to suddenly have more of these people. And there is no easy way to explain to angry constituents and the people who represent them—whose personal experience with seamless online services gives them certain expectations about how technology should work—why this task is so hard.
The crisis in unemployment insurance during the pandemic led to lots of questions about why these systems hadn’t been modernized in recent decades. Agency directors around the country blamed governors and state legislatures for not funding modernization. Legislators blamed their predecessors for inaction, vendors for their high price tags, and the agency directors for not putting forward credible modernization plans before the crisis hit. In fact, when Marina and I started our work, California’s EDD was theoretically just weeks away from awarding a modernization contract that it had been working on for ten years. With the EDD in the spotlight, the legislature paid particular attention to this pending contract and noted, with horror, that the modernization effort was projected to take another eleven years.
The legislators were understandably upset, but it’s odd that they seemed surprised. Their state had spent ten years and over $500 million on a system to connect the courts with a common document management system—and then scrapped the entire effort.
It is also working on a modernization of its financial information systems that started in 2009 and has grown in cost to over $1 billion. As of 2022, it remains unfinished. The EDD’s IT modernization timeline was not exactly an outlier.
The work of the claims processors is astonishingly specialized. The training manual is 800 pages long. And even reading through it does not give you the skills you need to do the job, since much of the knowledge involves work-arounds that are passed along from employee to employee but not consistently known across teams or even written down.
In another office, where she met with a group of experienced claims processors, Marina encountered an employee who kept calling himself “the new guy.” Finally, she asked him how long he’d worked at the EDD. “Seventeen years,” he replied. What he meant was that, compared with his colleagues with over twenty-five years of experience, he was still getting up to speed on how the system worked.
The most complicated part of the EDD’s process was the area called “recomps,” where claims went if they got flagged for review. California had been able to pay out $50 billion in benefits in just a few months because straightforward claims—the ones where the department didn’t see the need to verify the claimant’s identity, for example—were processed largely automatically, on the digital equivalent of an assembly line.
When we speak of “legacy systems” in government, it does not mean simply that they are old. It means that we are grappling with the legacy of decades of competing interests, power struggles, creative work-arounds, and make-dos that are opportune at the time but unmanageable in the long run.
But if they got pulled off that assembly line, they would eventually end up in recomps, where the most experienced claims processors essentially worked through them by hand. Once there, they could not be put back onto the assembly line, and not just anyone can work in recomps.
There was no way to fast-track the growth of a claims processor—not legally (since specific regulations covered what employees at each of these levels could do) and not practically, because the policies, processes, and procedures to be mastered were so complex
And that was before the pandemic brought unprecedented levels of change to all the rules. Even the most experienced claims processors were struggling to get their heads around the new programs and regulations, not to mention the move to remote work. New employees didn’t stand a chance of being helpful. Yet at the direction of the governor’s office, the department had been on an astonishing hiring spree, adding over 5,000 new staff since the beginning of the pandemic, mostly through an ever-ballooning contract with Deloitte. The legislature had pretty much opened up the wallet to the EDD to spend whatever it needed to clear the backlog.
The governor referred to them in press conferences as proof that relief was coming.
More people equals more productivity equals fewer backlogged claims, the logic went. But what the politicians didn’t know was that these brand-new employees, with their days or weeks of training by Zoom, could not do the work that would reduce the backlog. Only highly experienced claims processors could do that. In fact, new employees could do almost none of the work assigned to them.
As Marina watched Carl scroll through the emails from the four hundred new hires in his group, she wondered who was training, and responding to, the other forty-six hundred new employees
The answer was grim: pretty much every staff member with tenure and knowledge of how the arcane and complex systems worked. They all had inboxes full of requests from new employees, representing hundreds of hours of work just to read and respond to them.
But if the people who could help with it, the old-timers, were busy training the new employees, who was processing the claims?
it now took two to five times as long to do recomps as it had in January, before the pandemic. It was right there in black and white: every new person the EDD hired made it slower—not faster—to get a benefit check to an unemployed Californian.
The backlog was growing unbounded not because of the bespoke and antiquated technology. It was growing because the policy and processes that govern unemployment insurance take seventeen years to learn.
THE POLITICIANS WHO jeer when a government agency like the EDD says that it will need eleven years to modernize its systems do not understand the nature of the technology in question.
But they also don’t understand that those archaeological tech layers are an expression of archaeological policy layers. The tech gets complex because the program and the policy governing it are complex. And like the tech, the policy is complex in part because it always accrues but is rarely reduced or reconciled.
there is no binder with those regulations—there is a wall of binders containing steady streams of correspondence from federal and state agencies going back to the 1935 Social Security Act, a mess of rules and oversight mechanisms that reference earlier rules and oversight mechanisms that reference even earlier ones. They frequently cross federal, state, and local jurisdictions and are subject to direction and influence from the executive, legislative, and judicial branches of each of those levels of government. These rules can grow to epic proportions.
The law governing unemployment insurance regulations isn’t quite as voluminous as the 73,000 pages of statutes and regulations at the IRS. At the federal level it’s fairly simple. But that doesn’t mean that agencies that administer the benefit have any easier a task than the IRS. To start, though the statutes are minimal, the regulations that have been written to guide their implementation are not. One state labor commissioner, testifying in a hearing as tense as the ones I watched in California, brought a cardboard box holding over seven thousand pages of federal regulations.
Unemployment insurance is fundamentally run by the states. Each of the fifty states plus DC, Puerto Rico, and the US Virgin Islands passes its own laws and regulations governing eligibility, oversight, benefit amounts, adjudication in the event of conflict, and dozens of other elements. That means that each state program is unique in both its policy and its operations, and is controlled by a mixture of federal regulations (which are somehow supposed to apply equally to all fifty-three different programs), state law set by the individual legislatures, and policy set by the individual state departments of labor (which often interpret guidance from the federal DOL in different ways).
They are accountable to the federal Department of Labor (DOL), whose staff is trying to interpret and apply direction from Congress, but they also report to governors, get written up by state legislative analysts, and are publicly admonished by state legislators. If a state labor agency runs afoul of federal regulations, it risks losing federal funding, which would devastate its state’s finances, but the direction from the feds doesn’t always harmonize with state leaders’ goals and desires.
Federalism also makes the job of the DOL, which has to regulate each state labor agency to ensure compliance with federal law, exceptionally difficult. And when crisis hits, Congress looks to the DOL to help all fifty-three systems deliver the necessary relief and economic stimulus. It’s a bit like trying to wrangle fifty-three IRSes.
Consider the most basic issue of coverage: whether a given worker is eligible to claim unemployment benefits. According to the DOL, to answer that question you must figure out, among other things, whether the worker’s “employing unit” qualifies as an “employer.” Under the Federal Unemployment Tax Act, that term originally applied to “employing units” that, “during any calendar quarter in the current or immediately preceding calendar year, paid wages of $1,500 or more” and to “employing units of eight or more workers on at least one day in each of 20 different weeks in a calendar year.”7 In 1956, that threshold was changed to four or more workers; in 1972, it dropped to just one worker. Today, about half the states use this federal definition, but the others strike out on their own. In Montana, the minimum payroll to qualify as an employer is $1,000 in the current or preceding year; in New York, it’s $300 in a single quarter; in Iowa, any wages at all paid in the current or preceding quarter will qualify.
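The branching this creates in software is easy to underestimate. Here is a minimal sketch (hypothetical code, using only the thresholds cited above; real statutes carry many more conditions) of how the divergent definitions of “employer” might look once encoded:

```python
# Illustrative only: a fragment of the per-state "employer" test.
# Thresholds come from the examples in the text; everything else
# (the 20-weeks rule, switching between federal and state
# definitions over time, and dozens of other conditions) is omitted.

FEDERAL_MIN_QUARTERLY_WAGES = 1_500  # FUTA threshold

def qualifies_as_employer(state, wages_paid_year, wages_paid_quarter, workers):
    """Return True if an 'employing unit' counts as an 'employer'."""
    if state == "MT":   # Montana: $1,000 payroll in current or preceding year
        return wages_paid_year >= 1_000
    if state == "NY":   # New York: $300 in a single quarter
        return wages_paid_quarter >= 300
    if state == "IA":   # Iowa: any wages at all in current or preceding quarter
        return wages_paid_quarter > 0
    # Roughly half the states fall back to the federal definition:
    # $1,500 in a quarter, or at least one worker (simplified here).
    return wages_paid_quarter >= FEDERAL_MIN_QUARTERLY_WAGES or workers >= 1

print(qualifies_as_employer("NY", 0, 300, 0))    # True
print(qualifies_as_employer("MT", 999, 999, 0))  # False
```

Every time any of these thresholds changes — in federal law or in any one state — some system somewhere has to find and update every place this logic appears.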
The unemployment department in any given state would have had to update its systems as the federal definition changed, as their own state definition changed, or as their labor agency switched back and forth between the federal and state definitions. Few updates succeed in catching all past references to the former rule, so you will find artifacts of previous regulations strewn throughout documentation and code.
Public servants must frequently decide which of several incompatible regulations is more important to comply with. The conflict might be between federal, state, and local regulations or between older policies and new ones that overlap with them without clearly removing the previous rules. The answer is not likely to be determined by what’s best for the people using the service. Rather, it’s usually a combination of whatever seems most convenient (such as not updating an already fragile system at a time of crisis) and whatever seems likely to result in the fewest lawsuits (advocates frequently sue government agencies over equity issues).
Lawmakers often have good intentions, but they continually add policy layers with too little understanding of (and, sometimes, regard for) how what they add will interact with the layers that are already cluttering the delivery environment. That’s why a department like the EDD ends up with an 800-page training manual and a seventeen-year journey to becoming a competent but still inexpert claims processor.
After the last recession, Congress put $7 billion toward modernizing state unemployment systems. Many state legislatures opened their pocketbooks too. But tech modernization, at least the way it’s usually done today, has not helped much. Between 2000 and 2019, 22 states claimed to have “modernized” their unemployment systems in one fashion or another. When the pandemic hit, those states should have been the bright spots in an otherwise bleak landscape of claims backlogs. They weren’t. Some states still using mainframes, like New Jersey and Rhode Island, paid benefits much faster than others like Florida, which had ostensibly modernized.
The problem is that the modernization projects all sought to “add functionality”—more layers of paint—or just to move to more modern infrastructure, particularly to the cloud. None of them targeted serving clients better or scaling to meet demand.
If the people running the systems had set scale as a goal and truly analyzed their bottlenecks, they would have recognized the need to rationalize and simplify the accumulated layers of policy and process along with bringing in new technology. To be fair, none of them felt they had permission to.
During the pandemic, California not only received federal funds to help with emergency operating costs but spent hundreds of millions more from state coffers. Much of it went to hiring staff who inadvertently slowed the process and to emergency contracts with technology vendors like Deloitte, which built the ineffectual systems in the first place.
Such spending does nothing to help the EDD meet the next surge in need. A system can perform only at the speed of its slowest bottleneck. The only way to ensure that a system can scale up is to address its chokepoints. Modernizing technology without rationalizing and simplifying the policy and processes it must support seldom works. Mostly, it results in much the same mess you had before, only now in the cloud.
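The bottleneck arithmetic is stark. A toy calculation (all numbers invented for illustration) shows why adding capacity anywhere but the chokepoint changes nothing:

```python
# Hypothetical stage capacities, in claims per day. Throughput is capped
# by the slowest stage in the pipeline.
stages = {"intake": 10_000, "automated_checks": 8_000, "manual_review": 600}
throughput = min(stages.values())
print(throughput)  # 600

# Double the intake staff; the pipeline is no faster.
stages["intake"] = 20_000
print(min(stages.values()))  # 600
```

Hiring more people upstream of manual review, as the EDD did, leaves the 600-claims-a-day chokepoint exactly where it was.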
Many a tech modernization has gotten rid of the mainframes, the COBOL, and the green screens (or hidden them better) but has left the frontline workers just as confused and overburdened as before. Sometimes more so, because now they have to learn a new but still staggeringly complex system.
Lawmakers were furious at state-level bureaucrats for their failures during the pandemic, but it’s the lawmakers who have insisted on petty provisions like docking a claimant’s benefits because the person had a cold one day. You can have systems that do every possible thing policy makers can think of to ensure “program integrity” (in other words, making sure no one is getting a dollar more than they should) or you can have systems that scale. You can’t have both.
And most of what policymakers do to ensure program integrity ends up costing far more in administration than the program saves on paying out benefits.
The way government agencies build these systems is deeply flawed, but what’s equally flawed is what we ask them to build. We keep hoping that if we can just get off the mainframes and the COBOL, things will be better. I thought that, too, when I first started working with government. But today, I’d bet on a legacy mainframe with a thirty-page manual over a modern system that takes decades to learn.
If we want services that scale to meet people’s needs, it’s not just a matter of building new technology. It’s a matter of clearing out the clutter it rests upon. The systems that run our government need to be built on a foundation of bedrock, not landfill.
Paula stayed on at the EDD amid the crisis, working long hours and withstanding enormous criticism, because she saw herself as a steward of a public program. Stewardship is a core value among civil servants, in part because of the frequent turnover in administrations, each of which brings its own priorities and approaches.1 What looks like resistance to change is also a ballast, stabilizing against what amounts to fads rolling through government every time there’s an election.
When systems or organizations don’t work the way you think they should, it is generally not because the people in them are stupid or evil. It is because they are operating according to structures and incentives that aren’t obvious from the outside. I told Paula about our discovery that the EDD’s big hiring spree
wasn’t clearing the backlog of claims and in fact was making it much worse. One might have assumed this would be welcome news.
Given that it was impossible to train new claims processors in time to help, freeing up her seasoned workforce was critical to tackling the backlog. That was not how Paula received the news. Well, that’s nice, she said in effect, but there’s nothing we can do about it. Hiring as fast as they possibly could had been the one consistent directive coming from everyone above them: the governor’s office, the legislature, the federal Department of Labor, and every oversight body with jurisdiction over the EDD’s operations.
Telling all these stakeholders that they were wrong would not relieve the pressure on the department. It would just make the people up the chain look bad, which would further anger them.
The incumbent thesis was that if you wanted to manage something professionally, you structured it like a waterfall—or at least like a cartoon version of a waterfall, with several pools, each flowing into the next. There were separate, sequential stages each project had to go through: gathering requirements, design, implementation, verification, and maintenance. Different teams were generally responsible for each of the stages, and once a given stage was complete you didn’t go back. Many projects that followed the waterfall methodology failed badly.
The Agile Manifesto rejected the precepts of waterfall project management. Whereas it’s a major sin in the waterfall model to change requirements at a later phase, agile “welcomes changing requirements, even late in development.” Whereas waterfall specifies a particular point in time when the software will be delivered before it moves on to verification, agile wants programmers to “deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.” In other words, everyone gets to look at the software and try it out while it’s still being coded. Whereas in waterfall, a separate team usually develops the requirements and then moves on to some other project, in agile, “business people and developers must work together daily throughout the project.”
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
WATERFALL DEVELOPMENT
Waterfall development is still very much alive and well, particularly in government. But most of the software that has changed the world around us in the past two decades was not made this way. Government’s attachment to waterfall development seriously hinders its ability to build software that works well
Clay Shirky once quipped that “waterfall amounts to a pledge by all parties not to learn anything while doing the actual work.” And that implicit pledge was evident in Paula’s response to finding out that her department’s hiring spree was fatally stalling claims processing. As director of the EDD, Paula may have been at the top of a large department, but she was at the bottom of an enormous waterfall that started all the way in DC with Congress and the executive branch, flowed through the federal Department of Labor, cascaded down to the governor and his team, and on through to the labor secretary of California, to whom Paula reported.
Waterfalls determine how information, insights, agency, and power flow. The flow goes only one way: down.
VBMS is basically a database of veterans’ disability claims files. It has some obvious oddities. When you open it, for instance, there is no visible menu. You have to move your mouse to the edge of the screen to get a very thin line to appear, and when you click on it, it expands to reveal the menu. More importantly, while some parts of VBMS look like what you’d expect to see in a database, with fields like name, address, and other information, much of what’s contained in it is not structured data but rather images of paper forms. It’s a lot more like reading through old microfiches at the library than it is like doing a web search. You have to find the right file to open and read the words on the page yourself, because the computer only knows there’s a file there, it doesn’t know what’s in it. This means two things: one, that the system is handling many large image files, which take up a lot of computer resources and slow things down, and two, that the people using the system need to open and close quite a few of these files to find the information they need to process a veteran’s claim. They essentially need to flip through the claim as if it were a stack of paper on their desk, except that it’s not on their desk. It’s on a computer screen. And nothing will make you long to go back to a pre-computer world more than trying to look through a bunch of pieces of paper, only instead of thumbing through them, you click to the next page and wait. And wait. And wait.
We’d heard that adjudicators could click on a file and then go start the coffee brewing and come back before the next page loaded. “Latency’s been solved,” he told us. “As of last month, there are hardly any reports of latency.”
Some work had indeed been done to speed up the system, but his office had sent a memo defining latency as a delay of over two minutes. If you clicked on a button or link, waited for a minute and fifty-nine seconds, and the page then appeared, you were not to report latency. Officially, that wasn’t a problem; Kevin had defined it away.
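The effect of such a reporting threshold is easy to reproduce. A sketch (hypothetical numbers and function names, not the VA’s actual reporting code):

```python
LATENCY_THRESHOLD_SECONDS = 120  # the memo's definition: "over two minutes"

def reportable_latency_events(page_load_seconds):
    """Return only the waits that count as 'latency' under the memo."""
    return [t for t in page_load_seconds if t > LATENCY_THRESHOLD_SECONDS]

waits = [45, 119, 119.9, 121]
print(reportable_latency_events(waits))  # [121]
```

Three waits of a minute or two — each agonizing to an adjudicator — generate no reports at all; only the 121-second wait officially exists.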
There were whole categories of questions he would not answer. “That’s a question for the program people,” he said time and again. Finally, a bit confused, I asked why he was so reticent to talk about what the system was designed to do. “I’ve spent my entire career training my team not to have an opinion on business requirements,” he told me. “If they ask us to build a concrete boat, we’ll build a concrete boat.”3 Why? I asked. “Because that way, when it goes wrong, it’s not our fault.”
He was proudly abdicating responsibility in order to avoid blame.
The developers cared very much about getting veterans their benefits, and they were also the ones who could see most clearly how policies and procedures put in place with the best of intentions sometimes backfired. They knew where the bottlenecks were and what to do about them—they just needed someone from higher up the chain to listen to them, and we could make that happen.
I had come to DC believing that if the tech teams could have a say in how things worked, we might have fewer failures of delivery. And many techies wanted a say, and wanted to get the job done right. But Kevin was operating under a completely different premise. The last thing he wanted was to have a seat at the table. Keeping his teams in order-taking mode didn’t make them immune from criticism—there were constant headlines about the VA backlogs and ongoing fury from administration officials who wanted veterans taken care of—but it had been a winning strategy for him personally.
Like Paula, he’d been promoted countless times, rewarded by a rule-bound civil service regime that values years of experience and a clean record but has little ability to judge competencies, leadership acumen, or a track record of meaningful results.

Part of the challenge of counting the backlog was that not every application the EDD receives is a valid claim for unemployment benefits. Some claims are begun but then abandoned, sometimes because the claimant has found new work, and some do not actually represent an unemployed worker but are attempts to defraud the system.
It was very hard to know which claims fell into which category. When you applied, the EDD had no idea if you were who you claimed to be.
The EDD compared the name, date of birth, and Social Security number you had provided with the corresponding data in a handful of other databases, like the Social Security database. If what you had entered matched exactly with those other databases, the EDD generally assumed you were the person you claimed to be and sent your application on to be processed.
If it didn’t, your claim was pulled off the line so that the department could try to establish your identity with greater confidence. For lots of people with valid claims, the data didn’t match. If your name on your Social Security card was Alejandro, but you’d put down Alex because that was how your previous employer knew you, your claim was flagged.
If your last name was hyphenated, had an apostrophe, or was more than twenty letters long, it was likely to be flagged, because some of the other databases didn’t accommodate those characters.
Our world is awash in databases of stolen identities from breaches at credit monitoring services, retailers, and employers, and these stolen identities are freely traded on the dark web. Fraudulent applications using these sources will not get flagged: the data entered on the application will exactly match the sources the EDD checks against, because it is usually a copy of precisely that data.
The practice of flagging claims for identity verification based on mismatched names and Social Security numbers sounds archaic, but it was standard operating procedure across state unemployment insurance systems. It also turned out to be enabling massive organized fraud across almost all fifty states. And it had another effect as well. Because flagged applications must be manually processed and manual processing was the bottleneck in the EDD system, this practice was the main reason for the backlog.
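A minimal sketch of the matching logic described above (hypothetical code; the EDD’s actual checks span several databases and more fields) shows how it fails in both directions at once:

```python
def exact_match(applicant, record):
    """Naive identity check: any mismatch sends the claim to manual review."""
    return (applicant["name"].upper() == record["name"].upper()
            and applicant["dob"] == record["dob"]
            and applicant["ssn"] == record["ssn"])

ssa_record = {"name": "Alejandro Ruiz", "dob": "1980-01-02", "ssn": "123-45-6789"}

# A real claimant using the name his employer knew him by gets flagged...
claim = {"name": "Alex Ruiz", "dob": "1980-01-02", "ssn": "123-45-6789"}
print(exact_match(claim, ssa_record))   # False -> pulled for manual review

# ...while a fraudster who copied the breached record matches perfectly.
stolen = dict(ssa_record)
print(exact_match(stolen, ssa_record))  # True -> processed automatically
```

The check punishes legitimate variation and rewards verbatim copies, which is exactly what stolen-identity fraud supplies.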
The only way to stop the growth of the backlog was to send fewer claims into the manual process. The way to do that was to verify claimants’ identities in some way that didn’t rely on matching against Social Security and similar databases. The good news was that there were a variety of commercial offerings that would do just that. Our first recommendation as a task force was therefore very easy to make: procure and install an identity verification system as soon as humanly possible.
The California Department of Technology got to work immediately, and it worked at lightning speed by government standards.
The service they chose asked users to take a picture of their physical ID—driver’s license or passport, for example—with their phone, and then to take a selfie. Using automated facial recognition, the system then compared the selfie with the uploaded document and also with photos in other available databases. (There was a range of protections built in to prevent people from using a photo of someone else instead of a selfie.) It wasn’t perfect by any means: it didn’t help people without smartphones, for instance, and we would find out later that the app didn’t work for some people, especially people with poor internet connections, and they ended up in much the same limbo as before.
Left unchecked, the backlog would more than double. That number could be reduced if the EDD loosened the criteria that were flagging claims for manual processing, but Paula wouldn’t hear of relaxing what she thought of as fraud prevention practices. Those practices were not, in fact, preventing fraud.
Out of 183,167 claims, only 804 (fewer than half of 1 percent) were judged to be imposters.
On the other hand, the claims filed in bulk by organized crime, with their perfectly matched data, were sailing through the system.
Waterfall culture is by no means the sole domain of government, and not all government organizations are run as waterfalls. Even the ultimate hierarchy, the military, can empower its workforce in an agile manner under the right conditions.
We as the task force were not bound by the waterfall pledge. So when Paula refused to loosen the flagging criteria, we upped the stakes. If the department wouldn’t take steps to keep the backlog from doubling while the new identity verification service was coming online, we told her, then it should stop taking applications altogether.
Shocking as the idea sounded, we were completely serious. Claimants trying to apply at that time had a 40% chance of going into the black hole of manual processing. If they waited just a few weeks for the new system, then as long as they could prove their identity through the new verification setup they would almost certainly go through the assembly line instead and start getting their unemployment benefits in a reasonable time frame.
After two weeks, the EDD opened its virtual doors to new applicants again. The new identity verification system was now in place, and it kept all but a handful of claims from going into the backlog. It didn’t stop all fraud, but the organized crime rings with their databases of stolen identities could no longer operate at the astounding scale they had been, and the state stopped bleeding billions of dollars.
What made the difference wasn’t newer and sexier tools to replace legacy technology; the infamous COBOL code chugged along just the same the whole time. It was permission: permission to disrupt the waterfall.
We need a fundamentally different way of delivering on the promise of policy. We need to retire the waterfall.
If we want to do away with the waterfall, we have to learn how to recognize it. It can be hard to notice because it most often shows up not in what people say but in what they don’t say, like when Paula stayed mum about the impact of the hiring spree because she knew no one wanted to hear it. It can also be hard to notice because each level of the waterfall can usually see only what happens just below and just above.
Rarely does someone take a step back and trace a decision made at the highest levels of government all the way down to its impact on teams at ground level.
ConsularOne
It wouldn’t be an outlier in the federal government either. The State Department’s Bureau of Consular Affairs has been trying since 2009 to modernize and consolidate the systems that handle visa issuance, passport renewal, recording births abroad, and other services under a program called ConsularOne. It first projected a 2016 launch date, but as of late 2021 the system existed only as a limited pilot, and the other components still had not launched. The Office of Inspector General’s best estimate was that, as of mid-June 2021, the cost for ConsularOne ranged “between $200 million … and $600 million.”
IRS systems
The ever-slipping timelines are pretty standard. In 2000, the IRS announced a plan to replace one of its core systems, called the Individual Master File (IMF). Then in 2019, the agency’s CIO stated that the project was too broad and complex and that the focus would shift to updating only some parts of the system. In 2021, the IRS gave a new date for retiring—or perhaps mostly retiring—the IMF: January 2030.
much of the IMF is still written in assembly language, a finicky and labor-intensive way of programming computers specific to the hardware it runs on. You can’t just hire a programmer off the street to work on the system. Not only is assembly language an increasingly rare specialty, but the IMF’s particular kind of assembly language, and the complex business logic of its core processing programs, can take years to learn.
Contrary to popular belief, the public servants responsible for the interminably drawn-out modernization efforts are not lazy, stupid, or malicious. I’ve met hundreds of them, and they are overwhelmingly dedicated, conscientious, and often quite creative. IRS employees managed to send monthly child tax-credit payments to nearly 40 million families and to mail out over $800 billion in stimulus checks during the pandemic, all while relying on systems that were never designed to change so quickly or handle such enormous volume.
One reason the IRS has so much trouble modernizing its systems, for instance, is that there are over 73,000 pages in the statutes and regulations the agency must implement. Between 2001 and 2012, the tax code changed 4,680 times, an average of once per day.
Department of Defense
Satellites have a life expectancy, and by the early 2000s many of the satellites GPS relied on were nearing the end of theirs. The need to replace them offered a chance to update their software and improve the accuracy and availability of GPS navigation signals. In 2010, the US Air Force awarded the defense contractor Raytheon a $1.5 billion contract to develop the Next Generation GPS Operational Control System, known as OCX.
OCX was falling more and more behind schedule, and the top brass at the air force were frustrated. According to the original plan, the work was supposed to have been completed already. Defense Department officials had already revised their project budget from $1.5 billion to $3.7 billion and would soon revise it again to $5.5 billion.2 (Since then, it has been revised once more, to $6.2 billion and a due date in 2023.3) The huge project was getting huger.
Weaver, though, was busy looking at something very small: multicast User Datagram Protocol, or UDP.4 He was focused on UDP because it was the obvious way to get data from the satellite monitoring stations to the master control station run by the Department of Defense. UDP is one of the low-level protocols that make the internet work, and it’s built into almost every operating system in the world.
But OCX wasn’t using UDP on its own. The satellites generated UDP messages, but the control station didn’t receive them directly. Instead, the contractor had written a piece of software to receive a UDP message, read the data, decrypt it, and then recode it into a different format, called XML, re-encrypting it to XML security standards. The software then used another network protocol, called SOAP (which is largely obsolete in the commercial world), to put this XML message onto something called an Enterprise Service Bus, or ESB. The ESB eventually delivered the XML message to yet another piece of software, at which point the process ran in reverse: the software used SOAP to retrieve the XML message from the ESB, decrypted and decoded it, copied it into memory, and parsed out the original data just as it had been initially delivered via UDP. Only then did the control station get to see what the satellites had sent.
Because the data was taking such a roundabout route from the satellites to the control station on the ground, it wasn’t arriving quickly enough for the station to make the calculations needed. Recoding the data into different formats also meant the computers running the programs needed more memory and processing power. And the complicated arrangement was making the whole thing not only incredibly inefficient but also hard to debug. SOAP and XML messages are quite complex, while an ESB is an entire software suite in itself and comes in lots of incompatible versions.
Using UDP alone would have made the entire job a snap—as easy as nailing a couple of boards together—but the Raytheon team told Weaver that the data absolutely had to go through an ESB.
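To see why direct UDP would be “a snap,” here is a minimal loopback sketch (Python for illustration only; OCX is not written this way, and real telemetry is binary and encrypted):

```python
import socket

# One socket sends a datagram, another receives it directly:
# no XML recoding, no SOAP, no Enterprise Service Bus in between.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))     # let the OS pick a free port
addr = receiver.getsockname()

sender.sendto(b"station=42 pseudorange=20183.4", addr)  # stand-in telemetry
data, _ = receiver.recvfrom(4096)   # the control station reads it as sent
print(data.decode())

sender.close()
receiver.close()
```

A handful of lines, available in every operating system — which is what made the mandated decrypt-recode-re-encrypt-bus detour so striking.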
Figuring out what was going on took some digging.
The trail led to a document called the “DoD Architecture Framework”: the RFP required that the OCX project be developed in accordance with this framework.
Every company that bid on the project knew what the DoD Architecture Framework was. According to the department’s website, “it is the overarching, comprehensive framework and conceptual model enabling the development of architectures to facilitate the ability of Department of Defense managers at all levels to make key decisions more effectively through organized information sharing across the Department, Joint Capability Areas, Mission, Component, and Program boundaries.” In other words, if you want to build something for us, it has to be built this way, because we think that if everything is built the same way we can coordinate better across a large institution.
The DoD Architecture Framework comes in two volumes, and it is the second one, weighing in at 284 pages, that addresses the subject of data transfer. The framework emphasizes something it calls “net-centricity”: “The DoD is committed to making operations net-centric; that is, enabling the ability to share information when it is needed, where it is needed, and with those who need it.” But keep reading, and the DoD Architecture Framework moves from general guidance to more specific instructions.
Somehow, its authors have jumped from the very reasonable notions of broad value and open technology standards to specifying an Enterprise Service Bus. The bizarre setup Weaver was wrestling with was not a brilliant way for Raytheon to shake down the federal government for more money. It was a requirement set by the air force that the contractor was obliged to meet. It was friendly fire.
By any reasonable definition, the system didn’t work. But the way we build government technology is to specify the requirements and fulfill the requirements. That had happened. There had been no requirement to test the software outside the building and no requirement that the software actually work.
The more trouble a project is in, the more oversight it gets, and more oversight almost always means stricter compliance with requirements, not finding creative ways around them. No one could ditch the ESB. In the meantime, Weaver still wanted to know where the ESB requirement had come from in the first place.
What he found was another entire Rube Goldberg machine, this time of policy rather than technology. Raytheon required its developers to use an ESB because the air force required it of them, and the air force in turn required it because the Department of Defense required it of the developers. But why did the DoD require it?
It turns out that the DoD Architecture Framework, the source of the requirement in the air force’s project, is itself derived from a higher-level document: the Federal Enterprise Architecture.7 The FEA is the work of a body called the Chief Information Officers Council, a group of more than thirty CIOs of federal agencies, departments, administrations, and branches of the military.
And why did the council publish the FEA? In 1996, Congress passed the Information Technology Management Reform Act, part of what became known as the Clinger Cohen Act. Among other provisions, the law required each federal agency to have an “Information Technology Architecture.” There was a sense that the technical architectures of the various agencies should be coordinated in some way, and the thirty-odd members of the CIO Council got the job.
The CIO Council had wanted to be very precise. Raytheon wanted to be very thorough. Everyone in between wanted to ensure full compliance with the directives they’d been issued. No one was trying to sabotage the project, and there was nothing like gross negligence in sight. In fact, if anything, the OCX team was trying to do its job too diligently.
Everyone was operating just as they were expected to in the waterfall system, from the lawmakers in Congress to the engineers in a room in Aurora, Colorado.
Weaver could only conclude that these best of intentions had “doomed federal IT projects to massive, predictable failures for at least twenty years.”
The developers never did get around the ESB requirement. Weaver and his US Digital Service colleagues taught them faster ways to test and deploy software, which turned out to be quite helpful, but they never figured out how to make the Rube Goldberg machine work. Eventually, time ran out for the satellites that were wearing down in space.
The air force had to launch new satellites whether the new software was ready or not, so it went ahead without some of the software updates. The new satellites have all the next-generation sensors and other kinds of updated hardware, but without the software to go with them, GPS users aren’t getting the improved resolution and other benefits that these hardware upgrades were supposed to bring. GPS still works, but the US government has spent many billions of dollars for satellites that in many regards do the same things the old ones did. And all because of a requirement that cascaded down the waterfall all the way from Congress in 1999.
neither the Clinger Cohen Act nor any other law explicitly required an ESB. Nowhere in the Federal Enterprise Architecture does it say “thou shalt always use an enterprise service bus.” There are five mentions of “enterprise service bus” in the document, but all of them are in charts or diagrams listing various application components that could support interoperability. ESBs became mandatory in practice within the Department of Defense through overzealous interpretations of law, policy, and guidance, combined with lack of technical understanding.
The accountability trap is a damned-if-you-do, damned-if-you-don’t situation. The first system is extremely uncomfortable for the public servants subjected to it. No one wants to be called up to testify in a televised hearing. No one wants to be in the video clip as a stone-faced bureaucrat with no good answers, being yelled at by a righteous—or self-righteous—politician fighting the good fight on behalf of the aggrieved public. In front of the cameras, you can’t say things like “it doesn’t work because we were forced to use an ESB.” You would look like you were trying to throw someone else under the bus, and the legislators wouldn’t understand what you were talking about anyway. Your job is simply to endure the hearing, produce as few viral sound bites as possible, and not incriminate others.
As painful and sometimes humiliating as these hearings are, if you’re a career civil servant, it is the second system of accountability that matters most to you.
Legislators can’t fire or officially reprimand you, no matter how bad a job they think you did.
On the other hand, violations of policy, process, and procedure—real or perceived—can do all of that, even if there is no hearing.
Even the most competent tech team can hit resistance when trying to explain, for instance, why there should not be an ESB in the software they are building. A team could point out that it was only suggested, not required. To nix the ESB, dozens of people in dozens of different roles would all have had to agree to jeopardize their jobs over a recommendation they didn’t understand.
This drama plays out repeatedly in the area of cybersecurity. The Federal Information Security Management Act, or FISMA, provides a menu of some three hundred distinct “controls” that tech teams can choose from to secure government software and data from hackers. Competent developers should, in theory, create an informed, thoughtful security plan that chooses the controls most relevant to the circumstances and focus their efforts on implementing and testing those choices. But it’s the rare compliance officer who will take the risk of allowing anything less than all three hundred.
So even if you have a skilled security team, they’ll have to march through a massive checklist, much of it meaningless for their project, instead of focusing on the specific controls they believe will actually secure their system.
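To make the contrast concrete, here is a minimal, hypothetical sketch of the risk-based approach the book describes: selecting only the controls relevant to a given system instead of marching through the whole catalog. The control IDs, names, and tags below are invented for illustration; the real catalog (NIST SP 800-53, which FISMA compliance draws on) is far richer.

```python
# Hypothetical sketch of risk-based control selection. Control IDs and
# "applies_to" tags are invented for illustration only.

CATALOG = {
    "AC-2": {"name": "Account Management", "applies_to": {"web", "internal"}},
    "SC-8": {"name": "Transmission Confidentiality", "applies_to": {"web"}},
    "PE-3": {"name": "Physical Access Control", "applies_to": {"datacenter"}},
    "MP-6": {"name": "Media Sanitization", "applies_to": {"datacenter"}},
}

def select_controls(system_traits):
    """Return only the controls whose tags intersect the system's traits."""
    return sorted(
        cid for cid, c in CATALOG.items()
        if c["applies_to"] & system_traits
    )

# A cloud-hosted web app with no physical media of its own:
relevant = select_controls({"web"})

# The compliance-officer default the book describes is, in effect:
everything = sorted(CATALOG)
```

The security-relevant judgment lives in the tagging: someone has to decide which controls actually apply, which is exactly the decision-making a check-every-box regime avoids.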
Government’s obsession with requirements—voluminous, detailed requirements that can take so long to compile the software is obsolete before it’s even bid out—stems from a delusion that it’s possible to make a work plan so specific that it requires no further decision-making.
You hand it off and the developers just do exactly what they’re told. The goal in government seems to be to drain the job of software development of any opportunity to exercise judgment.
So why do we have so little tech expertise in government? And why do we treat the experts we do have with so little regard? It wasn’t always that way. In the early days of computers, the US government dominated the nascent industry. By the mid-1960s, the federal government was purchasing over 62% of the output of the entire US computer industry.
In 1994, a Democratic-controlled Congress also passed the Federal Workforce Restructuring Act, which required the executive branch to get rid of 273,000 jobs. The following year, as those job cuts were ongoing, Republicans gained control of the House for the first time in forty years and proceeded to give the legislative branch a shearing to match the one the executive branch had just gotten. Congress’s workforce—lawyers, economists, and investigators who worked on congressional committees, as well as auditors, analysts, and subject-matter experts in offices like the Congressional Research Service—was cut by a third. The Office of Technology Assessment, which focused on how to respond to technological advances in society, got the axe entirely.
The cuts were intended to prove that the Republicans could walk the talk of smaller government, modeling the behavior they hoped to see more broadly.
Technology was not central to these outsourcing debates, but the result was a dramatic loss in the core capacity of government at perhaps just the wrong time.
In 1966, the year of the A-76 memo, the Soviet Union landed Luna 9 on the moon and Lawrence G. Roberts published a paper about connecting computers to one another over dial-up, which would lead to ARPANET, the predecessor of the internet. By 1994, when the Federal Workforce Restructuring Act passed, we had the first commercial search engines, the first online e-commerce, and the first streaming radio station. The world was hurtling into a digital future—and investing heavily in it—while government was busy handing out pink slips.
Since then, with a few exceptions, government has thought of digital technology much like the pens and paper clips that GSA buys for government offices: something government would be crazy to produce for itself. And indeed, the hardware and software to handle common tasks were increasingly available for purchase, and it was crazy when government tried to build custom software to meet commodity needs.
When the team wrapped up its edits and published the official National Broadband Plan document, it was only the beginning of the work. No one really knew what kind of internet services were available to consumers and at what cost. Policymakers didn’t know what parts of the country even had the opportunity to get broadband, so they didn’t know who was left the farthest behind. That’s what Mike had been brought in to figure out: what access was available to every census block in America. He was going to need to take in 20 to 30 million rows of data a day from 1500 different providers of broadband data across fifty-six states and territories and display them in one very large and complex map. What this next job needed was not the writers and analysts who’d been holed up in that trash-strewn room but programmers, designers, and data wranglers. The problem was that Mike didn’t have any.
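The core of that pipeline—tens of millions of provider records rolled up to per-census-block summaries—can be sketched roughly as follows. The field names and sample records are invented for illustration; the real system dealt with far messier data at vastly greater scale.

```python
from collections import defaultdict

# Rough sketch of the roll-up the broadband map needed: many provider
# filings, each tagged with a census block, reduced to per-block summaries.
# Field names and sample records are invented for illustration.

records = [
    {"block": "010010201001000", "provider": "A", "down_mbps": 25},
    {"block": "010010201001000", "provider": "B", "down_mbps": 100},
    {"block": "480019501002003", "provider": "A", "down_mbps": 3},
]

def summarize_by_block(records):
    """For each census block, count providers and find the best advertised speed."""
    blocks = defaultdict(lambda: {"providers": set(), "max_down_mbps": 0})
    for r in records:
        b = blocks[r["block"]]
        b["providers"].add(r["provider"])
        b["max_down_mbps"] = max(b["max_down_mbps"], r["down_mbps"])
    return {
        blk: {"providers": len(v["providers"]), "max_down_mbps": v["max_down_mbps"]}
        for blk, v in blocks.items()
    }

summary = summarize_by_block(records)
```

The aggregation itself is simple; the hard parts Mike faced were the volume, the 1,500 inconsistent data sources, and having no programmers on staff to build any of it.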
To get them, he would need to write contracts. “That’s how this works,” he told me. “All of the staff—the core civil servants—they manage, but they don’t implement. One hundred percent of the implementation is contractors.” Contracts take time. There were some very competent (and friendlier) contractors who’d been working on other jobs when Mike arrived, with whom he developed a great rapport, but they would need to compete alongside every other vendor who chose to bid in an open competition.
Most government procurement rules allow for any of the vendors who aren’t selected for a given contract to file an official protest, which triggers a formal and often lengthy review of the process that led up to that selection.
Mike couldn’t afford delays like this. The money from the NTIA was what’s called “one-year money”: because of the way it was appropriated by Congress, it had to be spent in the fiscal year in which it was granted, which would end on September 30. If it wasn’t formally awarded to vendors by then, the money would disappear in a cloud of administrative smoke. And it was now late August. Against all odds, they shipped the National Broadband Map on February 17, 2011, the day it was due to Congress.
Eastman Kodak struck a deal with IBM to outsource Kodak’s IT department at a scale that other corporations had never considered. Hundreds of Kodak staffers suddenly became employees of IBM, essentially doing the same jobs they’d been doing but now for another company.
The deal started a trend in the corporate world, with CIOs bragging about the size of their outsourcing contracts.
Perhaps it was not a wise model to follow. In 2012, Kodak filed for bankruptcy. There were many reasons for its decline besides its decision to outsource IT, of course, but it’s fair to ask how Kodak’s future might have been different if the company had been less extreme about outsourcing. “It’s conceivable that IT’s views could have saved the company, had the culture been different and had executive management been willing to listen,” writes Robert Plant in the Harvard Business Review. “Imagine if someone had asked, ‘When we process a negative, why don’t we capture images as digital files, store them in our system, and allow them to be viewed via a third-party service?’”
The more trouble a project is in, the more oversight it gets, and more oversight almost always means stricter compliance with requirements, not finding creative ways around them.
No one could ditch the ESB. In the meantime, Weaver still wanted to know where the ESB requirement had come from in the first place. What he found was another entire Rube Goldberg machine, this time of policy rather than technology. Raytheon required its developers to use an ESB because the air force required it of them, and the air force in turn required it because the Department of Defense required it of the developers. But why did the DoD require it?
It turns out that the DoD Architecture Framework, the source of the requirement in the air force’s project, is itself derived from a higher-level document: the Federal Enterprise Architecture. The FEA is the work of a body called the Chief Information Officers Council, a group of more than thirty CIOs of federal agencies, departments, administrations, and branches of the military.
And why did the council publish the FEA? In 1996, Congress passed the Information Technology Management Reform Act, part of what became known as the Clinger Cohen Act. Among other provisions, the law required each federal agency to have an “Information Technology Architecture.” There was a sense that the technical architectures of the various agencies should be coordinated in some way, and the thirty-odd members of the CIO Council got the job.
The CIO Council had wanted to be very precise. Raytheon wanted to be very thorough. Everyone in between wanted to ensure full compliance with the directives they’d been issued. No one was trying to sabotage the project, and there was nothing like gross negligence in sight. In fact, if anything, the OCX team was trying to do its job too diligently.
Everyone was operating just as they were expected to in the waterfall system, from the lawmakers in Congress to the engineers in a room in Aurora, Colorado.
Weaver could only conclude that these best of intentions had “doomed federal IT projects to massive, predictable failures for at least twenty years.”
The developers never did get around the ESB requirement. Weaver and his US Digital Service colleagues taught them faster ways to test and deploy software, which turned out to be quite helpful, but they never figured out how to make the Rube Goldberg machine work. Eventually, time ran out for the satellites that were wearing down in space.
The air force had to launch new satellites whether the new software was ready or not, so it went ahead without some of the software updates. The new satellites have all the next-generation sensors and other kinds of updated hardware, but without the software to go with them, GPS users aren’t getting the improved resolution and other benefits that these hardware upgrades were supposed to bring. GPS still works, but the US government has spent many billions of dollars for satellites that in many regards do the same things the old ones did. And all because of a requirement that cascaded down the waterfall all the way from Congress in 1996.
Neither the Clinger Cohen Act nor any other law explicitly required an ESB. Nowhere in the Federal Enterprise Architecture does it say “thou shalt always use an enterprise service bus.” There are five mentions of “enterprise service bus” in the document, but all of them are in charts or diagrams listing various application components that could support interoperability. ESBs became mandatory in practice within the Department of Defense through overzealous interpretations of law, policy, and guidance, combined with lack of technical understanding.
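For readers unfamiliar with the term: an enterprise service bus is middleware that sits between systems, routing (and often transforming) every message so that producers and consumers never talk to each other directly. Here is a toy illustration of that indirection; all names are invented, and real ESB products are vastly more elaborate—which is rather the point of the chapter.

```python
# Toy illustration of bus-mediated integration versus a direct call.
# All names are invented; real ESBs add message transformation, routing
# rules, adapters, and configuration on top of this basic indirection.

class ToyBus:
    """Every message passes through the bus, which looks up the route."""
    def __init__(self):
        self.routes = {}

    def register(self, topic, handler):
        self.routes[topic] = handler

    def send(self, topic, payload):
        return self.routes[topic](payload)

def ground_station_handler(payload):
    return f"ack:{payload}"

# Direct integration: one function call between two systems.
direct = ground_station_handler("telemetry-frame-1")

# Bus-mediated integration: the same result, behind an extra layer
# of registration and routing that someone must build and maintain.
bus = ToyBus()
bus.register("gps.telemetry", ground_station_handler)
mediated = bus.send("gps.telemetry", "telemetry-frame-1")
```

The indirection can genuinely help when many systems must interoperate; mandated everywhere, regardless of fit, it becomes the Rube Goldberg machine the OCX developers could never make work.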