Joe Sutter, 747: The story of Sutter’s career, mostly of producing the 747 for Boeing. He doesn’t give much detail on bureaucratic infighting but it seems to have been a real pain for him, as he just wanted to supervise the engineering. If you ever need examples of how private industry is not actually “efficient,” he has some good ones about waste caused by fiefdoms and management fads at Boeing.
Jennifer Pahlka, Recoding America: A book by a technocrat convinced that America’s ways of lawmaking could be greatly improved by borrowing from agile development, which allows people lower in the hierarchy to make consequential decisions rather than being burdened by “waterfall” development, where all the rules have to be specified in advance, leading to deadly (sometimes literally) complexity and policy failure. Give people leeway to implement the intent of the overall policy, she argues, and you can avoid the layers of bureaucracy that stymie well-intentioned attempts at reform. While there’s merit in the argument, she doesn’t give a lot of weight to the reasons that policymakers try to be comprehensive—although the perfect shouldn’t be the enemy of the good, it’s also the case that if you get a policy running that works for 90% of people, the 10% excluded are likely to share some demographic characteristics, and the policy is unlikely to be revisited to fix it for them, which is probably worse when it comes from government than when it comes in private software. I imagine she’d respond that oversight that focuses on implementation success, and flexibility to keep working for that 10%, are the proper solutions.
Still, she identifies important dynamics that can defeat progressive policies, like lack of comprehensive recordkeeping plus reliance on individual action to implement expungement of felony records for marijuana offenses in California, or nine different definitions of a “group” of doctors for Medicare purposes. Throwing money at the problem rarely helps (though she gives short shrift to “lift the restrictions and just do something in blanket fashion, whether that’s sending money to people with kids or implementing universal health care”). Neither do outsourcing and oversight, both of which can help when properly deployed but often end up adding more layers of bureaucracy.
Instead, she argues, teams building tech to implement a policy should have the authority to alter it as they go. They should build something that works at least a bit as soon as possible, shift edge cases to human review, and automate the easy stuff, rather than building software that’s supposed to accommodate all possible situations. This is impossible in many government spaces because following policy is relatively safe; even if you’re yelled at by overseers for policy backlogs or implementation failures, they won’t accuse you of violating the law that requires you to serve everyone.
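The pattern she describes—automate the easy cases, route anything unusual to a person rather than trying to encode every situation up front—can be sketched in a few lines. This is my own illustration, not code from the book; the `Claim` fields and the eligibility threshold are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    income: int                    # reported annual income (hypothetical field)
    documents_complete: bool       # did the applicant supply everything asked for?
    flags: list[str] = field(default_factory=list)  # anything unusual noted at intake

def triage(claim: Claim) -> str:
    """Return 'auto-approve', 'auto-deny', or 'human-review'.

    The point is the shape of the logic: a short list of clear rules for the
    common cases, and a human-review bucket instead of ever-multiplying
    special-case rules for everything else.
    """
    if claim.flags or not claim.documents_complete:
        # Anything unusual goes to a person, not a brittle automated rule.
        return "human-review"
    if claim.income <= 30_000:     # hypothetical eligibility threshold
        return "auto-approve"
    return "auto-deny"

print(triage(Claim(income=25_000, documents_complete=True)))   # auto-approve
print(triage(Claim(income=25_000, documents_complete=False)))  # human-review
```

The design choice worth noticing is that the human-review branch comes first: the software only claims to decide the cases it can decide cleanly, which is the opposite of the “accommodate all possible situations” approach Pahlka criticizes.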
And the worst part of directives from above, from her view, is that “nowhere in government documents will you find a requirement that the service actually works for the people who are supposed to use it. The systems are designed instead to meet the needs of the bureaucracies that create them—they are risk-mitigation strategies for dozens of internal stakeholders,” even though they fail at that regularly.
This is a special problem for government because people interpret their experiences with bureaucracies as evidence of how government works more generally—involvement with the criminal system, or getting a construction permit, or filing taxes, can be unpleasant enough that it erodes faith in government and deters political participation.
Unfortunately, the problem is also worse in government because obsolete tech is paired with obsolete policies—not just obsolete, but accreted over rounds and layers of attempted reform. One thing Pahlka, who’s not a lawyer, doesn’t suggest is that new laws should explicitly allow regulators to simplify and even eliminate earlier categories and rules—there are certainly reasons we don’t do that, because lots of systems rely on past categories and rules, but accretion keeps making things worse. “Lawmakers were furious at state-level bureaucrats for their failures during the pandemic, but it’s the lawmakers who have insisted on petty provisions like docking a claimant’s benefits because the person had a cold one day.” This example comes from automated systems that, pre-pandemic, docked unemployment payments when the claimant didn’t look for work while sick. That rule was supposedly waived during the pandemic, but the automated systems couldn’t handle the waiver. (It’s a dumb rule in the first place, of course, and more universal safety nets would have produced far less waste and failure.)
In software, agile development allows you to learn from data. But waterfall development uses new data only to grade after the fact. “For people stuck in waterfall frameworks, data is not a tool in their hands. It’s something other people use as a stick to beat them with.” So they naturally aren’t that interested in collecting it.
Another problem: “Even when legislators and policymakers try to give implementers the flexibility to exercise judgment, the words they write take on an entirely different meaning, and have an entirely different effect, as they descend through the hierarchy, becoming more rigid with every step.” She gives numerous examples—again, without too much attention to why that happens, often in an attempt to allow different providers to compete fairly for contracts. (And even if we did less outsourcing, that would help, but we still probably want things like rivets to come from the private sector, so the dynamic would still exist.)
One thing that might be fixable is policymakers’ cultural contempt for implementation. They think/hope/expect/imagine that if they write the right rules, everything will be fine, but it isn’t and won’t be.
Pahlka criticizes the Administrative Procedure Act rulemaking process that most of government uses, because it essentially invites and requires interest group lobbying for every rule and the required process is more like a jury trial than an expert evaluation. Leftists, she argues, got really good at suing the government to stop bad stuff, but that contributed to an environment of risk aversion (and didn’t stop the Supreme Court from harming agency power anyway).
At points she’s pretty clear that there are some no-win scenarios here: Equity usually requires data, which requires paperwork, which favors the powerful. Europe's GDPR privacy regime, for example, caused big tech profits to drop 4.6%, but small tech companies lost over 12%.
So, what should we do? Focus on making things simpler for most people and devote human resources to the tougher situations. New programs should be launched when they’re ready to handle 85% of the cases, though the edge cases should be addressed technologically eventually. (In reality, she notes, policies are launched incrementally anyway, because the systems built under current processes don’t work for a lot of people, but they’re incremental in the worst possible way.) “We can’t fix this until we understand that in government, we’re not starting a new relationship, we’re repairing a deeply broken one.” As one person says of welfare applications, “Every time you add a question to a form, I want you to imagine the user filling it out with one hand while using the other to break up a brawl between toddlers.” Documentation should be required only when needed and responsive to actual circumstances.
Even though it sounds scary and even undemocratic to have a random technologist deep in the hierarchy making important distinctions, she argues, it’s necessary for success of the actual intended outcomes decided upon by elected representatives. This is because, when no one can go ahead and make decisions about how a program should work but lots of people have the power to add requirements to it—as is now the case—you get lots of paperwork and few good outcomes. Good product management can “reimagine representation and voice so as to honor the values our government is supposed to be founded on.”
In addition, attentive to implementation, policymakers shouldn’t enact “policy options that are very hard to automate, like decriminalizing burglaries of property under $950.” If they do that, they have to understand that the program will simply fail unless they also provide time and money for people to go through all the individual files in the relevant jurisdictions.
Here’s a success case that goes through initial problems: when the Biden administration sent out all those covid tests, the postal service just asked for your address. “Each apartment was supposed to act as a unique address, but in a very small fraction of cases, one apartment dweller requesting tests would blacklist other units in the same building.” Turns out, that wasn’t a programming error. “It was that mail carriers had been compensating for incomplete data for decades.” The agile process worked, she thinks: “the team quickly added a little note asking anyone who thought this was an error to fill out a short form, and customer service teams reached out to clarify. It increased the burden on this small number of users, and unfortunately these users were disproportionately of lower income.” But the postal service also updated its database in response, cleaning up about two-thirds of the residential address database as a result.
To further improve things, she argues that the government should spend on improving its human resources, especially program management and operating expenses. And oversight should ask less about whether a team stuck to its plan and more about what the team learned during implementation and what user tests are showing now.
Abhijit V. Banerjee & Esther Duflo, Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty: Interesting summary of what we know (and don’t) about how poor people make decisions. Poor people are people! So they behave like people under resource constraints; when it comes to healthcare, that also includes information constraints (not really knowing much about vaccination, for example, including often lacking trustworthy sources of information). Some self-protective measures can also limit the upside of taking risks that might pay off—like people who spend money as it comes in so that they don’t get pressured to give it to needy family and friends. The research also suggests that microcredit has a limit—most businesses that poor people work in inherently don’t scale well, so expecting entrepreneurship to save poor people is a mistake. Given that poor people have to take way too many risks, it’s understandable that their ambitions for their children often are stable (ideally government) employment rather than entrepreneurship.