Authors:
Jonathan Bean
A lot of work is, at best, drudgery. In my world, I can't say I look forward to grading multiple-choice tests or keeping up to date with the latest accounting and approval procedures. These are repetitive but necessary tasks that AI promises to do for us accurately and efficiently. Liberating people from this kind of labor has long been part of utopian visions of the future, not to mention daydreams. (I, for one, welcome our new AI overlords.)
Yet some degree of tedium is a required part of work for the hardy souls who make our government and institutions function. When that work gets serious—say, when information requires secrecy but also needs to be shared—determining classification levels can have huge consequences. Make a document too secret and it may not be seen by people who need the information. If it's not made secret enough, the dispersion of knowledge may endanger others or create new threats [1]. Governments are fortunate to have brilliant people working on AI tools for complex tasks such as classification, and it's likely that some of the technologies they develop will make their way into other realms of the bureaucratic-administrative complex. I am concerned, however, that the proliferation of AI helpers will add layers of complexity rather than reduce them. It's easy to imagine a self-reinforcing feedback loop that would lock in AI: Things could become so complicated that they are indecipherable without artificial assistance.
For example, one area of bureaucratic drudgery is interpreting the building code, which has myriad benefits, including making buildings safer in foreseeable events such as fires and earthquakes. The code even has the potential to reduce (or reverse, if we're really optimistic) negative impacts on Earth's ecosystems. "The" is a sleight of hand. It would be more accurate to write "building codes," as one of the valid reasons that building codes are universally vexing is that they are not universal. Most jurisdictions in the U.S. have adopted a version of the code written by the International Code Council, while governments in Europe and Asia use different standards. In the U.S., a new version is typically released every three years, at which point it can be adopted—with or without amendments—by local jurisdictions or states. The result of this ongoing process of revision, modification, and amendment is a patchwork of rules laced with cross-references to appendixes, tables, and other standards. Your state may have adopted the latest code but left out the latest requirements for energy conservation. In fast-growing parts of Arizona, which follows a bottom-up philosophy of government called home rule, different suburbs of Phoenix use different versions of the building code. This creates a distorted market where it's cheaper to build or remodel in some jurisdictions because they don't, for example, require as much insulation. It also creates a real headache for the people involved in making a new building or remodeling an existing one. For architects and builders who want to ensure that what they design and build is up to code, missing the mark can mean costly revisions to drawings. Sometimes it can even mean ripping out and redoing completed work. And for plans examiners, the people whose job is to review submitted plans for code compliance, repeatedly checking plan sets that can run to hundreds of pages must feel like a Sisyphean task.
I suspect being given the task of examining plans could turn any AI skeptic into a booster. To start with, despite efforts to standardize a digital file format for building plans, plans are often submitted as paper documents in the U.S. Simply looking at a set of plans is cumbersome because of their size. Even if they are viewed in a digital format on a large, high-resolution monitor, the density of information on a 36-by-24-inch sheet, one of the smallest standard sizes, requires a lot of scrolling. Understanding what's going on also requires cross-referencing multiple drawings typically spread across separate sheets. This is true even for small jobs. For example, if you are remodeling your kitchen and removing a beam, the plans examiner will likely take one trip from the floor plan to at least one, and likely two, separate drawings, each on its own sheet, with structural information for the beam. They will then visit another sheet to glean information about whether the distribution and location of electrical receptacles meet the requirements. Projects that might seem easier on the surface because they are repetitive, such as hotels or apartment buildings, can have subtle differences between otherwise identical units, requiring the examiner to have a seemingly inexhaustible capacity for focus. I am sure there are some people who would not find this tedious—and I hope they are all plans examiners—but I am certainly not one of them.
Could AI do this? Probably, and at the rate things are going, it may well be doing these tasks already. Some things are cut-and-dried in the code—for example, beam sizing can be checked by cross-referencing tables—but others are not so clear. What if the beam is a nonstandard size of wood or steel? Here, AI may be able to identify the required structural calculations and crunch the numbers. The answer to what happens if it's wrong is simultaneously simple and complicated. The building might fall down. If it does, whose fault is it? Other things require more nuance. Like beam sizing, many aspects of building codes are matters of life safety. This is the case with the electrical code, and in particular several provisions that pertain to the location and type of outlets in kitchens and bathrooms. They are intended to prevent the unfortunate consequences of bathing with a hairdryer or drunkenly dunking your toaster, or a curious child pulling a simmering slow cooker off the countertop. The code seems clear. When looking at plan drawings, however, you start to appreciate how designing and remodeling spaces requires thinking not only about the intent of the code but also about the likely use (or misuse). This is where the best use of AI may simply be to flag an area of concern: Hey, human! This needs your attention.
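To make that "flag, don't decide" idea concrete, here is a minimal sketch, in Python, of what such a helper might look like for the countertop-outlet case. Everything in it is illustrative rather than authoritative: the function name is made up, the 24-inch reach figure merely echoes a common U.S. spacing rule for countertop receptacles, and the real electrical code carries exceptions this toy ignores. What matters is that it raises flags for a human rather than issuing approvals.

```python
# Illustrative sketch only: a rule-based check that flags rather than decides.
# The 24 in. figure echoes a common U.S. countertop receptacle spacing rule,
# but real code provisions include exceptions this toy ignores.

def flag_countertop_receptacles(counter_length_in, receptacle_positions_in,
                                max_reach_in=24.0):
    """Return human-readable flags for stretches of a straight countertop
    that may sit farther than max_reach_in from the nearest receptacle.
    Positions are measured in inches along the counter."""
    flags = []
    if not receptacle_positions_in:
        return ["No receptacles shown on this counter run."]
    positions = sorted(receptacle_positions_in)
    if positions[0] > max_reach_in:
        flags.append(f"Counter start is {positions[0]:.0f} in. from the "
                     f"nearest receptacle (limit {max_reach_in:.0f} in.).")
    for a, b in zip(positions, positions[1:]):
        if (b - a) / 2 > max_reach_in:
            flags.append(f"Midpoint of the {b - a:.0f} in. gap between "
                         f"receptacles at {a:.0f} and {b:.0f} in. is out of reach.")
    if counter_length_in - positions[-1] > max_reach_in:
        flags.append(f"Counter end is {counter_length_in - positions[-1]:.0f} in. "
                     f"from the nearest receptacle (limit {max_reach_in:.0f} in.).")
    return flags

if __name__ == "__main__":
    # A 10-foot counter with receptacles 12 in. and 96 in. from one end.
    for flag in flag_countertop_receptacles(120, [12, 96]):
        print("Hey, human! " + flag)
```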
Not coincidentally, this is how AI is being developed for critical uses, such as document classification and lethal military actions (e.g., target tracking). When applied this way, the question of whether to use AI becomes much more a question of how it should be used. It also shifts the nature of the discussion from a binary choice about whether or not a given technology should be adopted to a much more nuanced decision about how it should be designed. For instance, using building code tables results in beams and other parts of buildings routinely being oversized. Making a table that renders the job easy for a human user, along with building in a margin of error, served us well in the past. But in a future of constrained material resources and abundant computational resources, perhaps we'd be better off letting AI run the numbers and suggest a smaller, cheaper, and less resource-intensive beam instead.
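As a rough illustration of what "running the numbers" means here, consider the bending check that sits behind a sawn-lumber beam table for a simply supported, uniformly loaded member. The sketch below is mine, not anything from the code: the load, span, and allowable stress are placeholder values, and a real design would also check shear, deflection, bearing, and adjustment factors. Under these assumed numbers, a direct calculation shows a 2x10 doing a job that a conservative, one-size-fits-many table might round up to a 2x12.

```python
# A toy bending check, not engineering advice. Load, span, and allowable
# stress are illustrative placeholders; real beam design also covers shear,
# deflection, bearing, and adjustment factors that this sketch omits.

def required_section_modulus(w_plf, span_ft, allowable_bending_psi):
    """Required section modulus (in^3) for a simply supported beam under a
    uniform load: S = M_max / F_b, with M_max = w * L^2 / 8."""
    m_max_in_lb = (w_plf * span_ft ** 2 / 8) * 12  # ft-lb to in-lb
    return m_max_in_lb / allowable_bending_psi

def section_modulus(width_in, depth_in):
    """Section modulus of a rectangular section: S = b * d^2 / 6."""
    return width_in * depth_in ** 2 / 6

# Actual (dressed) dimensions, in inches, of common nominal lumber sizes.
LUMBER = {"2x8": (1.5, 7.25), "2x10": (1.5, 9.25), "2x12": (1.5, 11.25)}

if __name__ == "__main__":
    w, span, fb = 90.0, 12.0, 1000.0  # plf, feet, psi -- placeholders
    s_req = required_section_modulus(w, span, fb)
    print(f"Required section modulus: {s_req:.1f} in^3")
    for name, (b, d) in LUMBER.items():
        s = section_modulus(b, d)
        print(f"  {name}: S = {s:.1f} in^3 -> {'adequate' if s >= s_req else 'too small'}")
```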
I think of design, loosely following Herbert Simon, as acting with the intention to influence the future by shifting what exists to what's preferred [2]. I am also partial to the idea, advanced by Richard Boland and Fred Collopy, of a designerly way of thinking. It means acknowledging that, even though we are being presented with a limited set of choices—let's call them A, B, and C—the best option might be D, or even X or Z, or perhaps ‹°‡&# [3]. Here, it is helpful to examine the training algorithms that influence the decisions AI is capable of making. Because it's trained on what has happened so far, AI is best suited to choosing among A, B, and C. Clever programming or prompts might get it to identify close-range alternatives D, E, or even F. But finding Z, and certainly ‹°‡&#—economic and elegant ways to solve problems, or lightning bolts of innovation? That's a task for humans working together with trust and focus.
Do we want our plans examiners to use AI to count outlets? Or would it be a better use of their time to identify and swiftly permit projects that solve bigger problems, and to use AI to help designers and builders find ways to reduce material and energy waste? The plea here is not to let our governmental processes (not to mention civil servants) become mired in additional layers of AI-enabled complexity, regulation, and compliance. Rather, let's remember that utopian dream and find more opportunities to liberate people from the mind-numbing tasks of counting electrical outlets and looking up beam sizes. Housing affordability, extreme heat events, flooding, ecosystem stress, and the other problems that confront us, and those working on our behalf in government, are inherently complex and interconnected. It's going to take human intelligence and a whole lot of working together to get to a future that is truly better for everyone.
1. Stites, M.C., Howell, B.C., and Baxley, P. Assessing the impact of automated document classification decisions on human decision-making. Human Factors and Simulation 83 (2023), 254–264.
2. Simon, H.A. The science of design: Creating the artificial. Design Issues 4, 1/2 (1988), 67–82.
3. Boland, Jr., R.J. and Collopy, F. Design matters for management. In Managing as Designing, R.J. Boland and F. Collopy, eds. Stanford University Press, Stanford, 2004.
Jonathan Bean is director of the Institute for Energy Solutions and an associate professor at the University of Arizona. He studies taste, technology, and market transformation. [email protected]
Copyright held by owner/author