Policymakers have tools available to mitigate the economic instability and insecurity that AI-driven job displacement will create, whenever and wherever it arises.
There’s a tussle over the future of AI regulation.
One camp insists that “x-risk,” or existential risk, warrants the preponderance of regulatory focus. Another camp demands that privacy be the primary concern. A third cohort wants climate impacts to rise to the top of the agenda.
With U.S. politicians and agency officials unwilling to take a side, the National Institute of Standards and Technology (NIST) recently issued a “profile” on the risks generated by the research, development, deployment, and use of generative artificial intelligence (AI). Rather than concentrate on a small set of risks, NIST seemingly appeased each of the warring camps.
The NIST profile covered 12 risks, from chemical and biological threats to data privacy and harmful bias. Shockingly absent from the profile: "j-risk," or job risk.
J-risk is not a future concern. Americans previously employed in meaningful work have already been displaced by AI. Few signs suggest this trend will abate. Most evidence suggests it will accelerate.
AI will replace American workers; what's less certain is when, how, and to what extent. Policymakers can avoid j-risk's worst trajectories only by developing robust and novel social security programs aimed at displaced workers.
J-risks have received insufficient attention in AI policy debates. Labor markets will continue to experience unexpected and significant disturbances as AI advances. Rather than place excess hope in optimistic economic forecasts or assume a reactive regulatory posture, lawmakers should pursue anticipatory governance strategies. Two courses of action can further this approach: first, gathering more information on AI's effects on labor and, second, creating more responsive economic security programs. These efforts would not only reduce the uncertainty surrounding j-risks but also stem the resulting long-term harms.
The Scope and Severity of J-Risk
The rate, timing, and location of job displacement by generative AI lie beyond the predictive powers of even the most attentive journalist or economist. But it's clear that AI has already disrupted specific labor markets. Just ask any one of the thousands of video game industry workers displaced by generative AI.
Unlike the video game characters they create, developers do not simply respawn at a new company when their job or employer gets eliminated. According to a recent Wired investigation, more than 10,000 industry workers were laid off in 2023 due, in part, to increased adoption of generative AI. By about halfway through 2024, that number had already surpassed 11,000. A portion of those losses reflects Microsoft's decision to ax two gaming studios, Tango Gameworks and Alpha Dog Games.
Whether generative AI deserves the majority of the blame for each of those lost jobs is a question beyond the time and capacity of even the best investigative journalists to settle. What's clear is that AI has transformed a major industry. A survey of industry workers unsurprisingly revealed broad opposition to increased use of AI in game production and, perhaps even less surprisingly, indicated that about half of gaming companies had already adopted AI in some form.
Despite these negative impacts, the AI genie likely won't go back into the gaming industry bottle. Industry adoption of a new technology tends to be a one-way ratchet. Theoretically, lawmakers could have banned the spinning jenny; in practice, whatever technology or practice increases productivity has withstood regulatory restrictions. Sure, "organic" or "handmade" production techniques may claim a fraction of the market, but regulators have left it to consumers to decide the merits of one type of good over another. The proliferation of AI within gaming companies seems unlikely to dissipate. Wired's reporting uncovered emails from individuals across the gaming industry whose contents made clear that the spread of AI within the biggest studios as well as the smallest upstarts means that "an already precarious industry [is] getting further squeezed by the rise of AI." As Wired's Brian Merchant summarizes:
Managers at video game companies aren’t necessarily using AI to eliminate entire departments, but many are using it to cut corners, ramp up productivity, and compensate for attrition after layoffs. In other words, bosses are already using AI to replace and degrade jobs. The process just doesn’t always look like what you might imagine. It’s complex, based on opaque executive decisions, and the endgame is murky. It’s less Skynet and more of a mass effect—and it’s happening right now.
The net effect of AI-driven job displacement across the economy has drawn substantial scholarly debate. Some, like economist David Autor, insist that AI represents another wave of technological disruption, one that may disrupt the labor market but produces enough new jobs to more than make up for the losses. Autor goes so far as to anticipate that AI may actually increase the well-being of middle-class workers by empowering less experienced workers in occupations like law and computer engineering to perform as well as their more skilled colleagues. The resulting bump in productivity should drive wages up as well, the argument goes.
However, the thorns of empirical analysis have punctured that rosy picture. Some analysis suggests that AI-driven productivity gains tend to favor high-income workers, further diminishing the economic standing and security of lower wage workers.
A long-term view of this research reinforces the picture of AI as an engine of inequality rather than an opportunity machine. The thinking goes that as AI moves beyond taking over single tasks and eventually replaces whole jobs, there may not be a corresponding increase in other opportunities. In short, "it seems plausible that leading AI systems could start to eat up a larger share of production in occupations where they are [currently] deployed," according to Sam Manning of GovAI. If that is indeed the case, then "[i]n sectors where AI automation significantly reduces production costs," Manning forecasts that "businesses may choose to reduce their workforce if consumer demand for their products or services doesn’t increase enough to offset the productivity gains." The net effect will be fewer jobs and lower wages in the affected industries.
Mitigation of such mass effects, rather than a sole focus on innovation, should guide AI policy. Though the timing, distribution, and severity of AI's negative effects on labor may be difficult to calculate, the certainty that those effects will arrive at some scale merits a reexamination of the support available to workers on the emerging technology's losing end.
A Reminder of Failed Forecasts of the Distribution of Benefits From Globalization
Globalization, a process marked by a surge in the exchange of goods, services, people, and technology across borders, offers a cautionary tale for those clinging to AI's brighter futures. Only with the benefit of hindsight are experts such as Anne Applebaum now able to recognize that they may have overestimated the benefits of globalization and underestimated the scale, scope, and staying power of its drawbacks.
This was not the way things were supposed to play out. World leaders touted globalization as an unstoppable force for economic good. Instead, globalization helped entrench and expand the well-being of already well-off Americans while the majority of American workers helplessly watched their wages wither and their jobs vanish. More than 3.7 million Americans lost their jobs between 2001 and 2018 due to the effects of globalization.
Globalization's net effect on the average American is not a closed case. There are strong arguments that the alternative, comparatively less engagement with foreign trading partners, would have resulted in worse outcomes for more Americans. Regardless of the outcome of the debate over the long-term effects, there is no questioning that policymakers mismanaged globalization's short-term effects. Rural communities across America have faced pronounced economic uncertainty in light of shifting trade policies and variable government support for displaced workers. Galesburg, Illinois, serves as an unfortunate case study. The city lost a Maytag factory in the early 2000s and has yet to recover. Between 1999 and 2013, median household income there dropped by 27 percent. Many other towns could share a similar story and bemoan the absence of a recovery.
Other communities would also have benefited from proactive policies to stem the anticipated and fairly predictable economic pain wrought by globalization. The contraction of America's manufacturing sector has been particularly hard on Black Americans. Between 1998 and 2020, Black workers lost nearly 650,000 manufacturing jobs. The substitute jobs occasionally made available to those workers rarely carried the same benefits and pay. The burden placed on those workers has weighed on their entire communities as well. Tax revenue has diminished. Populations have shrunk. Policy Band-aids have yet to realize their intended effects; worker retraining programs, for example, have a spotty track record of aiding workers and reinvigorating the broader community.
Policymakers seem not to have fully learned these lessons. AI, like globalization, is destined to introduce economic instability and insecurity. The uncertainty as to when, where, and for how long that economic turbulence will occur does not excuse congressional inaction.
Policy Solutions
Congress has a menu of options available to stem the worst effects of AI on the labor market and communities in economically precarious positions. A full review of the ins and outs of these options lies beyond the scope of this piece. The high-level legislative agenda should turn on two tasks: first, developing the tools to more accurately and quickly determine how AI will affect certain industries and, second, ensuring Congress has the requisite authority to implement responsive policies.
Measuring AI Job Loss
Methodological flaws beset the most popular current methods for determining whether a job or industry may experience AI-driven turbulence. The economists and scholars working to identify at-risk professions have relied on imprecise and artificial measures. In particular, a hyperfocus on discrete tasks and narrow measures of productivity may make current estimates inaccurate because they omit key factors related to the use of AI in the workplace. A more holistic assessment, writes Sam Manning of GovAI, would include "barriers to AI adoption, changes in demand for workers’ outputs, the complexity of real-world jobs, future AI progress, [and] new tasks and new ways of producing the same outputs."
If Congress wants to more accurately and quickly identify industries at risk of AI upheaval, lawmakers can start by investing in detailed, long-term studies of the use of AI in the workplace. The Bureau of Labor Statistics (BLS) seems well suited to take the lead on this effort. As of now, the BLS has yet to take on such research. The "Monthly Labor Review" blog operated by the BLS last addressed displacement in May 2023. Its consideration of technological change has been slightly more frequent but far from sufficient when it comes to informing Congress's regulatory agenda. Though the BLS produces and updates numerous data tools, such as COVID-19 Economic Trends, no such tool exists for AI-driven displacement.
Congressional Authority
Current unemployment insurance and retraining programs do not seem equipped to respond to the speed and scope of job loss posed by AI. Absent emergency measures, the unemployment insurance program will fail to support those bearing the brunt of rapid job loss. For one thing, it excludes many individuals, such as app-based workers, who may require assistance. Additionally, it provides meager support to those who do qualify. These shortcomings are particularly problematic because sudden advances in AI could simultaneously render many individuals jobless across a wide range of industries.
Those same advances in AI may render some retraining futile. The alternative jobs that displaced individuals train for may involve tasks and responsibilities akin to those of the recently eliminated profession. A displaced video game designer, for instance, may no longer find roles producing 2D content and instead train to design 3D content. That new skill may give the worker a leg up over AI for a limited period, perhaps a few months or years, before a new AI model takes over that type of work as well. Current retraining programs do not have a strong track record of providing the sort of ongoing training required in a race against AI. Band-aid solutions, such as increased funding for those programs, may not be enough.
Rather than attempt to have a large fraction of the American labor force outsprint AI, Congress may consider nudging workers into different fields of work entirely. This may be the right time for a national service program that provides adults with meaningful and sustained opportunities to contribute to their communities. Such opportunities may allow displaced workers to refine AI-resilient skills, such as managing and leading others or exercising tremendous creativity, while also doing good for their neighbors and community.
A more immediate and measurable step may be to ensure displaced workers have access to emergency financial support in the wake of swift and broad layoffs. A program akin to the economic impact payments relied on during the coronavirus pandemic may similarly prevent affected individuals on the lower end of the income spectrum from sliding too far down the economic ladder. This emergency displacement fund could be financed, in part, by taxes on the AI labs inducing these labor disturbances, on corporations rapidly adopting AI, or on some combination of the two.
***
Prioritization of this regulatory agenda would not have to come at the expense of other AI governance approaches. The same tools and experts involved in monitoring the employment effects of AI may aid in detection and prevention of other AI risks. Likewise, a more robust economic emergency relief program would come in handy upon the manifestation of other AI risks, such as in the event of a public health crisis brought on by deployment of an AI-designed bioweapon.
Advances in AI will continue to challenge policymakers by forcing them to prioritize certain concerns and risks above others. Not all risks, though, warrant the same level of attention. Measures to address worst-case AI scenarios, for instance, may have only marginal utility in addressing other bad outcomes. Creation of AI "kill-switches," by way of example, will be of little import when AI threatens the livelihood of an entire profession. Conversely, steps to prepare for more likely and better understood AI risks, like labor displacement, may increase the overall resilience of the government and public. Action on j-risks may also attract bipartisan support, an important attribute given regulatory stagnation on the Hill.
How best to address j-risks remains an open question—but ignoring a problem will never lead to its solution.
– Kevin Frazier is an Assistant Professor at St. Thomas University College of Law. He is writing for Lawfare as a Tarbell Fellow. Published courtesy of Lawfare.