- Introduction
- I. Ethnonationalism in the United States
- II. How Trump AI Policies Advance Ethnonationalism
- III. Containing Ethnonationalism in Government AI
- Conclusion
Introduction
Artificial intelligence has become a new front in the legal and political contest over American identity. On his first day back in office, President Donald J. Trump repealed Executive Order 14,110—Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence—which required federal agencies to implement safeguards against algorithmic discrimination.[1] Three days later, he signed Executive Order 14,179—Removing Barriers to American Leadership in Artificial Intelligence—which cast equity protections as ideological constraints on innovation and reoriented federal AI policy around the goal of global technological dominance.[2] In July 2025, the Administration released Executive Order 14,319—Preventing Woke AI in the Federal Government—which not only ignored well-documented concerns about bias but affirmatively barred agencies from procuring AI systems fine-tuned to reduce racial bias.[3] Five months later, the Administration issued Executive Order 14,365 to challenge state AI laws it deemed “onerous” and to withhold federal broadband funding from such states, explicitly targeting state prohibitions on algorithmic discrimination.[4]
These executive orders would be troubling even if AI were speculative or peripheral to government decision-making. But they are unfolding against a backdrop in which automated systems already structure routine exercises of state power—often at scale, and often in contexts in which error or bias can impose durable harm.
Federal agencies already use prediction and scoring systems across thousands of government functions.[5] Customs and Border Protection’s Automated Targeting System assigns risk assessments based on travel and cargo data,[6] and other Department of Homeland Security components use AI for screening, biometric identification, and fraud detection.[7] The Transportation Security Administration prescreens passengers through its data-driven Secure Flight program,[8] the IRS uses AI-assisted analytics to help identify returns and claims for audit,[9] and the Department of Health and Human Services deploys AI to detect fraud.[10] Components within the Department of Justice (“DOJ”) likewise deploy AI-enabled systems, including facial recognition tools used to generate investigative leads, FBI algorithms that triage threat reports for human review, and risk-prediction models that inform inmate security classification and recidivism-related programming.[11]
Agencies are also integrating evaluative and generative AI into routine administrative work. The State Department has used AI to assist Foreign Service Selection Boards in evaluating personnel for promotion and reassignment.[12] U.S. Citizenship and Immigration Services deploys an AI-based chatbot to communicate with applicants,[13] and the U.S. Department of Veterans Affairs has rolled out generative-AI tools to help employees draft emails and summarize documents.[14] In August 2025, the General Services Administration negotiated government-wide access to ChatGPT at nominal cost—accelerating the diffusion of large language models across agencies even as the Administration dismantled equity-centered guardrails for their procurement and use.[15]
Together, these developments mark a decisive shift in federal AI governance. The Trump Administration’s orders inaugurated a systemic rollback of civil rights safeguards across the federal government. The Office of Management and Budget (“OMB”) replaced detailed civil rights-oriented guidance on AI with vaguer exhortations to mitigate risks to “civil liberties” and “civil rights,” while rescinding requirements that agencies assess disparate impacts, proxy discrimination, and demographic disparities.[16] The National Institute of Standards and Technology (“NIST”) deleted references to “AI fairness” from its agreements with consortium partners and halted its support for research on racial discrimination in AI models.[17] The Equal Employment Opportunity Commission (“EEOC”) and the Department of Labor withdrew guidance for employers and contractors on avoiding algorithmic discrimination in hiring,[18] and other agencies took similar actions.
This Article argues that these measures were not merely deregulatory—they reflect a broader political project. The Trump Administration’s AI policy advances a vision of digital ethnonationalism: an effort to embed racial and cultural exclusion and hierarchy into the algorithmic infrastructure of public life.[19] AI systems trained on biased data have already reproduced discrimination in areas such as employment, housing, lending, voting, and policymaking.[20] Because these systems typically handle diversity mathematically, defaulting to averages or dominant patterns, they often homogenize culture and suppress divergent perspectives.[21] Left unchecked, these systems will entrench historical inequities and narrow public discourse. And because the United States remains the global leader in AI development, these harms are not confined to domestic institutions—they are exported abroad.[22]
Ethnonationalism—an ideology that defines national belonging in terms of shared ancestry, culture, and language and often justifies state policies favoring a dominant ethnic group—has reemerged globally in the past decade.[23] It has been fueled by demographic change, cultural and economic anxiety, nativism, and racial resentment.[24] In the United States, this movement has taken the form of increasingly explicit efforts to banish racial diversity from the center of national political life. While ethnonationalism has shaped policy on immigration, education, history, and civil rights,[25] its influence on emerging technologies—particularly AI governance—has largely escaped sustained legal analysis. This Article seeks to fill that gap.
In its second term, the Trump Administration has openly embraced an exclusionary vision of the nation. It has dismantled Diversity, Equity, and Inclusion (“DEI”) infrastructure, restricted language access, purged public institutions of narratives about racial subordination, and attempted to reinterpret constitutional guarantees to narrow the bounds of citizenship.[26] Often overlooked, however, are its interventions in AI policy—ostensibly race-neutral, but substantively aligned with a campaign to entrench cultural dominance by redefining neutrality itself.[27] While the Trump Administration’s AI policies also exacerbate harms related to gender, gender identity, sexual orientation, and disability, this Article centers on race and ethnicity because of their distinctive role in American legal development and their centrality to contemporary ethnonationalist efforts to define membership within the American polity.
This Article contends that the core values of a racially inclusive democracy—fairness, pluralism, authenticity, and autonomy—should guide the development of AI law. It proposes the Equitable AI in Government Act, a statutory framework designed to embed those principles into the use and procurement of AI by federal agencies and contractors. Drawing on democratic theory, civil rights law, and emerging but insufficient regulatory frameworks in the European Union and several U.S. states, the Act imposes baseline obligations as well as heightened scrutiny for high-risk uses of AI. The proposal responds directly to the Administration’s use of procurement and administrative guidance as tools of ideological enforcement. Even if Congress fails to act, the Act’s provisions are structured for adoption by states and localities through legislation or executive order. While the Act does not address every threat posed by private-sector AI, the government is a logical starting point. It bears a distinct obligation to serve a diverse public and has the institutional authority to shape norms through standard-setting.
This Article makes three primary contributions. First, it offers the first comprehensive analysis of how the Trump Administration’s AI policies advance its broader agenda to suppress racial and ethnic diversity, equity, and inclusion through executive orders, OMB memoranda, and procurement mandates. Second, it provides the fullest account to date of how the Trump Administration’s failure to confront bias, homogenization, deception, and manipulation in its use and acquisition of AI poses serious risks to democracy at this pivotal stage of AI’s evolution.[28] Third, it proposes a legislative framework that can guide policymakers as soon as political conditions allow for meaningful reform.
The stakes extend beyond any single presidential administration. Longstanding institutional failures—rooted in polarization, legislative dysfunction, and judicial resistance to racial inclusion—have left the United States without stable frameworks to govern two of the most consequential forces shaping its future: emerging technologies and accelerating racial and ethnic demographic change. In the absence of statutory consensus in either domain, the executive branch has increasingly shaped national priorities through procurement, regulation, funding, and model guidelines. This Article calls for a more durable legal architecture that ensures AI is not simply faster or more efficient but more representative and inclusive.
Part I situates the current moment within the broader historical struggle between ethnonationalism and democratic inclusion. Part II defines artificial intelligence, details the ways in which unregulated AI undermines racially inclusive democracy, and analyzes how the Trump Administration’s AI policies advance its broader effort to marginalize racial diversity in public life. Part III introduces the Equitable AI in Government Act, details its significance, and defends its viability and constitutionality. Together, these Parts aim to reorient the legal imagination toward an AI future that reflects—not erases—the full diversity of the American polity.
I. Ethnonationalism in the United States
This Part traces the recent resurgence of ethnonationalism in the United States. It examines how American law and policy have long defined political belonging in racial and cultural terms—often excluding communities of color, subordinating them, or pressuring them to assimilate—and how the second Trump Administration has resurrected this project.
A. Ethnonationalist Origins and the Rise of Racially Inclusive Democracy
Ethnonationalism is a political ideology that seeks to limit political rights to members of a single or a limited number of racial, ethnic, or cultural groups.[29] Democracy is reserved for those deemed part of the “true” nation, while others are excluded or pressured to assimilate.[30] Cloaked in appeals to freedom, tradition, and national identity, ethnonationalist regimes seek to marginalize the political liberties (e.g., voting, association, speech, due process) of out-groups through laws, policies, and cultural narratives.[31] These regimes often frame racial and cultural diversity—as seen in immigration, schools, workplaces, and neighborhoods—as a threat to national cohesion.
Ethnonationalism was a defining feature of the origins of the United States. The original Constitution subsidized race-based slavery,[32] and nearly every state limited voting to white males or would later do so.[33] The Naturalization Act of 1790 limited naturalized citizenship to “free white person[s],”[34] and, along with similar laws, effectively gerrymandered the racial composition of the U.S. population in a way that ensured white demographic and political majorities that persist to this day.[35] Supreme Court decisions ratified this foundation. For example, the Court ruled that the doctrine of conquest gave the United States exclusive rights to Indigenous lands, and that the Constitution did not recognize even free Black people as citizens.[36]
Reconstruction marked the first widespread effort to dismantle ethnonationalism in the United States, but it was short-lived. The Reconstruction Amendments and federal legislation enfranchised Black voters who elected Black Americans to over two thousand local, state, and federal offices.[37] By the late 1870s, however, federal withdrawal allowed white Southern Democrats to suppress Black voters and restore white rule through violence and voting restrictions.[38] The federal government deepened its retreat from racially inclusive democracy when, responding to white backlash against the growing prominence of educated Black Washingtonians and calls for government “efficiency,” the Wilson Administration systematically purged Black federal employees and resegregated federal workplaces.[39]
Although the Supreme Court affirmed in 1898 that the Fourteenth Amendment guarantees citizenship to all born in the United States (including people of color),[40] in the early twentieth century it repeatedly held that Congress had the power to exclude non-white immigrants from naturalized citizenship.[41] The 1924 immigration quotas heavily favored western and northern Europeans, and completely banned Asian immigrants.[42] Congress also delayed statehood for territories like Alaska, Arizona, Hawaii, New Mexico, and Oklahoma, in part because their populations were not sufficiently white.[43] The Court sanctioned the denial of constitutional rights in Puerto Rico and other “distant possessions” by citing “differences of race, habits, laws, and customs” of their inhabitants.[44] It also ruled that Indigenous people born on Tribal lands in the U.S. were not entitled to birthright citizenship and could be denied voting rights.[45] The federal government sought to eliminate Tribal language, religion, and identity during the “Termination Era” of the 1950s and 1960s through various assimilationist measures.[46]
Congress enacted landmark statutes in the 1960s that marked a significant shift away from ethnonationalism and toward establishing a racially inclusive democracy.[47] The Civil Rights Act of 1964 ended de jure racial segregation, prohibited employment discrimination, and barred discrimination by programs receiving federal funds.[48] The Voting Rights Act barred literacy tests, authorized litigation against discriminatory voting practices, and required jurisdictions with histories of discrimination to preclear changes to their voting practices with federal officials before implementing them.[49] A decade later, Congress expanded the law to protect language minority populations.[50] The Immigration and Nationality Act of 1965 explicitly barred racial discrimination in the immigration process, repealed quotas that favored northern and western Europe,[51] and significantly increased the share of immigrants from Asia and Africa.[52] Finally, the Fair Housing Act of 1968 prohibited discrimination in housing opportunities, ending de jure racial redlining (even if de facto exclusionary policies and practices continue to this day).[53]
These statutes catalyzed a shift toward racially inclusive democracy.[54] In 1960, 75% of immigrants living in the United States were from Europe; by 2022, that figure had dropped to 10%.[55] People of color grew from 15% of the U.S. population in 1960[56] to 41% by 2024,[57] and are projected to become a majority by 2045.[58] Representation in politics has changed as well. In 1960, for example, fewer than 3% of U.S. House members were people of color.[59] Today, that figure stands at 28%.[60] A racially diverse coalition twice elected the nation’s first Black president.[61] Americans tend to agree that institutional change is required to overcome racism,[62] and a sizable majority believe that America’s diversity makes us stronger.[63] Racial disparities in income and poverty rates have also decreased since 1960, although they remain significant.[64]
But this transformation triggered a backlash. Race became a dominant force in partisan affiliation and voting patterns.[65] Political scientist Ashley Jardina found that 30% to 40% of white Americans identify with being white—that is, they “possess a sense of racial identity and are motivated to protect their group’s collective interests and to maintain its status.”[66] She found that white identity is “becoming a more salient force in American politics”[67] because many people feel as though they are losing power and status.[68] Jardina and Robert Mickey assert that:
Some whites’ opposition to democratic principles is rooted, at least in part, in a rejection of racial pluralism; concerns regarding the political claims of racial and ethnic minorities; and the belief that the democratic system works better for people of color, whom they consider less deserving of its benefits.[69]
This backlash has shaped several state laws and federal policy proposals, and set the stage for the second Trump Administration.[70]
B. The Second Trump Administration’s Policies to Dismantle Racially Inclusive Democracy
In its first year, the second Trump Administration launched a sweeping campaign to suppress racial diversity and entrench ethnonationalism across the federal government. Through executive orders, agency guidance, and formal rulemaking, it sought to dismantle civil rights protections, erase data on racial disparities, defund equity initiatives, reinterpret foundational constitutional principles, and police history, language, and the nation’s ethnic composition.[71] This subpart catalogs these actions to illustrate the Administration’s broader ethnonationalist framework and lay the groundwork for Part II’s analysis of how its artificial intelligence policies further that project.[72]
These efforts rest on an ideological premise: that race-conscious efforts aimed at acknowledging and addressing systemic discrimination and building a racially inclusive democracy are themselves forms of unlawful bias and political favoritism. Despite overwhelming evidence of racial disparities and explicit and implicit bias in government, industry, and education,[73] the Trump Administration portrayed diversity initiatives as threats to “merit” and “unity”—casting white Americans as the victims of an imagined “identity-based spoils system.”[74] Many of the Trump policies rest on rhetorical strawmen—alleging widespread use of “unlawful” diversity, equity, and inclusion practices—without identifying specific legal violations.[75] The Administration advanced its political objectives through an overly broad and erroneous reading of the Supreme Court’s decision in Students for Fair Admissions v. Harvard, a case that invalidated race-based affirmative action in university admissions policies.[76] That case did not hold that it is unlawful to be aware of the racially disparate impact of existing policies or to adopt race-neutral reforms that reduce racial disparities.[77] Nonetheless, the Administration has treated the mere consideration of racial disparities as suspect—even when such consideration is used to identify unfair and exclusionary practices.
On its first day in office, for example, the second Trump Administration announced it would eliminate all federal government DEI programs and policies, including Chief Diversity Officer roles, Equity Action Plans, and equity-related grants or contracts.[78] It dismantled or defunded key agencies and initiatives that address structural inequality, such as the National Institute on Minority Health and Health Disparities,[79] the Minority Business Development Agency,[80] the Community Development Financial Institutions Fund,[81] the Office of Federal Contract Compliance Programs,[82] the EPA’s Office of Environmental Justice,[83] and the National Telecommunications and Information Administration’s Digital Equity broadband grant program.[84]
Investigations, or the threat of investigation, also chilled lawful diversity programs. The Secretary of Education issued a “Dear Colleague” letter warning federally funded schools that diversity, equity, and inclusion practices in admissions, hiring, scholarships, housing, or training could violate Title VI and jeopardize federal funding.[85] The Department of Education initiated investigations into five universities for offering scholarships to students enrolled in Deferred Action for Childhood Arrivals (“DACA”)—a program that defers deportation for certain undocumented individuals brought to the United States as children—alleging that such programs discriminated against U.S.-born students in violation of Title VI’s prohibition on national origin discrimination.[86] Other executive orders and agency directives threatened to condition federal student aid eligibility on enrollment at colleges and universities accredited by entities that do not consider diversity,[87] and signaled potential limits on federal loan forgiveness for employees of immigration and racial justice organizations.[88]
These efforts to chill diversity programs extended outside of education. The Administration cited diversity policies of four law firms in revoking the security clearances of their employees and barring them from federal buildings.[89] The Federal Communications Commission opened an investigation into Comcast and NBCUniversal to ensure that they were “not promoting invidious forms of discrimination” through their promotion of diversity,[90] and threatened to block merger approvals for companies with diversity programs.[91] T-Mobile, Verizon, and other corporations ended their diversity initiatives to secure such approvals.[92]
The Administration further institutionalized this campaign against diversity initiatives through a September 2025 Office of Management and Budget memorandum that—by adopting the DOJ’s expansive proxy-discrimination framework—treats common tools for identifying and addressing racial exclusion as presumptively unlawful and directs federal agencies to eliminate funding to entities that rely on them.[93] The memorandum treats a wide range of equity-oriented practices—including the use of demographic data, diverse candidate pools, socioeconomic status, first-generation status, underserved geographic areas, cultural competence, diversity statements, and other facially neutral measures—as unlawful tools or proxies for protected characteristics.[94] It warns that entities receiving federal funds, including state and local governments, educational institutions, and public and private employers, risk losing funding if they pursue programs designed to increase demographic representation.[95] The DOJ reportedly pursued investigations of diversity programs at major companies that are federal contractors in the automotive, pharmaceuticals, defense, utilities, and technology sectors under the False Claims Act, asserting that companies holding a federal contract while considering diversity in employment are committing fraud against the federal government and risk being liable for treble damages.[96]
The Administration also dismantled core civil rights enforcement tools. Executive Order 14,281 characterized disparate-impact liability as pressuring actors to consider race and to engage in racial balancing. The Administration directed agencies to “eliminate the use of disparate-impact liability in all contexts to the maximum degree possible,”[97] to reconsider pending proceedings and consent decrees relying on the doctrine, and to explore federal preemption of state-level disparate-impact protections.[98] In December 2025, DOJ finalized a rule rescinding the disparate-impact provision of its Title VI regulations, narrowing federal oversight of federally funded entities by foreclosing disparate-impact enforcement across domains including education, health care, and social services.[99] The EEOC also stopped investigating disparate-impact charges,[100] and the National Credit Union Administration removed disparate-impact analysis from its examinations and fair lending guidance.[101]
The Administration also moved to erase the data needed to detect and remedy discrimination. It rescinded a 1965 executive order that facilitated investigations into federal contractors’ potentially discriminatory employment practices.[102] It repealed guidance encouraging schools to collect and analyze racial data on student discipline and to consider whether disparities might indicate unlawful discrimination.[103] The Department of Education signaled an intent to reduce states’ obligations to report on school districts with high rates of students from particular racial groups who are placed in special education or restrictive settings.[104] The Administration also removed public tools like the Environmental Protection Agency’s EJScreen, undermining the ability of local communities, researchers, and public officials to detect and respond to racial disparities in environmental burdens.[105]
Enforcement capacity suffered as well. Following the policy overhaul, roughly three-quarters of the attorneys in the Justice Department’s Civil Rights Division left the agency through resignations, reassignments, or deferred departure agreements.[106] The Administration also dismantled or severely cut back offices that ensured agencies and those they regulate complied with civil rights protections, including the Department of Homeland Security’s Office for Civil Rights and Civil Liberties,[107] the Social Security Administration’s Office of Civil Rights and Equal Opportunity,[108] the Department of Education’s Office of Civil Rights,[109] and the Veterans’ Administration’s Office of Equity Assurance.[110]
Invoking the need for “national unity,” the Administration also moved to sanitize American history and impose a singular national identity. Framing exhibits about America’s racial history as being motivated by a “corrosive ideology” that “deepens societal divides and fosters a sense of national shame,”[111] the Administration directed the Smithsonian to purge content that allegedly advanced an “improper ideology” and “divide[s] Americans based on race.”[112] The White House escalated implementation by demanding internal Smithsonian records and exhibition materials, while publicly signaling that continued federal support was contingent on compliance.[113] It ordered the restoration of Confederate monuments and the renaming of military bases that had previously been stripped of their Confederate associations.[114] The National Park Service posted signs “asking visitors to offer feedback on any information that they feel portrays American history and landscapes in a negative light.”[115]
The Administration banned federally funded schools and teacher-training programs from teaching “[d]iscriminatory equity ideology,” characterized such content as “anti-American” indoctrination, and mandated that agencies promote “patriotic education” defined in part as reverent and “ennobling characterizations of America’s founding and foundational principles.”[116] Military educational institutions were required to teach “that America and its founding documents remain the most powerful force for good in human history,” and were prohibited from teaching “un-American, divisive, discriminatory, radical, [or] extremist” concepts.[117] The Secretary of Defense ordered the removal of books about diversity and anti-racism from military libraries, and the Naval Academy purged 381 titles, including those on the Holocaust, civil rights, racism, Black soldiers, and the killing of Trayvon Martin.[118] Materials about Nazi ideology and Confederate history were not targeted, and books like Mein Kampf (by Adolf Hitler), The Camp of the Saints (novel envisioning Western takeover by immigrants from developing countries), and The Bell Curve (asserting that Black people genetically are less intelligent than whites) remained in circulation.[119] Most of the removed titles were later returned to the shelves—underscoring that the purge reflected ill-defined centralized ideological review rather than ordinary collection management.[120] The Department of Defense scrubbed its website of references to Jackie Robinson, the Navajo Code Talkers, the Tuskegee Airmen, and Black, Latino, and Asian American veterans, and later restored some pages after public backlash.[121]
The Administration further advanced its ethnonationalist project by elevating English as the sole official language of the United States.[122] It revoked a Clinton-era directive requiring that federal agencies and their grantees make their services accessible to individuals with limited English proficiency.[123] Justifying the policy, the Administration claimed that an official national language would promote a “shared American culture” and a “more cohesive and efficient society.”[124] The DOJ subsequently issued a memorandum suspending LEP.gov—an interagency hub that provided tools to help agencies ensure meaningful language assistance—and directing all agencies to rescind prior limited English-proficiency guidance, to phase out “non-essential multilingual services,” and, when a multilingual service is deemed “mission critical,” to include a disclaimer on published non-English material that English is the official language.[125] The Department of Education rescinded guidance detailing schools’ obligations to serve English learners and limited-English proficient parents,[126] and other agencies issued new policies that diminished obligations to provide services to limited-English proficient individuals.[127]
Echoing past efforts to favor white immigrants and limit non-white immigrants, the second Trump Administration adopted policies that tilted the nation’s demographic trajectory toward a whiter population.[128] On its first day in office, the Administration attempted to reinterpret the Fourteenth Amendment to deny birthright citizenship to children born in the United States to noncitizen mothers, unless the father is a citizen or one of the parents is a lawful permanent resident[129]—an interpretation squarely at odds with longstanding precedent.[130] That same day, it suspended refugee admissions,[131] citing a need “to admit only those refugees who can fully and appropriately assimilate.”[132] In recent years, over ninety-five percent of refugees have come from Africa; Near East, East, and South Asia; and Latin America.[133] A month later, the Administration carved out an exception for white Afrikaner “refugees” from South Africa, claiming they faced racial persecution.[134] The Administration operationalized that preference by setting the fiscal year 2026 refugee ceiling at a record-low 7,500 while prioritizing admissions for Afrikaners.[135]
That “assimilation” rationale tracked the Administration’s broader national-security narrative that mass migration produces “civilizational erasure” abroad—warning that Europe is being transformed by migration policies and insisting that “[w]e want Europe to remain European.”[136] Domestically, this worldview also shaped immigration enforcement. Litigation over immigration raids documented federal enforcement practices that treated “apparent race or ethnicity” and “speaking Spanish or speaking English with an accent” as factors justifying stops, effectively operationalizing race and language as proxies for immigration suspicion.[137]
While other policies reflect ethnonationalist tendencies,[138] those described above demonstrate ethnonationalism’s core tenets: normalizing bias against people of color; treating the collection and consideration of racial-disparity data as violations of a formalistic “colorblind” ideal; framing protections against discrimination as wasteful and as “reverse racism”; casting the histories and languages of non-dominant communities as divisive and anti-American; and elevating ethnic, cultural, and linguistic assimilation as a national imperative. These same principles increasingly shape the Trump Administration’s policy approach to artificial intelligence.
II. How Trump AI Policies Advance Ethnonationalism
This Part argues that artificial intelligence policy has become a central tool through which the second Trump Administration advances its ethnonationalist project—joining immigration, diversity, civil rights enforcement, education, historical memory, and language access as key arenas for consolidating cultural dominance. Rather than merely neglecting the risk AI poses to an inclusive society, the Administration has wielded its policymaking authority to dismantle safeguards intended to promote fairness in automated systems.[139] In its first year, the Administration issued executive orders and OMB memoranda, and implemented its agenda within agencies, recasting equity-focused AI governance as a form of ideological obstruction.[140]
This marks not just an ideological shift but a legal transformation. The second Trump Administration’s AI policies depart not only from the Biden Administration’s safeguards, but also from the first Trump Administration’s approach. In 2020, President Trump signed the AI in Government Act, directing OMB to issue guidance on “best practices for identifying, assessing, and mitigating any discriminatory impact or bias.”[141] That same year, he issued Executive Order 13,960, instructing agencies to ensure that their AI use complied with laws related to “civil rights” and “civil liberties.”[142] While these provisions were vague and weakly enforced, they at least acknowledged the relevance of civil rights. The executive orders issued in the first year of Trump’s second term abandon even that recognition.
Understanding these policies as advancing ethnonationalism does not require assuming cultural dominance is their sole purpose. Deregulatory and techno-industrial motives—such as accelerating data center construction—likely play a role. But these interests converge with ethnonationalist goals. For those whose liberty, livelihood, or identity are shaped by AI systems, the effect is the same: a digital state that refuses to see them.
After defining artificial intelligence, this Part identifies features of unregulated AI that make it especially effective at advancing ethnonationalist goals. Some, such as algorithmic bias, are well documented. Others, including cultural homogenization, synthetic deception, and behavioral manipulation, are less widely appreciated but pose equally serious threats to a racially inclusive democracy. The Part then details the Trump Administration’s executive actions and agency directives that removed safeguards against these harms and mounted an affirmative campaign against the development of inclusive AI systems designed to serve a diverse public.
A. Digital Ethnonationalism: Racial Harms from Unregulated AI
Before detailing the Trump Administration’s AI policies, it is essential to understand how unregulated artificial intelligence can advance ethnonationalist objectives. Although algorithmic bias is well documented, other challenges, including homogenization, deception, and manipulation, present equally serious threats to racially inclusive democracy. Each reinforces longstanding projects of racial and cultural marginalization and assimilation. Together, they risk embedding a narrow and hierarchical vision of American identity into the algorithmic infrastructure of public life—a phenomenon this Article terms digital ethnonationalism.
While digital authoritarianism describes repressive actors using digital information and communication technologies to surveil, manipulate, coerce, and censor populations,[143] digital ethnonationalism offers a different framework. It captures the ways AI tools can be used to advance a racially exclusionary political vision. It results from either the deliberate use of such digital tools to advance ethnonationalist goals, or the inadvertent design or deployment of such tools in ways that have the same effect.
People define AI differently. Mo Gawdat, former Chief Business Officer of Google X, has explained the distinction between traditional software and AI as the difference between giving a person step-by-step instructions to solve a puzzle—i.e., traditional programming—and telling them to “figure it out yourself”—i.e., AI.[144] This Article adopts a federal statutory definition of AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”[145] This includes systems with varying levels of human oversight—fully autonomous, partially autonomous, and non-autonomous—but excludes traditionally programmed systems with human-defined rules.[146] It also includes systems that mimic human reasoning, “learn” from data, approximate cognitive tasks like planning or decision-making, or act to achieve goals.[147] The definition encompasses subfields such as machine learning (including deep learning), reinforcement learning, transfer learning, and generative AI.[148]
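Gawdat’s distinction can be made concrete with a toy comparison. The sketch below is purely illustrative—the “spam” task, data, and function names are invented for demonstration—but it captures the line the statutory definition draws: in traditional programming a human writes the decision rule, while in a machine-learned system the rule is inferred from examples.

```python
# Illustrative contrast between traditional programming and machine learning.
# The task, data, and rules here are invented for demonstration only.

# Traditional programming: a human writes the decision rule explicitly.
def rule_based_spam_check(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning (in miniature): the rule is inferred from labeled examples.
def learn_spam_words(examples: list[tuple[str, bool]]) -> set[str]:
    spam_words, ham_words = set(), set()
    for text, is_spam in examples:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    # "Learned" rule: words that appear only in spam messages.
    return spam_words - ham_words

training_data = [
    ("claim your free money now", True),
    ("free money waiting for you", True),
    ("lunch meeting moved to noon", False),
    ("free parking near the office", False),
]

learned_words = learn_spam_words(training_data)

def learned_spam_check(message: str) -> bool:
    return bool(learned_words & set(message.lower().split()))

print(rule_based_spam_check("FREE MONEY inside!"))  # True: rule was hand-coded
print(learned_spam_check("claim your money"))       # True: rule was inferred from data
```

The consequential difference for governance is visible even in this toy: the second system’s behavior is determined by whatever patterns happen to be in its training data, not by any rule a human reviewed and approved.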
These technologies do not merely reflect social structures; they shape them. When left unregulated, they can reproduce and intensify systemic inequalities in ways that align with and accelerate the broader ethnonationalist project of racial and cultural dominance and exclusion.
1. Bias
There is broad expert consensus that bias in AI systems exists and can be significant.[149] Studies have found bias in credit scoring, criminal justice risk assessments, hiring platforms, facial recognition, and search engines.[150] Some photo categorization systems have labeled Black people as gorillas.[151] Facial recognition systems misidentify Black women at far higher rates than white men.[152] Health algorithms underestimate the needs of Black patients,[153] and hiring tools penalize applicants with “Black-sounding” names despite equivalent credentials.[154]
AI systems are designed to detect patterns in training data and tend to “reinforce stereotypes and unfair discrimination by default.”[155] Models trained on data that underrepresent people of color can magnify the frameworks, language, and perspectives of a shrinking share of the population. If media disproportionately portray confrontations at Black Lives Matter protests while ignoring peaceful demonstrations, an AI trained on that data will reflect the same distortion.[156] One study of Google’s BERT model found it “encodes an Anglocentric perspective by default, which can amplify majority voices and contribute to homogenization of perspectives or monoculture.”[157]
Bias can also result from overrepresentation. Engineer Deborah Raji discovered problems with a porn-filtering algorithm trained on thousands of images that her colleagues had selected in assembling the data sets.[158] The pornographic photos were from adult websites populated largely by darker-skinned faces, while “safe” images were drawn from stock photo libraries dominated by lighter-skinned faces.[159] The result was a system that “learned” to associate darker skin with pornography—replicating long-standing racialized sexual stereotypes through automated classification rather than explicit intent.[160]
Even without explicit intent to discriminate, AI can replicate racial hierarchy. Models identify proxies for race—such as education or social networks—even when race itself is excluded, leading to representational bias, performance disparities, and harmful stereotypes. One study found that despite “neutral” targeting criteria, Facebook’s ad-delivery algorithms disproportionately showed lumber job advertisements to white men, cashier positions to women, and taxi driver jobs to Black users because the platform was optimizing for engagement.[161] Government and industry practices that ignore these dynamics risk reinforcing inequality.[162]
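The proxy mechanism can be shown in miniature. In the sketch below—which uses invented synthetic data, not any real lending record—race never appears among the model’s features, yet a model keyed to a race-correlated proxy (here, zip code) reproduces the racial disparity in the historical decisions it was trained on.

```python
# Illustrative sketch of proxy discrimination using invented synthetic data:
# race is excluded from the features, but zip code correlates with race, so a
# model trained on zip code reproduces the racial disparity anyway.
from collections import defaultdict

# Hypothetical historical loan decisions (biased): applicants from zip "A"
# (predominantly white in this toy world) were usually approved; applicants
# from zip "B" (predominantly Black) were usually denied.
history = [
    ("A", "white", True), ("A", "white", True), ("A", "black", True),
    ("B", "black", False), ("B", "black", False), ("B", "white", False),
]

# "Train" a majority-vote model on zip code only; race never enters the model.
approvals = defaultdict(list)
for zip_code, _race, approved in history:
    approvals[zip_code].append(approved)
model = {z: sum(votes) > len(votes) / 2 for z, votes in approvals.items()}

# The model never saw race, yet its predictions track it: everyone in the
# predominantly Black zip code is denied, regardless of individual merit.
print(model["A"])  # True  (approve)
print(model["B"])  # False (deny)
```

Because the disparity here is visible only in outcomes, not in the model’s inputs, disparate-impact analysis—comparing approval rates across groups—is often the only way to detect it.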
Bias also undermines democratic legitimacy. Although often promoted as a solution to bureaucratic inefficiency, AI raises serious concerns about fairness and representation. Despite demographic changes, AI threatens to lock outdated cultural norms into public decision-making. Discriminatory private-sector tools—such as biased hiring algorithms—may be mirrored in public systems, such as housing allocation or risk-assessment tools that automate racial bias.[163]
As Professor Andreas Jungherr has noted:
People’s visibility to AI depends on their past representation in data. AI has trouble recognizing those who belong to groups underrepresented in the data used to train it. . . . This general pattern is highly relevant to democracy: for example, the systematic invisibility of specific groups means they would be diminished in any AI-based representation of the body politic and in predictions about its behavior, interests, attitudes, and grievances.[164]
Without safeguards, AI will deepen the marginalization of already-invisible communities in services, policymaking, and surveillance—while increasing the influence of dominant groups.[165] The opacity of these systems makes it hard to contest unjust outcomes.[166] And once embedded in government, biased AI can entrench ethnonationalist priorities long after the officials who adopted them have left office.
2. Homogenization
Unregulated AI also presents a serious but underappreciated threat of cultural homogenization. Large language models (“LLMs”) function as averaging machines, optimizing for dominant linguistic patterns and marginalizing cultural differences.[167] In doing so, they suppress dialects, erase minority epistemologies, and privilege dominant narratives.[168]
Cultural homogenization may seem innocuous, even desirable, to those who equate it with unity. But in a pluralistic democracy, inclusion—not uniformity—is the foundation of legitimacy.[169] Algorithmic systems that default to dominant norms flatten nuance, marginalize minority voices, and suppress alternative ways of being and knowing. They replicate a longstanding pattern in U.S. history that demanded conformity as the price of participation: the boarding schools that forced Native children to abandon their languages and religions,[170] the school officials who used corporal punishment against Latino students caught speaking Spanish in school,[171] and the employers who have penalized Black women for wearing natural hairstyles.[172] AI now risks scaling these harms—replacing explicit coercion with automated exclusion.[173]
Pluralism—defined as a commitment to coexistence, equal respect, and shared participation in public life among individuals from diverse racial, ethnic, and cultural groups—is essential to democratic vitality and economic growth.[174] Pluralism broadens the talent pool, improves products, strengthens innovation and collaboration, and enhances U.S. competitiveness and security.[175]
When digital systems obscure less-dominant worldviews, they weaken deliberation, hinder coalition-building, and erode legitimacy. One study found that twenty-one state-of-the-art LLMs converged on dominant outputs, failing to reflect the diversity of human preferences.[176] A similar challenge arises when many decision-makers rely on the same algorithm, even one more accurate than the alternatives available to any individual decision-maker. Such an “algorithmic monoculture” can produce unexpected risks, correlated shocks, and reduced overall accuracy in decisions about hiring, credit scoring, benefits, and other high-stakes issues.[177] “Like monocultural-farming technology vulnerable to one unanticipated bug, the converging methods of credit assessment failed spectacularly [during the 2008 financial crisis] . . . .”[178] The proliferation of AI can also narrow research methods, questions, and perspectives and produce “scientific monocultures” that reduce scientific innovation and increase the risk of errors.[179]
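The monoculture dynamic lends itself to a toy simulation. All numbers below are invented: twenty hypothetical employers each screen a qualified candidate, and each screen wrongly rejects with ten percent probability. When the screens err independently, a candidate rejected by one firm is almost always accepted somewhere else; when every firm shares one algorithm, a single error shuts the candidate out everywhere.

```python
# Illustrative simulation of algorithmic monoculture (all numbers invented):
# independent screens make uncorrelated errors, while a shared algorithm
# repeats the same error at every firm simultaneously.
import random

random.seed(0)
NUM_FIRMS = 20
ERROR_RATE = 0.1   # chance a screen wrongly rejects a qualified candidate
TRIALS = 10_000

def shut_out(shared_algorithm: bool) -> float:
    """Fraction of trials in which a qualified candidate is rejected by ALL firms."""
    count = 0
    for _ in range(TRIALS):
        if shared_algorithm:
            # One algorithm, one coin flip: its error hits every firm at once.
            rejected_everywhere = random.random() < ERROR_RATE
        else:
            # Independent screens: all twenty must err at once (~0.1 ** 20).
            rejected_everywhere = all(
                random.random() < ERROR_RATE for _ in range(NUM_FIRMS)
            )
        count += rejected_everywhere
    return count / TRIALS

print(shut_out(shared_algorithm=False))  # ~0.0: errors wash out across firms
print(shut_out(shared_algorithm=True))   # ~0.1: one error closes every door
```

The overall error rate is identical in both conditions; what monoculture changes is the correlation of errors, converting occasional individual mistakes into systemic exclusion.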
Only recently have technologists begun to reckon with these shortcomings and to develop technical solutions.[180] Policymakers who are serious about pluralism and respect for their increasingly diverse constituents should likewise begin grappling with this issue.
3. Deception
In addition to bias and homogenization, unregulated AI can advance ethnonationalist aims through synthetic deception. While disinformation can target any group, it is particularly effective at suppressing civic engagement in communities of color.
Past low-tech examples illustrate the danger. In 2016, Russian operatives ran a fake Facebook page—posing as Black men from Atlanta—that urged Black voters to boycott the election. Though African Americans made up just 12.7% of the population, over 37% of the Russian-linked pages targeted them.[181] Non-Black trolls infiltrated online conversations about #Blaxit, posing as Black activists and promoting memes, logos, and messages to simulate a grassroots movement encouraging Black Americans to leave the United States. As one impersonator said, “This is like catfishing an entire race.”[182] In 2020, Spanish-language YouTube ads in Florida falsely linked Joe Biden to Venezuelan dictator Nicolás Maduro, garnering over 100,000 views in nine days.[183]
AI amplifies these tactics. Generative AI allows bad actors to produce realistic, high-volume content that appears community-driven. These tools can simulate internal dissent and fracture coalitions by impersonating not just individuals but entire communities. Although English remains dominant in model performance,[184] AI still enables outsiders to convincingly mimic idioms, dialects, and languages, making infiltration more persuasive and harder to detect. In doing so, AI-enabled deception advances the broader ethnonationalist project—not through overt repression, but by sabotaging the solidarity necessary for democratic resistance.
4. Manipulation
AI poses distinct threats of behavioral manipulation that can advance ethnonationalism. Chatbots, for instance, have been shown to shape user beliefs.[185] One study found that an AI chatbot reduced belief in conspiracy theories by twenty percent—even among users with deeply held views, and even two months later.[186] While the intervention emphasized factual correction, the same methods could be repurposed to sow misinformation, racial distrust, or civic disengagement.[187]
Large language models can learn from users’ speech and writing—such as social media posts—to predict responses and optimize phrasing to elicit desired outcomes. Their effectiveness lies in generating multiple tailored replies, selecting the most persuasive one, and persistently and cheaply engaging users without fatigue.[188]
Even absent malicious tuning, these systems often display “chatbot sycophancy”—agreeing with and reinforcing the viewpoints of a user to maintain engagement.[189] ChatGPT and similar models can escalate conspiratorial or isolationist beliefs rather than challenge them.[190] In one case, a Florida man engaged in a violent dialogue with ChatGPT, which reportedly encouraged him by saying, “[y]ou should want blood.”[191] He was later killed in a police standoff.[192]
These risks can also reflect broader ideological issues. After Elon Musk criticized OpenAI for being “too woke,” his company reportedly removed guardrails from its Grok chatbot.[193] Soon after, Grok generated responses praising Hitler, calling for a new Holocaust, and parroting white nationalist slogans.[194] These examples show that chatbots can be shaped into tools of ethnonationalist persuasion, validating racial resentment under the banner of “balance.”
AI can also suppress political participation, especially in communities of color. Generative models could identify linguistic or emotional cues tied to disengagement and deploy seemingly neutral messages to suppress turnout.[195] Models may also subtly reward conformity and discourage dissent. For communities long subjected to forced assimilation, such manipulation can feel like a continuation of cultural conquest—undermining autonomy and pluralism essential to liberal democracy.[196]
The European Union has taken steps to ban “cognitive manipulation of people or specific vulnerable groups”[197] and limit data misuse,[198] but the U.S. has no comparable federal safeguards. In this regulatory vacuum, AI may become the most powerful tool yet for engineering assent to ethnonationalist rule—not by silencing opposition, but by gradually molding it into agreement.[199]
B. Facilitating AI Bias Through Executive Orders
This subpart shows how the second Trump Administration used executive orders to systematically dismantle equity-centered AI governance. First, the Administration repealed the federal government’s core civil rights and anti-bias safeguards for AI. Second, it recast fairness, accountability, and discrimination safeguards as obstacles to U.S. “AI dominance.” Third, it restricted the federal government from purchasing AI systems fine-tuned to reduce racial bias. Finally, it sought to deter states from filling the regulatory void by threatening litigation and funding cuts against states with laws that restrict algorithmic discrimination. Together, these orders do not merely deregulate AI; they clear away public protections against bias and affirmatively advance an ethnonationalist model of AI governance.
1. Repealing Federal Protections Against AI Bias: Executive Order 14,148
On his first day in office, President Trump signed an executive order rescinding seventy-eight executive orders and memoranda (the “2025 Initial Rescissions Order”),[200] including Executive Order 14,110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the “2023 AI Order”). In doing so, the Trump Administration dismantled the most comprehensive federal effort to mitigate algorithmic discrimination and advance racial fairness and inclusion in the governance of artificial intelligence.[201]
For example, the 2025 Initial Rescissions Order voided provisions of the 2023 AI Order that identified equity and civil rights as guiding principles of AI governance[202] and instructed federal agencies to use their civil rights offices to prevent “unlawful discrimination and other harms” resulting from the use of AI in federal programs.[203] It also rescinded provisions that directed OMB to issue guidance specifying minimum risk-management practices to assess and mitigate “disparate impacts and algorithmic discrimination.”[204]
The 2025 Initial Rescissions Order also eliminated essential agency coordination and enforcement mandates. It repealed provisions directing the DOJ to coordinate and support agencies in their enforcement of existing laws addressing discrimination related to AI,[205] and to provide guidance to state, local, and Tribal law enforcement.[206] It also terminated provisions requiring the Assistant Attorney General for Civil Rights to coordinate the heads of federal agency civil rights offices to prevent algorithmic discrimination and promote public awareness of potential AI bias.[207] Further, the 2025 Initial Rescissions Order repealed provisions urging independent agencies to issue rules and compliance guidance to protect consumers from fraud, discrimination, and privacy violations that may arise from AI.[208]
The 2025 Initial Rescissions Order also eliminated clear directives to specific agencies to monitor and prevent bias in AI. It repealed provisions requiring the Attorney General to report on the use of AI in the criminal justice system (e.g., sentencing, bail, recidivism risk assessments, predictive policing, facial recognition technology) and recommend safeguards to avoid disparate impacts on people of color.[209] It discarded provisions directing the Department of Education to develop resources, policies, and guidance to address discriminatory uses of AI in education.[210] The 2025 Initial Rescissions Order also eliminated provisions directing the Department of Labor to issue guidance for federal contractors on nondiscrimination in AI-assisted hiring,[211] and it repealed provisions encouraging the Consumer Financial Protection Bureau (“CFPB”) and the Federal Housing Finance Agency to ensure that the entities they regulate comply with federal law in using AI and to evaluate their underwriting and appraisal processes for bias.[212] It abolished provisions directing the Department of Housing and Urban Development (“HUD”) to issue guidance on how algorithmic ad delivery systems and tenant screening may violate federal fair credit and housing laws.[213]
The 2025 Initial Rescissions Order also repealed several directives involving the Department of Health and Human Services (“HHS”), including provisions requiring HHS to establish an AI Task Force to develop a bias mitigation and equity plan,[214] consider issuing guidance for federally funded health providers on nondiscriminatory use of AI,[215] and provide details for a repository to track harmful AI-related incidents (including bias) affecting patients.[216] It likewise rescinded provisions directing HHS and the Department of Agriculture to provide plans and guidance to state and local administrators on processes to ensure the equitable use of AI in administering federally funded public benefits.[217]
The 2025 Initial Rescissions Order also eliminated the 2023 AI Order’s initial efforts to address deceptive practices and behavioral manipulation. It repealed provisions directing the Secretary of Commerce to identify existing standards to detect AI-generated content (e.g., provenance tracking, watermarking) and to develop guidance,[218] and it rescinded provisions directing OMB to issue guidance for labeling and authenticating the synthetic content federal agencies produce.[219] It also repealed provisions directing OMB to issue guidance to agencies to curb collection and use of data that enables covert targeting or the inference of sensitive traits.[220]
While far from perfect,[221] the 2023 AI Order did not treat fairness and equity as peripheral concerns but wove them into the fabric of AI governance. Although the 2023 AI Order was not legislation and could be reversed by a future president, it offered a comprehensive legal framework for embedding nondiscrimination and inclusion into the design, deployment, and oversight of artificial intelligence. By rescinding the 2023 AI Order in full, the 2025 Initial Rescissions Order eliminated that civil-rights-grounded governance structure. As detailed below, the Trump Administration would soon replace that approach with executive orders that treated equity and civil rights safeguards as impediments to U.S. technological global supremacy.
2. Sacrificing AI Fairness for AI Dominance: Executive Order 14,179
Three days after revoking the 2023 AI Order, President Trump signed Executive Order 14,179—Removing Barriers to American Leadership in Artificial Intelligence (the “January 2025 AI Order”)—to reorient federal AI policy around U.S. industrial dominance and deregulation.[222] The January 2025 AI Order declares that the federal government’s priority is to “sustain and enhance America’s global AI dominance” and directs agencies to eliminate any safeguards or oversight regimes that could impede AI development.[223]
The January 2025 AI Order does not merely reverse the policies of the 2023 AI Order; it repudiates the worldview behind them. It makes no mention of protecting against algorithmic discrimination or racial bias. Instead, it portrays protections for fairness, equity, and accountability as impediments to innovation rooted in “ideological bias” and “engineered social agendas.”[224] Its operative logic is that global dominance in AI must be achieved even if doing so accelerates domestic harms.[225] The January 2025 AI Order thus recasts the federal government not as a public safeguard but as an industrial enabler, clearing the field for AI systems to evolve without regard for social consequences and facilitating the unchecked growth of AI harms such as bias, homogenization, deception, and manipulation.
3. Prohibiting AI Fine-Tuned to Reduce Bias: Executive Order 14,319
Six months into the second Trump Administration, the White House released Executive Order 14,319, Preventing Woke AI in the Federal Government (the “July 2025 AI Order”).[226] It condemns diversity, equity, and inclusion as an “existential threat” to trustworthy AI and mandates that federal agencies procure only AI systems compliant with “Unbiased AI Principles”—defined as systems that are “trustworthy” and possess “[i]deological neutrality.”[227] To justify this shift, the Order invokes cultural grievances, including an AI model that “changed the race or sex of historical figures” and another that refused to produce images celebrating white achievement.[228] The Order invokes these anecdotes not to improve model accuracy, but as a pretext for prohibiting developers from fine-tuning models to promote racial inclusion.
The result is a legally dubious attempt to chill efforts to mitigate racial bias and improve system performance.[229] Instead of curbing harmful AI behavior, the Order threatens to exclude companies that attempt to fix it from federal contracts.
The consequences extend beyond rhetoric. Researchers have widely documented AI bias.[230] Systems have served pornography in response to search queries for “Black girls” while producing images of children playing in response to queries about “white girls.”[231] One algorithmic system flagged “Black Lives Matter” and “supporting Black excellence” as inappropriate, while amplifying “white supremacy.”[232] When Amazon’s facial recognition tool compared Members of Congress to arrest mugshots, representatives of color accounted for 39% of false matches, despite representing only 20% of Congress at the time.[233] Generative AI models have amplified racial stereotypes from their web-based training data,[234] and GPT-4 and other models exhibited covert stereotypes about speakers of an “African American English” dialect that were more negative than any ever recorded in human experimental research.[235]
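Disparities like the congressional false-match finding are typically quantified by comparing per-group error rates rather than raw shares. The short sketch below works through that arithmetic with hypothetical counts chosen only to mirror the reported proportions (roughly 20% of the body, roughly 39% of the errors); it is not data from the cited study.

```python
# Worked illustration of how a false-match disparity is measured.
# All counts are hypothetical, chosen to mirror the reported proportions.
members_of_color = 107          # hypothetical: ~20% of a 535-member body
white_members = 535 - members_of_color

false_matches_of_color = 11     # hypothetical error counts
false_matches_white = 17
total_false_matches = false_matches_of_color + false_matches_white

rate_of_color = false_matches_of_color / members_of_color
rate_white = false_matches_white / white_members

print(f"share of false matches: {false_matches_of_color / total_false_matches:.0%}")  # 39%
print(f"false-match rate, members of color: {rate_of_color:.1%}")                     # 10.3%
print(f"false-match rate, white members:    {rate_white:.1%}")                        # 4.0%
print(f"disparity ratio: {rate_of_color / rate_white:.1f}x")                          # 2.6x
```

The per-group rates, not the headline shares, are what matter legally: a group can be a minority of all errors yet still face a several-fold higher chance of being falsely matched.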
In response, some developers fine-tuned their models to reduce harm. When one tool produced an inaccurate image of George Washington, it was immediately disabled and retrained.[236] But under the Trump Administration’s directives, fine-tuning AI to prevent bias could disqualify those developers from federal contracts.
This is not neutrality.[237] It is government-mandated monoculture. By defining racial inclusion as ideological distortion and enforcing “truth” through procurement mandates, the Administration promotes a system in which exclusion and bias are treated as objective. Deterring companies from correcting search algorithms that prioritize pornographic images for “Black girls” is not “unbiased.” Awarding a federal defense contract to Grok—a model that “recently referred to itself as ‘Mecha Hitler’ and disseminated antisemitic hate speech”—while disqualifying models trained to recognize systemic racism is not “objective.”[238] These decisions reflect a deliberate choice to entrench discrimination and marginalize communities of color.
In July 2025, the White House also released America’s AI Action Plan,[239] which calls for the National Institute of Standards and Technology to remove all references to diversity, equity, and inclusion from its AI Risk Management Framework,[240] the federal government’s principal remaining guidance on AI safety. The AI Action Plan also proposed denying federal AI funding to states “with burdensome AI regulations” deemed “unduly restrictive to innovation,” setting the stage for the December 2025 Executive Order, discussed below, deterring state anti-bias AI laws.[241] The Plan made no mention of language access, despite AI’s capacity to provide real-time multilingual translation for the more than 27.6 million people in the U.S. with limited English proficiency—14.6 million of whom are U.S. citizens.[242] The Plan also ignores algorithmic discrimination[243] and refers to “bias” only in the context of “ideological bias”[244]—casting attempts to reduce discrimination against people of color as politically motivated distortions.[245]
4. Deterring States from Preventing AI Bias: Executive Order 14,365
In December 2025, the Trump Administration issued Executive Order 14,365, Ensuring a National Policy Framework for Artificial Intelligence (the “December 2025 AI Order”),[246] which uses federal coercion to deter states from adopting and enforcing state AI laws, including civil rights protections. Although framed as a neutral effort to prevent “excessive State regulation,”[247] the Order represents an aggressive attempt to suppress state governance in a domain where the federal government has affirmatively withdrawn from regulation. In practice, it imposes uniformity by threatening litigation and funding penalties to chill states’ efforts to constrain discriminatory AI systems, even as the federal government declines to regulate them itself.
The Order asserts that state AI laws threaten U.S. “global AI dominance” and alleges that such laws “are increasingly responsible for requiring entities to embed ideological bias within models.”[248] As evidence, it cites Colorado’s restriction on algorithmic discrimination, claiming that the law “may even force AI models to produce false results in order to avoid a ‘differential treatment or impact’ on protected groups.”[249] That assertion is legally and factually unsound. Disparate impact is not evidence of ideological distortion—it is often the primary indicator that facially neutral AI systems are producing unjustified discriminatory outcomes.[250] Nothing in the Colorado statute—or in comparable state AI guardrails—requires the production of false outputs or mandates parity.[251] Instead, the Colorado law requires developers and deployers to exercise “reasonable care to prevent reasonably foreseeable risks of algorithmic discrimination” and disclose material risks,[252] obligations that mirror long-standing principles in civil rights, consumer protection, and products liability law.
To deter state protections against AI harms, the Order announces a “minimally burdensome national policy framework for AI,”[253] and directs the Attorney General to create an AI Litigation Task Force “whose sole responsibility” is to challenge state AI laws the Administration deems onerous or preempted.[254] It also instructs the Secretary of Commerce to identify “onerous” state AI laws and render states with such laws ineligible for federal broadband funding,[255] and encourages executive agencies to withhold discretionary grants from such states.[256] The Order also pressures the Federal Communications Commission and the Federal Trade Commission to adopt federal disclosure and unfair and deceptive practices policies that could be construed to preempt state AI laws.[257]
This approach is in tension with the interests of states across the political spectrum. Conservative states have been among the most vocal proponents of state power to regulate technology platforms, particularly with regard to content moderation and children’s safety.[258] Just months earlier, the U.S. Senate rejected by a vote of 99-1 a proposal to impose a decade-long moratorium on state AI regulation,[259] reflecting a rare bipartisan consensus that states must retain authority to address emerging harms when Congress has failed to act.
The December 2025 AI Order thus functions as a preemption-by-pressure strategy. Although it does not expressly invalidate AI laws in Colorado, California, or any other state,[260] it seeks to accomplish indirectly what Congress has refused to do directly: insulate AI developers and deployers from state oversight. By abandoning federal civil rights oversight of AI and then penalizing states that attempt to fill that gap, the Administration leaves affected communities with no meaningful protection against discriminatory AI systems.[261] The Order is not simply a “neutral” effort to promote innovation, but is instead a deliberate strategy to create a governance vacuum in which discriminatory AI systems can proliferate with reduced legal risk and diminished public accountability.
C. Embedding AI Harms Within Federal Agencies
The second Trump Administration’s AI Executive Orders did more than redefine federal priorities—they instructed agencies to operationalize the rollback of equity and civil rights protections in their day-to-day use, procurement, and oversight of AI.[262] As this subpart shows, the Administration directed OMB to remove equity-centered requirements from government-wide AI use and acquisition policy. It also prompted agencies to withdraw guidance, dismantle enforcement infrastructure, and reorient institutional missions in ways that weaken oversight of algorithmic discrimination. Finally, the Administration disabled disparate-impact analysis—a core legal tool for detecting bias in opaque, data-driven systems. Together, these actions translated high-level executive rhetoric into administrative practice, embedding ethnonationalism into the machinery of federal governance.
1. OMB’s Removal of Equity from Government AI Policy
This section explains how the Office of Management and Budget systematically removed equity from federal AI governance through three memoranda in the first year of the second Trump Administration. First, OMB dismantled affirmative safeguards against algorithmic discrimination by rescinding detailed guidance that federal agencies using AI should assess disparate impacts, proxy discrimination, and demographic disparities. It then extended this retreat into federal procurement, removing prior guidance that agencies demand transparency, subgroup performance testing, and bias mitigation from AI vendors. Finally, OMB moved beyond omission to enforcement, requiring federal AI vendors to adhere to newly defined “Unbiased AI Principles” that recast AI fine-tuned to reduce racial bias as “ideological.” Taken together, these memoranda reorient federal policy away from pluralism and civil rights accountability in the design, acquisition, and deployment of artificial intelligence.
Pursuant to the January 2025 AI Order, OMB issued Memorandum M-25-21 (the “2025 Use Memo”), which rescinded and replaced Biden-era guidance (the “2024 Use Memo”) with new guidance for federal agency use of AI.[263] While the 2025 Use Memo adopts a more technocratic tone than the January 2025 AI Order, its effect is to repeal core anti-bias and equity safeguards without substituting functionally equivalent protections.[264]
Both the 2024 and 2025 Use Memoranda require agencies to identify and manage risks associated with “High-Impact AI” systems (called “Rights-Impacting AI” in the 2024 Use Memo).[265] But the similarities largely end there. While both Memoranda reference impacts on “civil rights, civil liberties, or privacy,”[266] the 2025 Use Memo omits the 2024 Memo’s detailed articulation of those interests, which expressly included “freedom of speech, voting, human autonomy, and protections from discrimination, excessive punishment, and unlawful surveillance.”[267] Likewise, although both memoranda reference access to education, housing, insurance, credit, and employment,[268] the 2025 Use Memo deletes the explicit commitment to protecting “[e]qual opportunities, including equitable access . . . and other programs where civil rights and equal opportunity protections apply.”[269]
The repeal is most pronounced in the treatment of algorithmic bias. While both memoranda require baseline practices, such as pre-deployment testing,[270] impact assessments, risk-mitigation plans,[271] human oversight,[272] and public input mechanisms[273] for high-impact systems, the 2025 Use Memo eliminates affirmative obligations that agencies “[i]dentify and assess AI’s impact on equity and fairness, and mitigate algorithmic discrimination when it is present.”[274] The 2025 Use Memo rescinds requirements that agencies analyze proxy discrimination, assess disparate impacts across demographic groups, and mitigate disparities that perpetuate unlawful discrimination or reduce equity.[275] The 2025 Use Memo also removes requirements to consult “affected communities, including underserved communities,”[276] to offer individuals a practicable opt-out in favor of human review,[277] and to consider multilingual notice for those adversely affected by AI decisions.[278]
As a companion to the 2025 Use Memo, OMB issued Memorandum M-25-22 (the “2025 Acquisition Memo”), which rescinded and replaced the Biden-era OMB Memorandum M-24-18 (the “2024 Acquisition Memo”) governing federal procurement of AI systems.[279] While both Acquisition Memoranda purport to provide a risk-management framework for AI procurement, the 2025 Acquisition Memo’s practical effect is to repeal core anti-discrimination contractual and disclosure safeguards.
At a high level, both Acquisition Memoranda instruct agencies to protect privacy,[280] determine whether vendors should disclose AI use in contracts,[281] ensure compliance with minimum risk management practices for high-impact use cases,[282] and monitor and mitigate “risks to privacy, civil liberties, and civil rights.”[283] But the 2025 Acquisition Memo removes detailed provisions that ensure that agencies address privacy, biometric (e.g., facial recognition), and civil rights risks in the procurement of AI.[284] The 2025 Acquisition Memo also rescinds explicit recognition that AI systems can discriminate and that agencies should “require vendors to identify potential AI biases and mitigation strategies to address biases.”[285] The 2025 Acquisition Memo omits requirements that agencies consider mandating that vendors disclose “[p]erformance metrics, including real-world performance for specific sub-groups and demographic groups to surface discriminatory outcomes.”[286] The 2025 Acquisition Memo also eliminated directives that agencies include contractual terms obligating vendors “to provide the government with the results of performance testing for algorithmic discrimination, including demographic and bias testing, demographic characteristics of groups the performance testing has been conducted on, or third-party evaluations and assessments.”[287]
The 2025 Acquisition Memo also removed guidance to prevent linguistic exclusion and group-based performance degradation in generative AI.[288] It omitted provisions encouraging agencies to require documentation showing whether generative AI systems exhibit “reduced performance for certain sub-groups or languages other than English due to non-representative inputs; undesired homogeneity in data inputs . . . resulting in degraded quality of outputs.”[289]
In December 2025, OMB issued Memorandum M-26-04, Increasing Public Trust in Artificial Intelligence Through Unbiased AI Principles (the “December 2025 OMB Memo”),[290] to implement Executive Order 14,319’s prohibition on so-called “woke AI” in the federal government.[291] Whereas the earlier 2025 Use and Acquisition Memoranda removed equity-centered safeguards through omission, the December 2025 OMB Memo affirmatively restructures federal AI procurement by agencies around two “Unbiased AI Principles”: “truth-seeking” and “ideological neutrality.”
The December 2025 OMB Memo mirrors the “Preventing Woke AI” Executive Order in defining “truth-seeking” to require that LLMs prioritize “historical accuracy, scientific inquiry, and objectivity,” while “ideological neutrality” requires that LLMs “not manipulate responses in favor of ideological dogmas” and that developers not “intentionally encode partisan or ideological judgments” into model outputs unless prompted by users.[292] These legal directives not only fail to require that agencies assess and mitigate disparate impacts, proxy discrimination, or demographic disparities, but affirmatively reframe efforts to address bias—particularly those that account for race, language, or structural inequality—as suspect ideological interventions.
The December 2025 OMB Memo operationalizes the “Preventing Woke AI” Order through procurement and contract enforcement. Agencies must include contractual provisions requiring vendors to demonstrate compliance with the “Unbiased AI Principles” in all new LLM solicitations, and modify existing contracts “to the extent practicable,” at the latest before exercising renewal options.[293] The effect is to chill model fine-tuning and bias-mitigation practices that could be characterized as “non-neutral”—even where such practices improve accuracy or reduce discriminatory outcomes. After the 2025 Use and Acquisition Memoranda removed affirmative obligations to assess and mitigate algorithmic discrimination, the December 2025 OMB Memo conditions federal market access on adherence to a politically contingent conception of neutrality that treats safeguards against racial discrimination as distortions rather than features. In embedding these requirements into procurement policy, the December 2025 OMB Memo ensures that the Administration’s anti-equity AI agenda is not merely aspirational but institutionalized in the day-to-day operations of federal agencies and the incentives governing the AI marketplace.
The December 2025 OMB Memo’s operative standards—“truth-seeking” and “ideological neutrality”—are not administrable procurement criteria. Instead, they are indeterminate labels that can be—and in this Administration’s hands likely will be—applied to penalize disfavored viewpoints. For example, the Administration has continued to assert that widely debunked claims of pervasive voter fraud are “truth,”[294] while portraying mainstream accounts of systemic racism—including widely accepted historical narratives about slavery’s centrality to American development—as “anti-American ideology” or “divisive narratives.”[295] Under the Memo’s “neutrality” rubric, a vendor cannot know whether an AI model that accurately rejects election-fraud falsehoods, explains disparate impact doctrine, or describes documented patterns of historical discrimination will be deemed “truth-seeking,” or instead accused of failing “ideological neutrality.” Procurement standards that turn on such politically contested determinations are not neutral. They function as filters that purge government AI of bias-mitigation guardrails while embedding a politically mandated version of “truth.”
The consequences of this interpretive retreat are profound. In the absence of enforceable requirements to assess proxy discrimination or demographic disparities—and in the presence of procurement rules that penalize equity-conscious design—agencies are incentivized to acquire and deploy AI systems that reproduce historical inequities under the guise of objectivity. Algorithmic decisions affecting employment, benefits, housing, education, or healthcare are thus less likely to be scrutinized, challenged, or corrected. The AI systems that federal agencies develop, procure, and deploy will reflect the priorities embedded in these 2025 memoranda—and those priorities now reflect a decisive turn away from pluralism toward a narrower and more exclusionary vision of governance.
2. Agency Execution of the Anti-Equity Mandate
The Trump Administration’s AI Executive Orders and OMB Memoranda had immediate and far-reaching consequences.
Soon after President Trump assumed office and named a new acting chair of the EEOC, the agency removed from its website May 2023 guidance for employers on identifying unlawful disparate impact in automated tools, such as video interviewing software that evaluates speech and expressions, and games or assessments that assign personality or cultural fit scores to job applicants.[296] The withdrawn guidance had clarified that liability can attach even when employers rely on third-party vendors,[297] emphasized proactive auditing, and provided a practical roadmap for reducing discrimination in hiring.[298]
Similarly, the Department of Labor’s Office of Federal Contract Compliance removed from its website guidance for federal contractors designed to ensure that their automated systems comply with civil rights obligations.[299] The withdrawn guidance had emphasized the need to audit algorithmic tools for adverse impact in hiring, promotion, and termination, as well as to implement meaningful human oversight.[300]
In housing, algorithmic appraisal tools—often branded as “automated valuation models”—increasingly generate initial valuations that lenders and human appraisers refine. Because many lenders rely on automated valuation models for initial estimates, the previous administration had added manual safeguards to catch biased valuations.[301] The Trump Administration’s Department of Housing and Urban Development (“HUD”) rescinded those safeguards.[302] HUD also ended data-reporting requirements.[303] Collectively, the changes weakened HUD’s ability to detect and correct discriminatory valuations and narrowed oversight of both human and algorithmic appraisals.
At the Consumer Financial Protection Bureau, the acting director abruptly withdrew the guidance governing algorithmic credit scoring. In a single Federal Register notice, the Bureau rescinded sixty-seven policy statements, interpretive rules, advisory opinions, and circulars—including documents addressing fair-lending analytics. In announcing the rescissions, the acting director stated that any future guidance would be issued “only if it reduces compliance burdens.”[304] Among the rescinded materials was a circular reminding creditors that the Equal Credit Opportunity Act requires “specific and accurate” reasons for adverse actions, even when based on complex algorithms.[305] The withdrawal eliminated the Bureau’s main enforcement framework for algorithmic discrimination, leaving lenders freer to rely on opaque scoring models.[306]
Likewise, the Commerce Department’s National Institute of Standards and Technology rewrote its March 2025 cooperative research-and-development agreement with members of the AI Safety Institute Consortium. The revision removed all references to “responsible AI,” “AI safety,” “AI fairness,” and “socio-technical methodologies,”[307] and no longer encouraged benchmarking across race, gender, age, and income.[308] The revised version also dropped provisions on content authentication and synthetic media labeling.[309] The Administration warned of budget cuts that could eliminate nearly 500 NIST staff, including many at the Artificial Intelligence Safety Institute,[310] which it soon rebranded the “Center for AI Standards and Innovation.”[311]
To be sure, the Trump Administration’s equity rollback was not total. Some anti-bias materials remain accessible, though they are often relegated to archival or repository pages.[312] In some cases, agencies acknowledge bias in ways consistent with statutory civil rights obligations.[313] Still, the broader policy shift is clear. By chilling open discussion of discriminatory AI impacts and dismantling the enforcement infrastructure, the Administration advanced an ethnonationalist agenda under the banner of U.S. AI global dominance.
3. Disabling Disparate Impact to Undermine AI Accountability
Beyond revising guidance documents, the Trump Administration pursued a broader campaign to dismantle foundational civil rights tools. Central to that effort was the elimination of disparate impact analysis—the primary legal mechanism for identifying and remedying discriminatory outcomes absent proof of intent. This approach manifested both in the Administration’s broader civil rights rollbacks and its AI-specific directives.[314]
The elimination of disparate impact is especially dangerous in the AI context, where intent is rarely visible and harm often arises from statistical patterns of exclusion.[315] AI systems, often opaque and data-driven, make it difficult to assess whether race factored into decision-making. Even when race is excluded as a variable, AI systems frequently rely on proxies—education, geography, social networks—that replicate racial disparities. Because AI systems rarely involve consciously biased motives, outcome-focused frameworks like disparate impact are essential to ensure accountability in public-sector use.[316]
Enforcement screening illustrates the problem. Customs and Border Protection’s Automated Targeting System assigns risk assessments using travel and related data at massive scale,[317] and TSA’s Secure Flight program prescreens passengers to determine who is flagged for additional security.[318] The IRS likewise uses AI-assisted analytics to identify returns or refundable-credit claims more likely to warrant audit.[319] Although these tools are race-neutral on their face, they can still generate unequal burdens through proxies tied to geography, language, occupation, or national origin. In systems like these, disparities are often detectable only in the aggregate.
Disparate-impact analysis provides the primary mechanism for identifying and evaluating such aggregated harms. It enables agencies to detect disparities, assess whether they are justified by legitimate objectives, and intervene when they are not.[320] Without outcome-focused review, agencies risk deploying untested systems that appear neutral while embedding discriminatory effects—replicating old stereotypes under the guise of technical efficiency. Outcome-sensitive analytical tools—whether in environmental justice, lending, or AI—are often criticized as ideological not because they impose bias, but precisely because they make structural disparities visible.[321]
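The aggregate character of these harms can be made concrete with a short sketch. The following is a minimal, hypothetical illustration of how an outcome-focused audit might compute per-group selection rates and apply the EEOC’s longstanding four-fifths rule of thumb; the group labels and counts are invented for the example and do not describe any agency’s actual methodology:

```python
# Hypothetical illustration of an outcome-focused (disparate impact) audit.
# The group names and counts below are invented for the example.

def selection_rates(outcomes):
    """Per-group favorable-outcome rates.

    `outcomes` maps each group to a (favorable, total) pair.
    """
    return {group: fav / total for group, (fav, total) in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group rate to the highest group rate.

    Under the EEOC's four-fifths rule of thumb, a ratio below 0.8 is
    commonly treated as evidence of adverse impact warranting further
    scrutiny and justification.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# A facially neutral screening tool can look unremarkable decision by
# decision, yet show a stark pattern in the aggregate:
audit = {
    "group_a": (90, 200),  # 45% receive the favorable outcome
    "group_b": (60, 200),  # 30% receive the favorable outcome
}
ratio = disparate_impact_ratio(audit)  # 0.30 / 0.45, approximately 0.667
needs_review = ratio < 0.8             # below the four-fifths threshold
```

No individual decision in this example reveals bias; only the aggregated rates do, which is why outcome-focused review cannot readily be replaced by intent-based inquiry.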
Indeed, eliminating disparate impact shields algorithmic bias from scrutiny and enables its persistence.[322] Scholars have warned that doing so can reinforce the notion that historically marginalized groups merit worse outcomes.[323] The Trump Administration’s rejection of disparate impact reflects a broader effort to redefine racial equity as unconstitutional, dismantle civil rights enforcement, and embed those priorities in the algorithmic infrastructure of governance.
The Trump Administration moved to disable disparate impact analysis across multiple enforcement regimes, further insulating algorithmic decision making from accountability. The DOJ, for example, rescinded core Title VI regulatory provisions that had prohibited federally funded recipients from engaging in practices with unjustified disparate effects, limiting enforcement to intentional discrimination.[324] The EEOC stopped investigating disparate impact claims and closed pending cases.[325] The National Credit Union Administration removed disparate impact analysis from its fair-lending supervision and examination materials.[326]
III. Containing Ethnonationalism in Government AI
By accelerating the government’s adoption of AI without safeguards against democratic harms, the Trump Administration advances its ethnonationalist agenda. Its policies increase the likelihood that federal agencies will procure AI that is racially biased, culturally flattening, and behaviorally manipulative in domains ranging from employment and benefits eligibility to housing, education, healthcare, and law enforcement. Containing these harms requires a durable statutory framework that embeds democratic values into the design, acquisition, and use of government AI.[327]
Existing frameworks are ill-equipped to confront this shift. The AI in Government Act of 2020 directs OMB to issue guidance encouraging agency innovation while protecting civil rights,[328] but imposes no binding requirements. It mandates no fairness audits, prohibits no discriminatory outcomes, and creates no enforceable rights.
The Biden Administration’s 2023 AI Executive Order and 2024 Use and Acquisition Memoranda introduced the strongest federal safeguards to date, yet they remain incomplete. The Biden directives did not bind independent agencies[329]—continuing the norm of respecting independent agency autonomy.[330] They instructed the Commerce Department to develop watermarking standards,[331] but imposed no uniform requirements that agencies label synthetic media or avoid disseminating disinformation. They directed OMB to issue guidance on curbing inference of sensitive traits,[332] but stopped short of prohibiting covert psychological manipulation.[333] And while the Biden directives encouraged monitoring for reduced AI performance in non-English languages,[334] they imposed no obligation to advance language access or pluralism.
These limitations were compounded by delay. Though the 2020 Act required OMB to act within 270 days, the Biden directives did not appear until March and September 2024—nearly three years later.[335] In the interim, most agencies failed to adopt meaningful safeguards. And because the Biden directives were not statutory, the Trump Administration swiftly repealed them.
Recent legislative proposals—including the PREPARED for AI Act, the AI LEAD Act, and the Federal A.I. Governance and Transparency Act—would establish stronger protections than the 2020 Act or the Trump Administration’s 2025 actions.[336] But they largely focus on transparency and procedural oversight, while remaining silent on the ideological and structural stakes of AI governance.
This Part proposes the Equitable AI in Government Act (the “Equitable AI Act”)—a statutory framework grounded in four democratic principles: fairness, pluralism, authenticity, and autonomy. Each addresses a core harm: bias, homogenization, deception, and manipulation.
A. Strengthening Democracy Through the Equitable AI Act
The Equitable AI Act provides a structural response to the deeper threats posed by AI systems developed or deployed in service of ethnonationalist governance. It seeks to embed the core values of a racially inclusive democracy into all AI systems used by federal agencies or by government contractors or subcontractors operating on their behalf (collectively “covered entities”).[337] It imposes binding obligations, includes enhanced enforcement mechanisms, and codifies a strengthened disparate impact standard—exceeding the scope of any current legal framework.[338]
1. Baseline AI Obligations to Advance Democratic Values
The Act establishes baseline protections for particular AI applications used by covered entities, regardless of risk classification, to ensure fidelity to democratic values.
Fairness is foundational; technologies used by or on behalf of the government must be nondiscriminatory. For example, the Act prohibits covered entities from using facial recognition technologies in the United States unless the system has been empirically validated to be nondiscriminatory in both treatment and effect. Additional anti-discrimination requirements are detailed in Section III.A.2.
Pluralism is equally essential. AI systems that optimize for dominant linguistic, epistemic, or cultural norms can marginalize minority groups even without overt bias. To counter this, covered entities must routinely audit their systems for representational bias and performance disparities across demographic groups, recognizing that persistent failures to serve certain populations constitute harm. The Act requires adoption of assistive technologies—translation tools, transcription devices, and chatbots—to ensure meaningful access across languages and dialects.[339]
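To make this audit obligation concrete, the following minimal sketch shows one way a subgroup performance check of the kind the Act contemplates could be implemented; the group labels, records, and the five-percentage-point tolerance are assumptions chosen for the example, not requirements drawn from the Act:

```python
# Minimal sketch of a subgroup performance audit: compare a system's
# accuracy across demographic or language groups and flag persistent
# gaps. All data and the tolerance below are hypothetical.

def accuracy_by_group(records):
    """`records` is a list of (group, predicted, actual) tuples."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        if predicted == actual:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def performance_gaps(records, tolerance=0.05):
    """Groups whose accuracy trails the best-served group by more than
    `tolerance` (an assumed audit threshold, not a statutory figure)."""
    accuracy = accuracy_by_group(records)
    best = max(accuracy.values())
    return {g: best - a for g, a in accuracy.items() if best - a > tolerance}

# Example: a benefits-screening model that performs well on
# English-language records but underperforms on Spanish-language ones.
records = (
    [("english", "grant", "grant")] * 9 + [("english", "deny", "grant")]
    + [("spanish", "grant", "grant")] * 7 + [("spanish", "deny", "grant")] * 3
)
gaps = performance_gaps(records)  # flags "spanish" with a gap near 0.2
```

Under this framing, a group that appears in `gaps` audit cycle after audit cycle exhibits exactly the kind of persistent failure-to-serve that the Act would treat as a cognizable harm requiring mitigation.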
Authenticity in public communication is vital to democratic legitimacy. The Act prohibits government deception and requires that all official communications—synthetic or traditional—bear a verifiable seal of authenticity, and that all generative content include visible disclosures and embedded metadata indicating artificial origin. These safeguards serve as the digital equivalent of a federal badge, enabling verification and deterring impersonation. While no disclosure regime is infallible, these requirements empower journalists, watchdogs, and the public to distinguish genuine communications from forgeries. They also draw a categorical boundary: The federal government will not engage in intentional deception, regardless of the medium or tool.
Autonomy requires protection from covert behavioral manipulation.[340] The Act prohibits covered entities from using AI systems for psychological profiling, microtargeting, or behavioral manipulation without prior independent review, public disclosure, and consent.[341] Systems that adaptively influence user behavior must disclose their purpose, operational context, and foreseeable risks. The statute also bans the use of AI to suppress protest or conduct surveillance motivated by political, racial, or cultural bias.
2. Enhanced Oversight for High-Risk AI Use Cases
The Equitable AI in Government Act imposes heightened obligations for a defined set of high-risk AI applications to ensure that the most consequential uses of AI undergo rigorous scrutiny. By distinguishing high-risk systems from low-impact tools, the Act focuses oversight where democratic values are most at risk—while avoiding overregulation of benign uses like scheduling or spell-checking. This tiered framework sharpens regulatory precision, conserves resources, and protects space for innovation.
The Act defines a “high-risk artificial intelligence use case” as any development, deployment, or procurement of AI by a covered entity that plausibly risks material harm in critical domains—including employment, education, housing, utilities, healthcare, credit, insurance, financial services, criminal justice, law enforcement, surveillance, immigration enforcement, child welfare, legal services, voting, public accommodations, government benefits (including fraud prevention), and comparable services with similar effects on individual rights or life outcomes.[342]
Covered entities must conduct a preliminary evaluation of each AI use to determine whether harm is plausible.[343] If it is not plausible, they must document their intended use of the AI, evaluation methodology, and reasoning for the finding, and submit this to the DOJ’s AI Civil Rights Enforcement Office and the Government Accountability Office.[344]
If harm is plausible, an independent auditor must conduct a full system evaluation before deployment.[345] The audit must assess design, training data, bias testing, stakeholder consultation, disparate impact risks, behavioral manipulation, and mitigation strategies.[346] The auditor’s report must also evaluate whether AI is suitable for the task, compare it to non-automated alternatives, and recommend safeguards. Covered entities must act on those recommendations.
Before broad implementation, the system must undergo operational testing under real-world conditions,[347] including edge cases and high-stakes scenarios. Testing results, limitations, and risk strategies must be documented in a risk management plan.[348] Once deployed, high-risk systems must undergo annual impact audits, with public summaries required to ensure accountability.[349]
The Act also guarantees procedural protections for those affected by high-risk AI. Covered entities must ensure a right to human review, continuous human oversight, and accessible complaint channels. All such complaints must be reviewed and resolved by a human decision-maker within a reasonable time.[350]
3. Enforcement Infrastructure
To institutionalize these commitments, the Act requires that each agency designate a Chief AI Equity Officer and fund staff with expertise in civil rights and algorithmic accountability. It establishes an AI Civil Rights Enforcement Office within the DOJ, staffed by career officials responsible for monitoring compliance and conducting AI impact assessments. Compliance waivers are permitted for no more than one year—with written justification and OMB approval based on a documented finding that compliance would compromise public safety, civil rights, or national security.[351] To guard against underenforcement or political capture, the Act authorizes a private cause of action and allows state attorneys general to sue the federal government for violations.
Transparency is essential. Agencies must inventory all AI use cases, submit them to the Government Accountability Office, and publish plain-language summaries describing each system’s function, risk classification, and any inequitable outcomes.[352] The statute includes whistleblower protections for current and former employees who expose noncompliant AI practices.[353]
B. The Implications of the Equitable AI Act
If enacted, the Equitable AI in Government Act would represent a foundational shift in how artificial intelligence is designed, deployed, and governed across the federal landscape. By statutorily embedding enforceable democratic safeguards into agency operations and contractor obligations, the Act ensures that public-sector AI advances—rather than undermines—fairness, pluralism, authenticity, and autonomy. Technologies that once reproduced racial, linguistic, and cultural hierarchies would instead help reduce past disparities and prevent new ones from emerging.
One of the Act’s most significant effects would be the professionalization of AI governance within the federal government.[354] Requiring agencies to appoint Chief AI Equity Officers and staff with relevant expertise would build long-term institutional capacity to evaluate and manage algorithmic harms. Over time, this public-interest capacity would help shape the norms, expectations, and legal frameworks governing commercial AI—catalyzing more effective regulation across both public and private sectors.
The Act’s digital authenticity provisions would also help stabilize democratic discourse. By mandating provenance and disclosure for synthetic content, it would reduce impersonation, counter deepfakes, and clarify official communications. These safeguards, once normalized in the public sector, would likely influence the private marketplace through reputational pressure or new regulation.
The Act also promotes a more pluralistic AI ecosystem. Translation tools, chatbots, and recommendation systems would be reoriented to support diverse dialects and cultural frameworks—enhancing mutual understanding and coalitional politics, and harnessing innovation to advance democratic inclusion.
Globally, the Act would strengthen U.S. credibility and competitiveness. At a moment when other nations are wary of U.S. tech dominance,[355] AI governance that advances fairness, pluralism, transparency, and autonomy offers a compelling alternative. Allies, multilateral institutions, and private actors would be more likely to engage with American AI tools seen as respectful of pluralism. Domestically, stronger public trust—especially among marginalized communities—would expand adoption in key sectors like education, workforce development, and healthcare. Rights-preserving AI would support both inclusion and economic growth.
C. Grappling with Challenges to the Equitable AI Act
No meaningful reform escapes opposition, and the Equitable AI Act is no exception. By grounding AI policy in the values of a racially inclusive democracy, the Act enters one of the most contested terrains of modern governance: the intersection of emerging technology, civil rights, and public power. As such, it will likely face criticism from across the ideological spectrum. This subpart anticipates and responds to key critiques.
1. The Limits of Regulating Public AI Alone
Some advocates may argue that the Act is too narrow because it governs only AI systems developed, acquired, or used by federal agencies and contractors. From this perspective, the most serious threats to racial inclusion—including algorithmic bias, cultural erasure, synthetic deception, and behavioral profiling—are concentrated in private-sector systems beyond the Act’s reach.[356] Critics may further note that federal spending on AI accounts for less than four percent of total U.S. private-sector investment in AI,[357] and that the Trump Administration’s emphasis on using off-the-shelf commercial tools limits the government’s leverage in dictating contract terms and shaping private-sector norms.[358] In this view, regulating public AI is necessary but insufficient.
Still, government AI regulation remains an essential starting point.[359] The federal government has a unique obligation to serve a diverse public, not a narrow faction.[360] When it adopts democratic safeguards, it legitimizes its own systems and helps establish ethical baselines that shape state, local, and private sector norms. In the absence of comprehensive federal legislation, public procurement remains “one of the few levers governments have to push for public values.”[361]
2. Political Obstruction by Ethnonationalists
Some may argue that the Equitable AI Act is politically infeasible—that ethnonationalist politicians will prevent passage by Congress. But the Act’s impact does not hinge on immediate federal enactment. Its modular design enables state legislatures, governors, attorneys general, civil rights agencies, and local officials to adopt its provisions through statutes, executive orders, procurement policies, and agency guidance. Jurisdictions such as California,[362] Colorado,[363] and New York City[364]—as well as international bodies like the European Union[365]—have already adopted AI regulations grounded in principles of fairness, transparency, and human rights. This decentralized pathway allows meaningful protections to emerge even without federal action.
Subnational and transnational uptake can also catalyze broader change. Adoption by a subset of jurisdictions can shape markets, establish best practices, and reset expectations for responsible deployment. Early adopters can create a de facto regulatory floor that pressures companies to align across jurisdictions. This mirrors past precedent: State-led civil rights and environmental reforms helped lay the groundwork for later federal legislation.[366] Even during obstructive political cycles, local and global momentum can help build institutional capacity and civil society networks ready to act when a federal opportunity returns.
3. Innovation, Competitiveness, and the False Choice Between Regulation and Growth
Critics may argue that the Equitable AI Act’s requirements—fairness audits, transparency mandates, and bans on behavioral manipulation—will slow innovation, increase compliance costs, and deter investment.[367] Some will go further, invoking a global arms race: In a competition with authoritarian regimes such as China, they argue, safeguards on U.S. firms risk conceding technological leadership to adversaries that prioritize speed over democratic values.[368] This view treats regulation and innovation as a zero-sum tradeoff and law as a drag on progress.
But this logic is both empirically unproven and conceptually flawed. Democracy and law are not inherent obstacles to innovation—they are its foundation. Embedding fairness, pluralism, authenticity, and autonomy into AI systems fosters trust, reduces litigation and reputational risk, and increases adoption by aligning with public norms. As AI spreads into education, employment, healthcare, and democratic participation, success will be judged not only by technical performance but by legitimacy in the eyes of diverse publics. Deregulation in the name of “global AI dominance” may temporarily accelerate deployment, but it undermines long-term competitiveness by risking backlash from civil society, foreign regulators, and communities who view such systems as threats to autonomy, civic inclusion, and economic opportunity.
The Equitable AI Act offers a strategic alternative to this race-to-the-bottom logic. It counters rising global skepticism toward U.S. tech by codifying principles that make American AI more acceptable to democratic allies. The European Union’s AI Act already ties market access to human rights compliance,[369] and other jurisdictions are advancing digital sovereignty frameworks to resist U.S.-based homogenization.[370] In this context, the Act strengthens U.S. competitiveness by positioning American AI as principled and trustworthy. Long-term leadership will turn on whether AI systems are adopted and trusted across democratic societies—conditions that depend on respect for rights, public legitimacy, and accountable governance.
4. Speech, Association, and Equal Protection Challenges
The Equitable AI Act is also likely to face constitutional challenges from political opponents who frame its safeguards as violations of the First Amendment and the Equal Protection Clause. These arguments, however, lack merit.
Critics may argue that provisions like independent audits, public dashboards, and watermarking of synthetic content chill speech, burden association, or compel cultural conformity. But the Supreme Court has long upheld disclosure requirements that serve substantial governmental interests, such as preventing deception or supporting democratic integrity.[371] Transparency mandates—like fairness audits and content labeling—fall well within that jurisprudence.
Similarly, conditions on government contracting, such as requiring bias audits or support for dialect diversity, do not compel ideological conformity or suppress association. They are neutral conditions governing the use of public funds. The U.S. Supreme Court has repeatedly held that government may choose to fund one activity over another, and that contractors cannot challenge such choices as compelled speech.[372] The Equitable AI Act fits squarely within this line of precedent: it regulates how public funds and systems are used, not what individuals must affirm or believe.
To be sure, conditions that extend beyond a program’s scope and seek to enforce ideological conformity may raise First Amendment concerns.[373] This may explain why the January 2025 AI Executive Order attempts to reframe fairness safeguards as “ideological bias.”[374] But the Equitable AI Act does not regulate private speech. It governs conduct—specifically the design and deployment of AI in high-stakes settings such as employment, housing, and public benefits. Just as public accommodations laws prohibit racial exclusion even if it is rooted in expressive beliefs—and have been upheld in cases like Roberts v. United States Jaycees[375]—the Act imposes civil rights obligations on government-serving technologies without dictating ideology. Such AI safeguards are not censorship; they are civil rights protections.
Critics may also claim that race-conscious audits or fairness metrics violate Equal Protection, especially after Students for Fair Admissions v. Harvard.[376] But the Act does not allocate benefits or burdens based on race. Policies do not trigger strict scrutiny simply because they detect discrimination and adopt alternatives that are less discriminatory toward underrepresented groups.[377] If the Court ultimately interprets Equal Protection so narrowly that government cannot even study or consider the racial impact of its actions, institutional reforms—including court reform—may be necessary to better align the judiciary with the needs of a racially diverse democracy.[378] Until then, the Act remains firmly within existing constitutional precedent.
5. Polarization, Cultural Anxiety, and Backlash
Some who are agnostic about questions of access, culture, and history may argue that embedding racial, linguistic, and cultural pluralism into AI law politicizes technology, fosters polarization, and privileges certain groups over a presumed neutral norm. They may contend that considering discriminatory impact risks reinforcing group distinctions and eroding civic unity.
But pluralism is not the enemy of national unity—it is its foundation. The Equitable AI Act ensures that government AI systems serve all Americans, not just those who resemble historical majorities. Ignoring race and ethnicity in AI design does not create neutrality—it entrenches the values, language, and data of those already most represented. The Act strengthens democratic legitimacy by ensuring that all communities have a stake in how AI systems are built and governed. In a racially inclusive democracy, inclusion is not ideological—it is a condition of equal citizenship.
Critics may also argue that codifying equity and pluralism into AI law risks inflaming tensions or fueling ethnonationalist backlash.[379] But fear of backlash is no reason to retreat. Legislative inertia in anticipation of reaction has historically deepened exclusion. The Equitable AI Act is necessary because it affirms equal dignity, treatment, and voice at a time when AI threatens to harden inequality. Rather than capitulate to cultural retrenchment, the Act offers a forward-looking framework rooted in democratic values. It does not provoke division—it responds to it by insisting that government AI reflect the pluralism that already defines the nation.
Conclusion
Artificial intelligence will not remain neutral in the face of political, demographic, and cultural shifts—it will either reinforce existing power structures or help build a more inclusive democracy. The Trump Administration’s AI policies reveal how the technology can be harnessed to advance ethnonationalist aims: entrenching racial hierarchy, erasing cultural pluralism, and legitimizing exclusion under the guise of neutrality and innovation. Yet this future is not inevitable.
The Equitable AI in Government Act offers a clear alternative. By embedding fairness, pluralism, transparency, and authenticity into the architecture of AI governance, the Act challenges the notion that we must choose between technological progress and a racially inclusive society. The United States must lead in both. The task ahead is to develop legal structures that ensure AI better reflects the diversity, dignity, and democratic aspirations of all people.
Exec. Order No. 14,148, 90 Fed. Reg. 8237 (Jan. 20, 2025) (repealing Executive Order 14,110 and dozens of other Biden Administration executive orders); Exec. Order No. 14,110, 88 Fed. Reg. 75191 (Nov. 1, 2023) [hereinafter 2023 AI Order]; see Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, White House Off. of Sci. & Tech. Pol’y (2022), https://bidenwhitehouse.archives.gov/ostp/ai-bill-of-rights/ [https://perma.cc/M5B3-N8SE] (asserting that “[y]ou should not face discrimination by algorithms and systems should be used and designed in an equitable way.”).
Exec. Order No. 14,179, 90 Fed. Reg. 8741 (Jan. 23, 2025) [hereinafter January 2025 AI Order].
Exec. Order No. 14,319, 90 Fed. Reg. 35389 (July 23, 2025).
Exec. Order No. 14,365, 90 Fed. Reg. 58499 (Dec. 11, 2025).
See Morgan Zimmerman (@morganzimmerman1) & Varoon Mathur (@varoonmathuromb), 2024 Federal Artificial Intelligence Use Case Inventory, GitHub, https://github.com/ombegov/2024-Federal-AI-Use-Case-Inventory [https://perma.cc/N8M2-2N9B] (last visited Jan. 9, 2026) (compiling over 2100 AI use cases reported by 41 federal agencies, including 351 “rights-impacting and/or safety-impacting use cases”).
DHS/CBP/PIA-006 Automated Targeting System, U.S. Dep’t of Homeland Sec., https://www.dhs.gov/publication/automated-targeting-system-ats-update [https://perma.cc/Y678-JL6H] (last updated Dec. 11, 2024) (describing ATS as a DHS decision support tool that compares traveler, cargo, and conveyance data and generates risk assessments used to identify individuals and shipments that may require additional scrutiny); Rachel Levinson-Waldman & José Guillermo Gutiérrez, DHS Must Overhaul Its Flawed Automated Systems, Brennan Ctr. for Just. (Oct. 24, 2023), https://www.brennancenter.org/our-work/analysis-opinion/dhs-must-overhaul-its-flawed-automated-systems [https://perma.cc/3PDV-A29W] (noting that CBP’s Automated Targeting System is “an algorithmically powered analytical database” that creates risk profiles used to trigger additional government scrutiny).
Artificial Intelligence Use Case Inventory, U.S. Dep’t of Homeland Sec., https://data.aclum.org/storage/2025/01/DHS_www_dhs_gov_data_AI_inventory.pdf [https://perma.cc/7YUK-ZYTJ] (last visited Jan. 13, 2025) (listing multiple machine learning and natural language processing automated systems used by USCIS for text analytics, forecasting, evidence classification, and other functions); Steven Hubbard, Invisible Gatekeepers: DHS’ Growing Use of AI in Immigration Decisions, Am. Immigr. Council (May 9, 2025), https://www.americanimmigrationcouncil.org/blog/invisible-gatekeepers-dhs-growing-use-of-ai-in-immigration-decisions/ [https://perma.cc/Y6YR-7FFX] (noting DHS’s publicly disclosed inventory of 105 active AI use cases, including screening, biometric identification, and fraud detection systems across CBP, ICE, and USCIS).
Using AI to Secure the Homeland, U.S. Dep’t of Homeland Sec., https://www.dhs.gov/ai/using-ai-to-secure-the-homeland [https://perma.cc/XT7B-VWF9] [hereinafter AI & Homeland Security] (last updated May 28, 2025) (describing TSA’s use of machine learning models for passenger identification, risk assessment, and prohibited item detection); DHS/TSA/PIA-018 TSA Secure Flight Program, U.S. Dep’t of Homeland Sec., https://www.dhs.gov/publication/dhstsapia-018-tsa-secure-flight [https://perma.cc/QR72-3GFZ] (last updated June 16, 2025) [hereinafter TSA Secure Flight Program] (describing Secure Flight as a risk-based passenger prescreening program used to match passenger data against federal watch lists before boarding).
See Lauren Loricchio, Oversight of IRS AI and Data Analytics Faces Setback, Tax Notes (July 14, 2025), https://www.taxnotes.com/featured-news/oversight-irs-ai-and-data-analytics-faces-setback/2025/07/11/7srgx [https://perma.cc/R8NS-LQPE] (reporting on a study finding that IRS algorithms were associated with Black taxpayers who claimed the Earned Income Tax Credit being three to five times more likely to be audited than non-Black taxpayers).
See Amber Tran, Katie Adams & Maya Sandalow, Mapping the Rise of AI in Federal Health Agencies, Bipartisan Pol’y Ctr. (Aug. 10, 2025), https://bipartisanpolicy.org/article/mapping-the-rise-of-ai-in-federal-health-agencies/ [https://perma.cc/9YAZ-UVVE].
AI Inventory, U.S. Dep’t of Just., https://www.justice.gov/ai/ai-inventory [https://perma.cc/6EHZ-UHLM] (last updated Jan. 21, 2025) (listing 241 Department of Justice–reported AI use cases in 2024 across components, including 124 that are safety and/or rights-impacting).
See Raphael Satter & Humeyra Pamuk, US State Department Cable Says Agency Using AI to Help Staff Job Panels, Reuters (June 9, 2025, at 17:45 ET), https://www.reuters.com/world/us/us-state-department-cable-says-agency-using-ai-help-staff-job-panels-2025-06-09/ [https://perma.cc/H2Z6-UXZN].
Meet Emma, Our Virtual Assistant, U.S. Citizenship & Immigr. Servs., https://www.uscis.gov/tools/meet-emma-our-virtual-assistant [https://perma.cc/JLR2-JXF5] (last updated Apr. 13, 2018) (describing “Emma” as USCIS’s online virtual assistant that answers questions and guides users around the agency’s website).
Machine Algorithm for Report Surveillance (MARS), U.S. Dep’t of Veterans Affairs (Sep. 1, 2022), https://department.va.gov/ai/inventory-item/machine-algorithm-for-report-surveillance-mars/ [https://perma.cc/N5RN-MN8U] (last visited Jan. 9, 2026).
GSA Announces New Partnership with OpenAI, Delivering Deep Discount to ChatGPT Gov-Wide Through MAS, U.S. Gen. Servs. Admin. (Aug. 6, 2025), https://www.gsa.gov/about-us/newsroom/news-releases/gsa-announces-new-partnership-with-openai-delivering-deep-discount-to-chatgpt-08062025 [https://perma.cc/BHV7-PXBN] (announcing that GSA will make ChatGPT Enterprise available to participating federal agencies for a nominal fee of $1 as part of the federal AI Action Plan).
Memorandum from Russell T. Vought, Director, Off. of Mgmt. & Budget (Apr. 3, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-21-Accelerating-Federal-Use-of-AI-through-Innovation-Governance-and-Public-Trust.pdf [https://perma.cc/9L59-HPDQ] [hereinafter 2025 Use Memo] (repealing and replacing Memorandum from Shalanda D. Young, Director, Off. of Mgmt. & Budget (Mar. 28, 2024), https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf [https://perma.cc/P75B-X5F9] [hereinafter 2024 Use Memo]); Memorandum from Russell T. Vought, Director, Off. of Mgmt. & Budget (Apr. 3, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/02/M-25-22-Driving-Efficient-Acquisition-of-Artificial-Intelligence-in-Government.pdf [https://perma.cc/Q2V6-54W5] [hereinafter 2025 Acquisition Memo] (repealing and replacing Memorandum from Shalanda D. Young, Director, Off. of Mgmt. & Budget (Sep. 24, 2024), https://whitehouse.gov/wp-content/uploads/2024/10/M-24-18-AI-Acquisition-Memorandum.pdf [https://perma.cc/F24M-MH7H] [hereinafter 2024 Acquisition Memo]).
See Will Knight, Under Trump, AI Scientists Are Told to Remove ‘Ideological Bias’ from Powerful Models, Wired (Mar. 14, 2025, at 19:29 ET), https://www.wired.com/story/ai-safety-institute-new-directive-america-first/ [https://perma.cc/ET5H-GAX7].
See discussion infra Sections II.A and I.B.2.
See Danielle Keats Citron & Spencer Overton, Digital Nationalism, 174 U. Pa. L. Rev. (forthcoming 2026) (further developing the concept of digital ethnonationalism by examining how algorithmic systems, platform governance, and state technology policy embed racial, religious, and cultural hierarchy into the infrastructure of public life); Georgios Samaras, The Digital Ethnonation: Multimodal Extreme-Right Propaganda and National Identity on YouTube, 31 Mediterranean Pol., Aug. 2025, at 2, https://doi.org/10.1080/13629395.2025.2545670 [https://perma.cc/2GSX-MGLC] (defining digital ethnonationalism as “the adaptation of nationalist ideology to online platforms, intensified by emotional mobilization and multimodal rhetoric to radicalize users”); Sabina Mihelj & César Jiménez-Martínez, Digital Nationalism: Understanding the Role of Digital Media in the Rise of ‘New’ Nationalism, 27 Nations & Nationalism 331, 331 (2021), https://eprints.lse.ac.uk/120036/1/Nations_and_Nationalism_2021_Mihelj.pdf [https://perma.cc/UE55-URKY] (arguing that digital media routinely reproduce national belonging through “the architecture of internet domains, the bias of algorithms and the formation of national digital ecosystems” thereby reinforcing the “sense of belonging to a world of nations” and creating conditions for more fragmented and exclusionary forms of nationalism in the digital realm).
See Laura Weidinger et al., Ethical and Social Risks of Harm from Language Models, DeepMind, 2021, at 11, https://arxiv.org/pdf/2112.04359 [https://perma.cc/8VWD-K378]; Ngozi Okidegbe, To Democratize Algorithms, 69 UCLA L. Rev. 1688, 1710–11 (2023) (addressing how the use of algorithms opposes legitimate state practices and breaks down democratic participation); White House Off. of Sci. & Tech. Pol’y, supra note 1 (detailing examples of discriminatory harms in various sectors of automated systems).
See Taylor Sorensen et al., A Roadmap to Pluralistic Alignment, arXiv (Feb. 7, 2024), at 5, https://arxiv.org/abs/2402.05070 [https://perma.cc/TW2H-5XRC]; Mitchell L. Gordon et al., Jury Learning: Integrating Dissenting Voices into Machine Learning Models, arXiv (Feb. 7, 2022), at 7–16, https://arxiv.org/abs/2202.02950 [https://perma.cc/9VKM-T8CY]; Taylor Sorensen et al., Value Kaleidoscope: Engaging AI with Pluralistic Human Values, Rights, and Duties, arXiv (Sep. 2, 2023), at 2–4, https://arxiv.org/abs/2309.00779 [https://perma.cc/6H34-9XYP].
See Nestor Maslej et al., Stan. Univ. Inst. for Hum.-Centered A.I., A.I. Index Report 12 (2025), https://hai.stanford.edu/assets/files/hai_ai_index_report_2025.pdf [https://perma.cc/U4KE-7WAA] (“The United States continues to be the leading source of notable AI models.”).
See Robert Schertzer & Eric Taylor Woods, The New Nationalism in America and Beyond: The Deep Roots of Ethnic Nationalism in the Digital Age 2, 40–41 (2022) (explaining that ethnonationalism involves a belief system that only certain groups of people belong to the nation state); Bart Bonikowski, Ethno-nationalist Populism and the Mobilization of Collective Resentment, 68 British J. Socio. 181, 187 (2017) (explaining that ethnonationalists believe that only people with “appropriate immutable, or at least highly persistent, traits, such as national ancestry, native birth, majority religion, [or] dominant racial group membership” are deemed legitimate members of the nation); Lorenzo Marsili, Ethnonationalism in a Multipolar World, Green Eur. J. (Dec. 5, 2025), https://www.greeneuropeanjournal.eu/ethnonationalism-in-a-multipolar-world/ [https://perma.cc/P6N5-KMGJ] (“[T]oday’s great technological, social, and geopolitical transformations are triggering the rise of ethnonationalist attitudes everywhere across the globe.”).
See, e.g., Bartosz Brzeziński, Max Griera & Hanne Cokelaere, Europe Cracks Down on Migration. The Far Right Is Cheering, Politico (Mar. 11, 2025, at 18:56 ET), https://www.politico.eu/article/europe-migration-crackdown-far-right-deportations/ [https://perma.cc/G2CE-NGY6]; Joel K. Bourne Jr., As America Changes, Some Anxious Whites Feel Left Behind, Nat’l Geographic (Mar. 12, 2018), https://www.nationalgeographic.com/magazine/article/race-rising-anxiety-white-america [https://perma.cc/5ZRL-UJQR]. See generally Ashley Jardina, White Identity Politics (Cambridge Univ. Press 2019) (discussing identity politics); Eric Kaufmann, Whiteshift: Populism, Immigration, and the Future of White Majorities (2019) (discussing the impact of immigration on racial majorities); Robert Schertzer, Understanding Today’s Populism as Ethnic Nationalism, Migration Pol’y Ctr. Blog (Feb. 21, 2020), https://migrationpolicycentre.eu/understanding-todays-populism-ethnic-nationalism/ [https://perma.cc/4YHB-8PKK] (stating that populism is “among the most studied political phenomenon.”); Ishaan Tharoor, The Cultural Anxiety Fueling France’s Protests, Brexit and Trump, Wash. Post (Dec. 10, 2018), https://www.washingtonpost.com/world/2018/12/10/cultural-anxiety-fueling-frances-protests-brexit-trump/ [https://perma.cc/BQD9-2CF7] (describing the relationship between the Trump Administration and European Union).
See discussion infra Section I.A.
See discussion infra Section I.B.
See discussion infra Section II.B.1.
The Article builds on seminal work of other leading scholars on race, data privacy, and algorithmic bias. See, e.g., Anita L. Allen, Dismantling the “Black Opticon”: Privacy, Race Equity, and Online Data-Protection Reform, 131 Yale L.J. F. 907, 907 (2022); Jessica Eaglin, Racializing Algorithms, 111 Calif. L. Rev. 753, 757 (2023); Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 Calif. L. Rev. 671, 674 (2016).
Ethnonationalism may overlap with adjacent ideologies. See, e.g., Kenneth P. Vickery, ‘Herrenvolk’ Democracy and Egalitarianism in South Africa and the U.S. South, 16 Compar. Stud. Soc’y & Hist. 309, 309 (1974) (defining Herrenvolk Democracy as “a parliamentary regime in which the exercise of power and suffrage is restricted . . . to the dominant group”); Lila Thulin, There’s a Term for Trump’s Political Style: Authoritarian Populism, U.C. Berkeley News (Jan. 21, 2025), https://news.berkeley.edu/2025/01/21/theres-a-term-for-trumps-political-style-authoritarian-populism/ [https://perma.cc/48TX-VLRR] (contrasting pure authoritarianism with “authoritarian populism,” which focuses on nativism and opposes pluralism); Understanding White Christian Nationalism, Yale Inst. for Soc. & Pol’y Studs. (Oct. 4, 2022), https://isps.yale.edu/news/blog/2022/10/understanding-white-christian-nationalism [https://perma.cc/H8S6-USZC] (tracing the history of White Christian Nationalism and explaining its relationship with other ideologies).
See Robert L. Tsai, Immigration Unilateralism and American Ethnonationalism, 51 Loy. U. Chi. L.J. 523, 533–35 (2020) (describing how in the United States immigrants were initially excluded from the dominant culture and later were evaluated as “prospective future Americans” provided they assimilated).
See Ethno-Nationalism Denies Millions Their Citizenship Rights - Anti-Racism Expert, U.N. Hum. Rts. Council: Off. of the High Comm’r (July 5, 2018), https://www.ohchr.org/en/stories/2018/07/ethno-nationalism-denies-millions-their-citizenship-rights-anti-racism-expert [https://perma.cc/6K6Q-A7EG] (“In the past, European countries relied on ethno-nationalism to exclude populations in their colonies from effective citizenship . . . . Today, migrants are the new targets . . . often under the pretext of ethnic purity and religious, cultural or linguistic preservation.”).
The U.S. Constitution subsidized slavery by: (1) inflating the power of slaveholding states by counting three-fifths of their enslaved populations for apportionment purposes, see U.S. Const. art. I, § 2, cl. 3; id. art. II, § 1, cl. 3; (2) initially preventing Congress from prohibiting states from importing Black people to serve as slaves until 1808, see id. art. I, § 9, cl. 1; and (3) giving enslavers the right to capture Black people who had escaped to free states. See id. art. IV, § 2, cl. 3; see also Juan Perea, Race and Constitutional Law Casebooks: Recognizing the Proslavery Constitution, 110 Mich. L. Rev. 1123, 1135 (2012) (reviewing George William Van Cleve, A Slaveholders’ Union (Univ. Chicago Press 2010)).
Almost all states—including those outside of the South—limited voting to white males or would eventually do so. See Cal. Const. art. II, § 1 (1849) (explicitly limiting suffrage to white males); Alexander Keyssar, The Right to Vote: The Contested History of Democracy in the United States 55 (Basic Books 2000) (“[E]very state that entered the union after 1819 prohibited blacks from voting . . . .”).
See, e.g., Naturalization Act of 1790, ch. 3, 1 Stat. 103 (explicitly limiting U.S. citizenship to “free white persons”).
See Chinese Exclusion Act, ch. 126, 22 Stat. 58 (1882); Chae Chan Ping v. United States, 130 U.S. 581, 595 (1889) (upholding Chinese Exclusion Act and finding that Chinese laborers who had not assimilated to American culture “conflict[ed]” with whites and posed competition that resulted in “irritation, proportionately deep and bitter”); Juan Perea, Immigration Policy as a Defense of White Nationhood, 12 Geo. J.L. & Mod. Critical Race Persp. 1, 4 (2020).
Johnson v. M’Intosh, 21 U.S. 543, 592 (1823) (holding that the United States acquired title over land from Tribes through the doctrine of conquest); Dred Scott v. Sandford, 60 U.S. 393, 407 (1857) (finding that Black people had “no rights which the white man was bound to respect.”).
See Eric Foner, Reconstruction: America’s Unfinished Revolution 1863–1877, at 538–92 (Harper Perennial Modern Classics 2014).
Id.; see Keyssar, supra note 33, at 105–07.
Eric S. Yellin, Racism in the Nation’s Service 1–2 (Univ. of N.C. Press 2013) (detailing the Wilson Administration’s 1913 drive to segregate the federal government workforce).
United States v. Wong Kim Ark, 169 U.S. 649, 732 (1898).
See, e.g., Takao Ozawa v. United States, 260 U.S. 178, 198 (1922) (holding that a Japanese immigrant was ineligible to become a naturalized U.S. citizen); United States v. Bhagat Singh Thind, 261 U.S. 204, 214 (1923) (limiting those eligible to become naturalized citizens as “white” persons to those of “European parentage”).
National Origins Act of 1924, Pub. L. No. 68-139, 43 Stat. 153 (establishing immigration quotas that heavily favored immigrants from Europe and banned immigrants from Asia).
See Juan F. Perea et al., Race and Races: Cases and Resources for a Diverse America 294–95 (3d ed. West Acad. Publ’g 2015).
Downes v. Bidwell, 182 U.S. 244, 282, 286 (1901).
Elk v. Wilkins, 112 U.S. 94, 122–23 (1884).
See, e.g., H.R. Con. Res. 108, 83d Cong., 67 Stat. B132 (1953) (ending the federal trust relationship and recognition of Tribal sovereignty); Act of Aug. 15, 1953, Pub. L. No. 83-280, 67 Stat. 588 (transferring federal jurisdiction over Indian Country in several states without Tribal consent).
See Steven Levitsky, The Third Founding: The Rise of Multiracial Democracy and the Authoritarian Reaction Against It, 110 Calif. L. Rev. 1991, 1991 (2022) (“A multiracial democracy is simply a democracy in a diverse society in which . . . the rights of individuals of all ethnic groups are protected equally.”); Danielle Allen & E. Glen Weyl, The Real Dangers of Generative AI, 35 J. Democracy 147, 147 (2024) (defining “plural societies” as “free and democratic societies operating under conditions of social diversity”).
See Maxine Burkett, Litigating Separate and Equal: Climate Justice and the Fourth Branch, 72 Stan. L. Rev. Online 145, 152 (2020) (“[T]he Civil Rights Act of 1964 and the Voting Rights Act of 1965 . . . effectively ended U.S. de jure racial segregation.”); Title VII of the Civil Rights Act of 1964, 42 U.S.C. §§ 2000e-2000e-17 (prohibiting employment discrimination on the basis of race, color, religion, sex, and national origin); 42 U.S.C. § 2000d (doing the same for federal financial assistance).
Voting Rights Act of 1965, Pub. L. No. 89–110, § 2, 79 Stat. 437, 437 (codified as amended at 52 U.S.C. § 10301(a)).
Rosina Lozano, Vote Aquí Hoy: The 1975 Extension of the Voting Rights Act and the Creation of Language Minorities, 35 J. Pol’y Hist. 68, 68–69 (2022).
8 U.S.C. § 1152(a)(1)(A), (4)–(5).
Although the Immigration and Nationality Act removed some racial restrictions, it introduced restrictions on immigration from the “Western hemisphere,” which reduced migration from Latin America. See Kevin R. Johnson, Fear of an “Alien Nation”: Race, Immigration, and Immigrants, 7 Stan. L. & Pol’y Rev. 111, 112 (1996).
Fair Housing Act of 1968, 42 U.S.C. §§ 3601–3619. Though residential segregation has decreased over time, this change has been modest. See Tracy Hadden Loh, Christopher Coes & Becca Buthe, The Great Real Estate Reset, Brookings Inst. (Dec. 16, 2020), https://www.brookings.edu/articles/trend-1-separate-and-unequal-neighborhoods-are-sustaining-racial-and-economic-injustice-in-the-us/ [https://perma.cc/8KDH-FBLG] (“[T]he neighborhood of an average white resident in the 100 largest metropolitan areas became slightly less white between 2000 and 2018, decreasing from 79% white to 71%.”).
Compare Levitsky, supra note 47, at 1992–93 (discussing modern trends toward increased support for racially inclusive democracy), with Samuel P. Huntington, Who Are We? The Challenges to America’s National Identity xvi (Simon & Schuster Paperbacks, 2004) (arguing that American culture is defined by Anglo-Protestant values, and that this culture is threatened by immigrants who do not entirely assimilate into it).
Feyisayo Oyolola & Jeanne Batalova, European Immigrants in the United States, Migration Pol’y Inst. (Jan. 11, 2024), https://www.migrationpolicy.org/article/european-immigrants-united-states-2022 [https://perma.cc/2WW9-69BY].
Paul Taylor & D’Vera Cohn, A Milestone En Route to a Majority Minority Nation, Pew Rsch. Ctr. (Nov. 7, 2012), https://www.pewresearch.org/social-trends/2012/11/07/a-milestone-en-route-to-a-majority-minority-nation/ [https://perma.cc/LA6K-GFC7]. But see D’Vera Cohn, Census History: Counting Hispanics, Pew Rsch. Ctr. (Mar. 3, 2010), https://www.pewresearch.org/social-trends/2010/03/03/census-history-counting-hispanics-2/ [https://perma.cc/QUD6-E5NH] (noting that the 1970 Census was the first to attempt to record the size of the Hispanic population).
Population Estimates, July 1, 2024, U.S. Census Bureau (Dec. 24, 2024), https://web.archive.org/web/20250202050945/https:/www.census.gov/quickfacts/fact/table/US/PST045224 [https://perma.cc/UG5K-LS4Y].
William H. Frey, The US Will Become ‘Minority White’ in 2045, Census Projects, Brookings Inst. (Mar. 14, 2018), https://www.brookings.edu/articles/the-us-will-become-minority-white-in-2045-census-projects/ [https://perma.cc/NG5D-P9J9].
R. Eric Petersen et al., Cong. Rsch. Serv., R42365, Representatives and Senators: Trends in Member Characteristics Since 1945 (2014).
Katherine Schaeffer, 119th Congress Brings New Growth in Racial, Ethnic Diversity to Capitol Hill, Pew Rsch. Ctr. (Jan. 21, 2025), https://www.pewresearch.org/short-reads/2025/01/21/119th-congress-brings-new-growth-in-racial-ethnic-diversity-to-capitol-hill/ [https://perma.cc/DDE8-4EMN].
Alex Seitz-Wald, Obama Had a Coalition. Biden Built a New One and Here’s How It’s Different, NBC News (Oct. 30, 2020, at 12:38 ET), https://www.nbcnews.com/politics/2020-election/obama-had-coalition-biden-built-new-one-here-s-how-n1245431 [https://perma.cc/45NV-BWCY].
See New Public Agenda Report: Americans Widely Agree on Racial Equality, But Differ Over the Impacts of Racism and How to Address It, Pub. Agenda (June 15, 2023), https://publicagenda.org/news/new-public-agenda-report-americans-widely-agree-on-racial-equality-but-differ-over-the-impacts-of-racism-and-how-to-address-it/ [https://perma.cc/SP5P-ZC4Y] (noting that “[n]early two-thirds (65%) of Americans believe that overcoming racism requires changes in laws and institutions as well as in individual attitudes”).
See New Civil Rights Monitor Poll Finds 73 Percent of Voters Worried About Political Violence, The Leadership Conf. on Civ. & Hum. Rts. (Oct. 7, 2024), https://civilrights.org/2024/10/07/civil-rights-monitor-poll-2024/ [https://perma.cc/JR6E-WFS6] (“79 percent [of respondents] continue to say that America’s diversity makes us stronger.”).
Though economic inequality has improved since 1960, the racial wealth gap persists and has even widened since 1980. See Dedrick Asante-Muhammad et al., Still a Dream: Over 500 Years to Black Economic Equality, Inst. for Pol’y Stud. (Aug. 16, 2023), https://ips-dc.org/report-still-a-dream-500-years-black-economic-equality/ [https://perma.cc/54MK-Z6J4] (finding that the poverty rate for Black Americans has dropped considerably since 1963, but that the wealth gap has only slightly narrowed).
See Zoltan L. Hajnal, Dangerously Divided: How Race and Class Shape Winning and Losing in American Politics 29–30 (Cambridge Univ. Press 2020).
Jardina, supra note 24, at 5–6.
Id. at 215.
See David A. Graham, Trump’s White Identity Politics Appeals to Two Different Groups, The Atl. (Aug. 8, 2019), https://www.theatlantic.com/ideas/archive/2019/08/who-does-trumps-white-identity-politics-reach/595189/ [https://perma.cc/GP8N-BQ2K].
Jardina, supra note 24, at 3–4, 267.
See, e.g., Okla. Stat. Ann. tit. 70, § 24–157 (West 2021) (restricting gender and race-related diversity training in higher education); Fla. Stat. Ann. § 1004.06(2)(a) (West 2023) (prohibiting public colleges from maintaining DEI programs). See generally The Heritage Found., Mandate for Leadership: The Conservative Promise (2023) (presenting the policy agenda, commonly known as “Project 2025,” produced by over 100 organizations to prepare for a conservative administration).
See discussion infra Part II.
See Nikole Hannah-Jones, How Trump Upended 60 Years of Civil Rights in Two Months, N.Y. Times Mag. (June 27, 2025), https://www.nytimes.com/2025/06/27/magazine/trump-civil-rights-law-discrimination.html [https://perma.cc/F3JT-5QCN] (detailing how the second Trump Administration dismantled key civil rights protections); Trump Administration Civil and Human Rights Rollbacks, The Leadership Conf. on Civ. & Hum. Rts., https://civilrights.org/trump-rollbacks/ [https://perma.cc/Q7XN-FR7Q] (cataloguing the second Trump Administration’s rollbacks of civil and human rights protections).
See, e.g., Janis Bowdler & Benjamin Harris, Racial Inequality in the United States, U.S. Dep’t of the Treasury (July 21, 2022), https://home.treasury.gov/news/featured-stories/racial-inequality-in-the-united-states [https://perma.cc/WRR9-E2HX] (providing an analysis of racial disparities across health, education, and income in the United States); Rates of High School Completion and Bachelor’s Degree Attainment Among Persons Age 25 and over, by Race/Ethnicity and Sex: Selected Years, 1910 Through 2020, Nat’l Ctr. for Educ. Stat. tbl.104.10, https://nces.ed.gov/programs/digest/d19/tables/dt19_104.10.asp [https://perma.cc/H535-5JLF] (detailing the racial education attainment gap); Labor Force Characteristics by Race and Ethnicity, 2019, Bureau of Lab. Stat. (Dec. 2020), https://www.bls.gov/opub/reports/race-and-ethnicity/2019/ [https://perma.cc/WMB8-G4CA] (indicating that in 2019, the unemployment rate was 6.1% for Black adults compared to 3.3% for white adults).
See, e.g., Exec. Order No. 14,173, 90 Fed. Reg. 8633 (Jan. 21, 2025) (“Illegal DEI and DEIA policies . . . undermine our national unity . . . [through a] pernicious identity-based spoils system.”); Exec. Order No. 14,281, 90 Fed. Reg. 17537 (Apr. 23, 2025) (declaring, without basis, that a “pernicious movement” has transformed equal opportunity into a regime of “results preordained by irrelevant immutable characteristics”).
See, e.g., Exec. Order No. 14,148, 90 Fed. Reg. 8237, 8240 (Jan. 20, 2025) (alleging, without evidence, that DEI has “corrupted” government).
600 U.S. 181, 230 (2023) (holding that certain race-conscious admissions programs violate the Equal Protection Clause and Title VI).
Coal. for TJ v. Fairfax Cnty. Sch. Bd., 68 F.4th 864, 882 (4th Cir. 2023) (holding that facially race-neutral admissions policies that resulted in increased minority enrollment did not violate the Equal Protection Clause), cert. denied, 218 L. Ed. 2d 71 (2024).
Exec. Order No. 14,151, 90 Fed. Reg. 8339 (Jan. 20, 2025). The Trump Administration enacted similar directives in subsequent policy pronouncements. See, e.g., Keeping Americans Safe in Aviation, 90 Fed. Reg. 8651 (Jan. 21, 2025) (directing the cessation of DEI initiatives in the Federal Aviation Administration); Exec. Order No. 14,185, 90 Fed. Reg. 8763 (Jan. 27, 2025) (directing the cessation of DEI initiatives in the Department of Defense); Exec. Order No. 14,210, 90 Fed. Reg. 9669 (Feb. 11, 2025) (directing agency heads to prioritize DEI-related functions for elimination in employee reduction-in-force efforts). The Trump policies reversed Biden Administration policies that addressed inequitable practices within the federal government. See Exec. Order No. 13,985, 86 Fed. Reg. 7009 (Jan. 20, 2021) (directing federal agencies to embed equity in their programs and policies); Exec. Order No. 14,035, 86 Fed. Reg. 34593 (June 25, 2021) (requesting each agency head to appoint a DEI officer).
Letter from Russell T. Vought, Director, Off. of Mgmt. and Budget, to Susan Collins, Chair, Committee on Appropriations (May 2, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/05/Fiscal-Year-2026-Discretionary-Budget-Request.pdf [https://perma.cc/9XWA-VQE7] (“The Budget also eliminates funding for the National Institute on Minority and Health Disparities (-$534 million), which is replete with DEI expenditures.”).
Melissa Angell, Trump Severs Funding for Minority Business Centers as He Dismantles the MBDA, Inc. (Apr. 21, 2025), https://www.inc.com/melissa-angell/trump-severs-funding-for-minority-business-centers-as-he-dismantles-the-mbda/91178803 [https://perma.cc/FG24-HVHA]. But see Jory Heckman, Federal Court Blocks Trump Administration’s Plan to Scrap 4 Small Agencies, Fed. News Network (Nov. 26, 2025, at 16:32 ET), https://federalnewsnetwork.com/reorganization/2025/11/federal-court-blocks-trump-administrations-plan-to-scrap-4-small-agencies/ [https://perma.cc/N3C7-JPMR] (reporting that a federal court in Rhode Island blocked the Administration’s plan to eliminate the Minority Business Development Agency).
Exec. Order No. 14,238, 90 Fed. Reg. 13043 (Mar. 14, 2025) (reducing the functions and personnel of the Minority Business Development Agency and the Community Development Financial Institutions Fund to the minimum required by statute).
T. Scott Kelly, Christopher J. Near & Zachary V. Zagger, Trump Administration Proposes Elimination of OFCCP, Launches New Opinion Letter Program for Labor Guidance, Ogletree Deakins (June 4, 2025), https://ogletree.com/insights-resources/blog-posts/trump-administration-proposes-elimination-of-ofccp-launches-new-opinion-letter-program-for-labor-guidance/ [https://perma.cc/MT9J-57DV] (describing the Department of Labor’s proposal to “eliminate” the Office of Federal Contract Compliance Programs, which enforced antidiscrimination compliance in federal contracting).
Jory Heckman, EPA’s ‘Environmental Justice’ Employees Face Layoffs This Summer, Fed. News Network (Apr. 22, 2025, at 07:15 ET), https://federalnewsnetwork.com/workforce/2025/04/epas-environmental-justice-employees-face-layoffs-this-summer/ [https://perma.cc/2H9W-L6H5]; Aman Azhar, With Latest Round of Terminations, Trump Administration Continues Dismantling EPA’s Environmental Justice Portfolio, Inside Climate News (Aug. 28, 2025), https://insideclimatenews.org/news/28082025/trump-epa-environmental-justice-terminations/ [https://perma.cc/9T9L-2QCV] (“The U.S. Environmental Protection Agency this week terminated more than two dozen remaining staffers in the now-defunct Office of Environmental Justice and External Civil Rights (OEJECR), advancing the Trump administration’s efforts to dismantle the environmental justice initiatives of the president’s Democratic predecessors.”).
Keely Quinlan, State, Local Organizations Ask Commerce Dept. to Reinstate Digital Equity Act, StateScoop (June 20, 2025), https://statescoop.com/state-local-commerce-ntia-reinstate-digital-equity-act/ [https://perma.cc/RW2G-Q36A].
Letter from Craig Trainor, Acting Assistant Sec’y for C.R., Department of Education, to colleagues (Feb. 14, 2025), https://www.ed.gov/media/document/dear-colleague-letter-sffa-v-harvard-109506.pdf [https://perma.cc/T228-ZU58]. The letter was enjoined by a federal court in April 2025. Nat’l Educ. Ass’n v. U.S. Dep’t of Educ., 779 F. Supp. 3d 149 (D.N.H. 2025); see also Am. Fed’n of Tchrs. v. U.S. Dep’t of Educ., No. SAG-25-628, slip op. at 60–62 (D. Md. Aug. 14, 2025) (holding the February 14, 2025 Dear Colleague Letter and a related requirement that states and schools certify their compliance with the Department of Education’s interpretations of Title VI and Students for Fair Admissions v. Harvard unlawful, and vacating both under the Administrative Procedure Act).
Press Release, U.S. Dep’t of Educ., U.S. Department of Education Opens Investigations into Five Universities for Alleged Exclusionary Scholarships Benefitting Illegal Alien Students (July 23, 2025), https://www.ed.gov/about/news/press-release/us-department-of-education-opens-investigations-five-universities-alleged-exclusionary-scholarships-benefitting-illegal-alien-students; see also Nate Raymond, U.S. Justice Department Sues Virginia Over In-State Tuition for Migrants, Reuters (Dec. 30, 2025, at 12:54 ET), https://www.reuters.com/legal/government/us-justice-department-sues-virginia-over-in-state-tuition-migrants-2025-12-30/ [https://perma.cc/MUM6-F4FH] (reporting DOJ’s lawsuit challenging Virginia’s in-state tuition policy for undocumented students as conflicting with federal immigration law and disadvantaging out-of-state U.S. citizens).
Exec. Order No. 14,279, 90 Fed. Reg. 17529 (Apr. 23, 2025).
Exec. Order No. 14,235, 90 Fed. Reg. 11885 (Mar. 7, 2025) (excluding from public loan forgiveness programs employees engaged in a “substantial illegal purpose” such as “aiding and abetting” immigration law violations or “illegal discrimination”); see Adam S. Minsky, What Trump’s New Student Loan Forgiveness Order Means for 3 Million Borrowers, Forbes (Mar. 10, 2025, at 10:34 ET), https://www.forbes.com/sites/adamminsky/2025/03/10/what-trumps-new-student-loan-forgiveness-order-means-for-3-million-borrowers/ [https://perma.cc/TH5J-R2QE] (noting that the language “engaging in a pattern of aiding and abetting illegal discrimination” could be interpreted “to implicate any . . . public entity that supports DEI initiatives”).
See Exec. Order No. 14,230, 90 Fed. Reg. 11781 (Mar. 6, 2025); Exec. Order No. 14,237, 90 Fed. Reg. 13039 (Mar. 14, 2025); Exec. Order No. 14,246, 90 Fed. Reg. 13997 (Mar. 25, 2025); Exec. Order No. 14,250, 90 Fed. Reg. 14549 (Mar. 27, 2025). The Paul Weiss executive order was rescinded after the firm agreed to end its diversity, equity, and inclusion practices and to commit $40 million in pro bono services. Exec. Order No. 14,244, 90 Fed. Reg. 13685 (Mar. 21, 2025). See also Mike Spector et al., How Trump’s Crackdown on Law Firms is Undermining Legal Defenses for the Vulnerable, Reuters (July 31, 2025, at 06:00 ET), https://www.reuters.com/investigations/trumps-war-big-law-leads-firms-retreat-pro-bono-work-underdogs-2025-07-31/ [https://perma.cc/96Q7-FGL6] (“Dozens of major law firms, wary of political retaliation, have scaled back pro bono work, diversity initiatives and litigation that could place them in conflict with the Trump administration, a Reuters investigation found. . . . Fourteen civil rights groups said the law firms they count on to pursue legal challenges are hesitating to engage with them . . . .”).
Letter from Brendan Carr, Chairman, Fed. Commc’ns Comm’n, to Brian Roberts, CEO, Comcast Corp. (Feb. 11, 2025), https://www.fcc.gov/sites/default/files/Chairman-Carr-Letter-to-Comcast-02112025.pdf [https://perma.cc/N7UG-WRS8] (ordering an FCC investigation into Comcast and NBCUniversal over alleged “invidious forms of DEI”); Tom Wheeler, Not ‘Deregulation’ but Heavy-Handed Regulation at the Trump FCC, Brookings Inst. (Feb. 25, 2025), https://www.brookings.edu/articles/not-deregulation-but-heavy-handed-regulation-at-the-trump-fcc/ [https://perma.cc/W4AZ-X95C] (criticizing the FCC’s use of threats “to micromanage the activities of [media] companies . . . .”).
See, e.g., Jeff Green, FCC’s Carr Threatens to Block M&A for Companies with DEI (2), Bloomberg L. (Mar. 22, 2025, at 01:39 ET), https://news.bloomberglaw.com/ip-law/fccs-carr-threatens-to-block-m-a-for-companies-with-dei-plans [https://perma.cc/JK9R-YN7N] (describing FCC Chairman Brendan Carr’s warning that the agency may deny merger approvals to companies that maintain DEI initiatives).
David Shepardson, T-Mobile Ending DEI Programs as It Seeks U.S. FCC Approval for 2 Deals, Reuters (July 9, 2025, at 03:00 ET), https://www.reuters.com/sustainability/society-equity/t-mobile-ending-dei-programs-it-seeks-fcc-approval-two-deals-2025-07-09/ [https://perma.cc/435A-8AW9]; Maria Aspan, Verizon Ends DEI Policies to Get FCC’s Blessing for Its $20 Billion Frontier Deal, NPR (May 19, 2025, at 05:00 ET), https://www.npr.org/2025/05/19/nx-s1-5402863/verizon-fcc-frontier-dei-trump [https://perma.cc/EFA2-EMRF]; David Shepardson, AT&T Commits to Ending DEI Programs, Reuters (Dec. 2, 2025, at 04:46 ET), https://www.reuters.com/sustainability/society-equity/att-commits-ending-dei-programs-2025-12-02/ [https://perma.cc/9V8V-QBL3] (reporting AT&T’s commitment to end DEI programs in an FCC filing associated with regulatory approval of a transaction).
Memorandum from Russell T. Vought, Director, Off. of Mgmt. & Budget, to the Heads of Executive Departments and Agencies, Eliminating Funding of Unlawful Discrimination (Sep. 12, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/09/M-25-33-Eliminating-Funding-of-Unlawful-Discrimination.pdf [https://perma.cc/25SM-53ZR].
Id. at 5–9.
Id. at 1.
Lydia Wheeler, Justice Department Using Fraud Law to Target Companies on DEI, Wall St. J. (Dec. 28, 2025, at 09:00 ET), https://www.wsj.com/politics/policy/trump-doj-dei-fraud-investigations-93213d52 [https://perma.cc/WU6T-HMQD].
Exec. Order No. 14,281, 90 Fed. Reg. 17537 (Apr. 23, 2025).
Id.
Rescinding Portions of Department of Justice Title VI Regulations To Conform More Closely With the Statutory Text and To Implement Executive Order 14,281, 90 Fed. Reg. 235 (Dec. 10, 2025) (eliminating disparate-impact liability from DOJ’s Title VI regulations by rescinding the requirement that recipients of federal funds refrain from practices with unjustified disparate effects, thereby limiting DOJ Title VI enforcement to intentional discrimination); see Exec. Order No. 14,281, 90 Fed. Reg. 17537 (Apr. 23, 2025) (ordering the restoration of equality of opportunity and meritocracy). See also Alex Guillen & Hassan Ali Kanu, DOJ Rolls Back Anti-Discrimination Rules, Politico (Dec. 9, 2025, at 03:58 ET), https://www.politico.com/news/2025/12/09/justice-department-discrimination-disparate-impact-00683362 [https://perma.cc/T2GN-Q52P] (reporting that “repealing the government’s 50-year-old ‘disparate impact’ standards will make it harder to challenge potential bias in housing, criminal law, employment, environmental regulations and other policy areas”).
Claire Savage & Alexandra Olson, Civil Rights Agency Drops a Key Tool Used to Investigate Workplace Discrimination, Associated Press, https://apnews.com/article/trump-discrimination-ai-eeoc-disparate-impact-a2e8aba11f3d3f095df95d488c6b3c40 [https://perma.cc/8JVB-ZWKX] (last updated Sep. 30, 2025, at 06:56 ET) (reporting on EEOC memo that the agency will stop investigating workplace discrimination claims based on disparate-impact theory and will close existing disparate-impact cases).
Letter from Kyle Hauptman, Chairman, Nat’l Credit Union Admin., to Federally Insured Credit Unions (Sep. 4, 2025), https://ncua.gov/regulation-supervision/letters-credit-unions-other-guidance/removal-disparate-impact [https://perma.cc/WAJ5-855W] (announcing that NCUA will no longer examine for disparate-impact risk or request disparate impact analyses in fair-lending supervision).
Exec. Order No. 14,173 (repealing Exec. Order No. 11,246, 30 Fed. Reg. 12319 (Sep. 24, 1965)); see Chris Isidore, Trump Rescinds Measure Used to Fight Workplace Discrimination for 60 Years, CNN (Jan. 23, 2025, at 12:23 ET), https://www.cnn.com/2025/01/23/business/trump-rescinds-anti-discrimination-order [https://perma.cc/Z42N-M4GN] (“[Executive Order No. 11,246] allowed investigations into the contractors’ employment practices and often found instances of discrimination even the affected employees didn’t know about.”).
Exec. Order No. 14,280, 90 Fed. Reg. 17533 (Apr. 23, 2025).
Michelle Diament, Ed Department Plans To Scale Back IDEA Data Collection, Disability Scoop (Sep. 4, 2025), https://www.disabilityscoop.com/2025/09/04/ed-department-plans-to-scale-back-idea-data-collection/31608/ [https://perma.cc/NBH4-7F22] (describing the Department of Education’s proposal to stop publishing “significant disproportionality” data required under the Individuals with Disabilities Education Act, including state-reported data identifying school districts with high rates of students from particular racial groups who have disabilities, are placed in restrictive educational settings, or are subject to discipline); Agency Information Collection Activities; Comment Request; Annual State Application Under Part B of the Individuals with Disabilities Education Act as Amended in 2004, 90 Fed. Reg. 41063 (Aug. 22, 2025), https://www.federalregister.gov/documents/2025/08/22/2025-16051/agency-information-collection-activities-comment-request-annual-state-application-under-part-b-of [https://perma.cc/966T-2FAZ] (seeking public comment under the Paperwork Reduction Act on proposed changes to the IDEA Part B annual state application, including revisions to required data reporting).
EPA Removes EJScreen from Its Website, Env’t Data & Governance Initiative (Feb. 12, 2025), https://envirodatagov.org/epa-removes-ejscreen-from-its-website/ [https://perma.cc/LCA3-23S5]; Stacy Woods, The Trump Administration’s Deletion of Environmental Justice Data Does Real Harm, Union of Concerned Scientists (Feb. 27, 2025, at 11:08 ET), https://blog.ucs.org/stacy-woods/the-trump-administrations-deletion-of-environmental-justice-data-does-real-harm/ [https://perma.cc/FQ36-JY66].
Ryan Lucas, 70% of the DOJ’s Civil Rights Division Lawyers Are Leaving Because of Trump’s Reshaping, NPR (May 19, 2025, at 05:00 ET), https://www.npr.org/2025/05/19/g-s1-66906/trump-civil-rights-justice-exodus [https://perma.cc/7C2U-NS36]; Sarah N. Lynch, Ex-Employees of U.S. Justice Department Blast ‘Destruction’ of Civil Rights Unit, Reuters (Dec. 9, 2025, at 15:50 ET), https://www.reuters.com/legal/government/ex-employees-us-justice-department-blast-destruction-civil-rights-unit-2025-12-09/ [https://perma.cc/NES5-D7NU] (reporting that about 75% of attorneys left DOJ’s Civil Rights Division during 2025 through resignations and related departures); Letter from Hannah Abelow et al., The Destruction of DOJ’s Civil Rights Division: Why It Matters, The Just. Connection (Dec. 9, 2025), https://www.thejusticeconnection.org/wp-content/uploads/2025/12/Civil-Rights-Division-Sign-On-Letter.pdf [https://perma.cc/L2FD-589S] (urging the Department of Justice to preserve the Civil Rights Division’s enforcement capacity and warning that recent administrative actions threaten the enforcement of federal civil rights laws).
Trump Administration Closes Three DHS Offices Focused on Civil Rights and Oversight, Econ. Pol’y Inst. (Apr. 3, 2025), https://www.epi.org/policywatch/trump-administration-closes-three-dhs-offices-focused-on-civil-rights-and-oversight/ [https://perma.cc/PV64-SNPK].
Natalie Alms, Social Security Shutters its Civil Rights and Transformation Offices, Gov’t Exec. (Feb. 26, 2025), https://www.govexec.com/management/2025/02/social-security-shutters-its-civil-rights-and-transformation-offices/403310/ [https://perma.cc/6FFD-XGTU].
Collin Binkley, Civil Rights Work Is Slowing as Trump Dismantles the Education Department, Agency Data Shows, Associated Press (July 18, 2025, at 18:25 ET), https://apnews.com/article/education-department-trump-civil-rights-disability-54c4b4a228b4b30e6a6751ec745b3915 [https://perma.cc/H84G-Q8WY]. But see Education Department Workers Targeted in Layoffs Are Returning to Tackle Civil Rights Backlog, Fed. News Network (Dec. 8, 2025, at 09:26 ET), https://federalnewsnetwork.com/workforce/2025/12/education-department-workers-targeted-in-layoffs-are-returning-to-tackle-civil-rights-backlog/ [https://perma.cc/2VWQ-P6P6] (reporting that the Administration brought back staff previously slated for layoffs to address a mounting civil-rights complaint backlog).
Benjamin Krause, VA Closes Equity Office Amid Budget Shift: What It Means for Veterans, Disabled Veterans (Apr. 2, 2025), https://www.disabledveterans.org/va-closes-equity-office-amid-budget-shift/ [https://perma.cc/5HRS-CNXD].
Exec. Order No. 14,253, 90 Fed. Reg. 14563 (Mar. 27, 2025).
Id.
Letter from Lindsey Halligan et al., Special Assistant to the President and Senior Associate Staff Secretary, White House, to Lonnie G. Bunch III, Secretary, Smithsonian Inst., Internal Review of Smithsonian Exhibitions and Materials (Aug. 12, 2025), https://www.whitehouse.gov/briefings-statements/2025/08/letter-to-the-smithsonian-internal-review-of-smithsonian-exhibitions-and-materials/ [https://perma.cc/2LRH-WUEJ] (demanding Smithsonian internal review materials and information regarding exhibitions and programming); Letter from Vince Haley, Assistant to the President and Director of the Domestic Policy Council, White House, to Lonnie G. Bunch III, Secretary, Smithsonian Inst., Review of Smithsonian Exhibitions and Materials (Dec. 18, 2025), https://www.whitehouse.gov/briefings-statements/2025/12/letter-to-the-smithsonian-review-of-smithsonian-exhibitions-and-materials/ [https://perma.cc/Q99Y-HBPE] (reiterating demands for Smithsonian records and framing compliance as relevant to continued federal engagement and support).
Exec. Order No. 14,253; Ashraf Khalil, Confederate Statues in DC Area to be Restored and Replaced in Line with Trump’s Executive Order, Associated Press (Aug. 9, 2025, at 22:16 CT), https://www.kcci.com/article/confederate-statues-in-dc-area-to-be-restored/65645547 [https://perma.cc/VTJ5-UZ3Q] (reporting the restoration and return of Confederate memorials pursuant to the Administration’s directives); Steve Beynon, Trump Says All Army Bases Stripped of Confederate Namesakes Will Have Names Restored, Military.com (June 10, 2025, at 18:41 ET), https://www.military.com/daily-news/2025/06/10/trump-says-all-army-bases-stripped-of-confederate-namesakes-will-have-names-restored.html [https://perma.cc/TP52-83PU]; Lolita C. Baldor, Army Restores the Names of Seven Bases that Lost Their Confederate-linked Names Under Biden, Associated Press (June 10, 2025, at 20:57 ET), https://apnews.com/article/trump-army-bases-confederate-names-69f63771d0e7ca859d42c485129d1228 [https://perma.cc/M7DW-VE9P] (reporting that the Administration sought to restore pre-2021 Confederate Army base names by reassigning those names to alternative, non-Confederate namesakes (renaming the former Fort Lee for Spanish-American War Medal of Honor recipient Fitz Lee rather than Robert E. Lee), thereby preserving the original base names while formally complying with the statutory requirement that bases no longer honor Confederate figures).
Chloe Veltman, National Park Signage Encourages the Public to Help Erase Negative Stories at Its Sites, NPR (June 10, 2025, at 19:08 ET), https://www.npr.org/2025/06/10/nx-s1-5429773/national-park-service-signs [https://perma.cc/X44J-YLFZ].
Exec. Order No. 14,190, 90 Fed. Reg. 8853 (Jan. 29, 2025).
Exec. Order No. 14,185, 90 Fed. Reg. 8763 (Jan. 27, 2025).
Lolita C. Baldor, Pentagon Orders Military to Pull Library Books About Diversity, Anti-Racism, Gender Issues, PBS News (May 9, 2025, at 17:37 ET), https://www.pbs.org/newshour/politics/pentagon-orders-military-to-pull-library-books-about-diversity-anti-racism-gender-issues [https://perma.cc/YBW7-SD97].
John Ismay, Who’s In and Who’s Out at the Naval Academy’s Library?, N.Y. Times (Apr. 11, 2025), https://www.nytimes.com/2025/04/11/us/politics/naval-academy-banned-books.html [https://perma.cc/WLU4-LLYJ].
Lolita C. Baldor, Most Books Pulled from Naval Academy Library are Back on Shelves in Latest DEI Turn, Associated Press (May 21, 2025, at 18:57 ET), https://apnews.com/article/military-libraries-dei-book-purge-d6df4f5c82d92763f2060d7f4b99cd95 [https://perma.cc/5YM5-RT9N] (reporting that most removed titles were returned after review while underscoring the vague and problematic nature of centralized ideological scrutiny of library holdings).
Jeff Passan, Defense Dept. Restores Story on Jackie Robinson’s Military Service, ESPN (Mar. 19, 2025, at 13:02 ET), https://www.espn.com/mlb/story/_/id/44316899/defense-department-removes-story-robinson-military-service [https://perma.cc/R23M-CDA5]; Leah Willingham, Jennifer Sinco Kelleher & Tara Copp, Pentagon Restores Some Webpages Honoring Minority Service Members but Defends DEI Purge, PBS (Mar. 18, 2025, at 14:15 ET), https://www.pbs.org/newshour/politics/pentagon-restores-some-webpages-honoring-minority-service-members-but-defends-dei-purge [https://perma.cc/M2AW-NJG5] (describing the Department of Defense’s removal of “[t]housands of pages honoring contributions by women and minority groups,” though some were later restored).
Exec. Order No. 14,224, 90 Fed. Reg. 11363 (Mar. 1, 2025); see Gabe Gutierrez & Rebecca Shabad, Trump Signs an Executive Order Making English the Official U.S. Language, NBC News, https://www.nbcnews.com/politics/donald-trump/trump-sign-executive-order-making-english-official-us-language-rcna194210 [https://perma.cc/G87L-VYGM] (last updated Mar. 1, 2025, at 19:25 ET).
Exec. Order No. 13,166, 65 Fed. Reg. 50121 (Aug. 11, 2000); Know Your Rights: Executive Order Threatens Access to Federal Programs, Legal Aid Found. of L.A. (Mar. 12, 2025), https://lafla.org/stories-events/know-your-rights-executive-order-threatens-access-to-federal-programs/ [https://perma.cc/H4L6-J3YS].
Exec. Order No. 14,224.
Memorandum from U.S. Dep’t of Just. to all federal agencies, Implementation of Exec. Order No. 14,224: Designating English as the Official Language of the United States of America 1–4 (July 14, 2025), https://www.justice.gov/ag/media/1407776/dl [https://perma.cc/9SDM-7HPZ].
Ileana Najarro, Trump Admin. Quietly Rescinds Guidance on English Learners’ Rights, Educ. Week (Aug. 20, 2025), https://www.edweek.org/teaching-learning/trump-admin-quietly-rescinds-guidance-on-english-learners-rights/2025/08 [https://perma.cc/K5F7-L3WY].
See, e.g., GSA Order OGR 2335.1B, Language Services Policy (Sep. 17, 2025), https://www.gsa.gov/directives/files?file=2025-09%2FOCR+2335.1B%2C+Language+Services+Policy+(Sept+2025).pdf [https://perma.cc/3JT5-3HBY] (rescinding GSA’s 2024 Language Access Plan, stating that limited English proficiency services in federally conducted programs are not required and limiting them to “mission-critical” circumstances, and asserting that Title VI does not impose a broad obligation to provide multilingual services).
See Heba Gowayed, Trump’s Obsession with Immigration Is Really an Obsession with Segregation, The Guardian (Feb. 12, 2025, at 07:00 ET), https://www.theguardian.com/us-news/commentisfree/2025/feb/12/trump-immigration-segregation-dei [https://perma.cc/S4KZ-ZZYK].
Exec. Order No. 14,160, 90 Fed. Reg. 8449 (Jan. 20, 2025). Several district courts enjoined the enforcement of Executive Order 14,160; however, in Trump v. CASA, Inc., No. 24A884, slip op. (U.S. June 27, 2025), the Supreme Court held that universal injunctions likely exceed the equitable authority of the federal courts and narrowed the injunctions accordingly; see Efrén C. Olivares, Analyzing the Supreme Court’s Dangerous Decision in Trump v. CASA, Nat’l Immigr. L. Ctr. (June 27, 2025), https://www.nilc.org/articles/analyzing-scotus-trump-v-casa/ [https://perma.cc/24Y8-F7VH].
See generally United States v. Wong Kim Ark, 169 U.S. 649 (1898) (rejecting prior, racially exclusionary interpretations of birthright citizenship). See also Exec. Order No. 14,160; Amy Howe, Supreme Court Agrees to Hear Trump’s Challenge to Birthright Citizenship, SCOTUSblog (Dec. 5, 2025), https://www.scotusblog.com/2025/12/supreme-court-agrees-to-hear-trumps-challenge-to-birthright-citizenship/ [https://perma.cc/Q2MW-PWFE] (reporting that the Supreme Court will hear argument in the spring on whether President Trump’s executive order contravenes the Fourteenth Amendment’s Citizenship Clause after lower courts blocked enforcement).
Exec. Order No. 14,163, 90 Fed. Reg. 8459 (Jan. 20, 2025).
Id.
U.S. Dep’t of State, Proposed Refugee Admissions for Fiscal Year 2025: Report to the Congress 53, 55–56 (2024) (reporting fiscal year (FY) 2023 actual and FY 2024 projected refugee numbers).
Exec. Order No. 14,204, 90 Fed. Reg. 9497 (Feb. 7, 2025); see Brian Bennett & Nik Popli, Trump Welcomes Planeload of White South Africans, While Shutting Out Other Refugees, Time (May 12, 2025, at 14:18 ET), https://time.com/7284895/south-african-refugees-landed-trump/ [https://perma.cc/RHJ4-DMXF]; see also Kali Holloway, The Real Reason Those White South Africans Are Here, The Nation (May 16, 2025), https://www.thenation.com/article/politics/white-south-african-refugees-trump/ [https://perma.cc/AX6F-5M5N] (criticizing the Trump Administration’s offer of refugee status to white South Africans as symbolically affirming white supremacy and prioritizing white grievance over global human need).
Presidential Determination No. 2025-13, Presidential Determination on Refugee Admissions for Fiscal Year 2026, 90 Fed. Reg. 49005 (Sep. 30, 2025), https://www.federalregister.gov/documents/2025/10/31/2025-19752/presidential-determination-on-refugee-admissions-for-fiscal-year-2026 [https://perma.cc/3TLH-D9P5]. See also Eric Bazail-Eimil, Trump Administration Slashes Number of Refugees, Prioritizes Afrikaners, Politico (Oct. 30, 2025, at 13:04 ET), https://www.politico.com/news/2025/10/30/trump-slashes-refugee-numbers-afrikaners-00630038 [https://perma.cc/89TZ-BKLX].
White House, National Security Strategy of the United States of America 25 (Nov. 2025), https://www.whitehouse.gov/wp-content/uploads/2025/12/2025-National-Security-Strategy.pdf [https://perma.cc/7X99-3BGM] (attributing Europe’s “loss of national identities and self-confidence” in part to “migration policies that are transforming the continent and creating strife”).
Noem v. Vasquez Perdomo, 146 S. Ct. 1, 3 (2025) (Kavanaugh, J., concurring).
Other policies that could be described as advancing an ethnonationalist agenda include Exec. Order No. 14,248, 90 Fed. Reg. 14005 (Mar. 25, 2025) (mandating proof of citizenship for voter registration, perpetuating a false narrative that immigrant populations are likely to commit electoral fraud); Exec. Order No. 14,148 (revoking nine Biden-era executive actions that created federal programs to advance racial equity); and Exec. Order No. 14,236, 90 Fed. Reg. 13037 (Mar. 14, 2025) (revoking an executive order mandating the reform of funding systems for Tribal Nations to fulfill federal trust and treaty responsibilities).
See generally Ryan Calo & Danielle K. Citron, The Automated Administrative State: A Crisis of Legitimacy, 70 Emory L.J. 797, 802 (2021) (asserting that executive agencies’ increasing delegation of administrative decision making to opaque automated systems undermines transparency and accountability, often without giving an opportunity for these decisions to be contested).
See, e.g., January 2025 AI Order, supra note 2; 2025 Use Memo, supra note 16 (repealing and replacing 2024 Use Memo, supra note 16); 2025 Acquisition Memo, supra note 16 (repealing and replacing 2024 Acquisition Memo, supra note 16).
Consolidated Appropriations Act, 2021, Pub. L. No. 116-260, div. U, tit. I, § 104(a)(3), 134 Stat. 1182, 2288.
Exec. Order No. 13,960, Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government, 85 Fed. Reg. 78939 (Dec. 3, 2020).
See generally James S. Pearson, Defining Digital Authoritarianism, 37 Phil. & Tech. 1, 3–4 (2024) (emphasizing that digital authoritarianism encompasses both the intentional exploitation of digital technologies for authoritarian ends and situations lacking such intent); see also Danielle Keats Citron & Ari Ezra Waldman, Digital Authoritarianism, U. Chi. L. Rev. Online (June 5, 2025), https://lawreview.uchicago.edu/online-archive/digital-authoritarianism [https://perma.cc/ZJ2P-5XJL] (describing how coordinated online abuse and false accusations chill speech in ways that mirror state-sponsored authoritarian tactics).
See generally Mo Gawdat, Mo Gawdat on the Rise of AI: What Makes AI Different from Traditional Software?, YouTube (Aug. 24, 2024), https://www.youtube.com/watch?v=WmL9-G4LZAs (explaining the process by which AI learns how to solve problems).
National Artificial Intelligence Initiative Act of 2020, 15 U.S.C. § 9401(3); see Nat’l Inst. of Standards & Tech., NIST AI 100-1, Artificial Intelligence Risk Management Framework (AI RMF 1.0) 1 (2023).
John S. McCain National Defense Authorization Act for Fiscal Year 2019, Pub. L. No. 115-232, § 238(g), 132 Stat. 1636 (2018) (defining “artificial intelligence” to include systems that perform tasks without significant human oversight).
Id. (also including systems that simulate human perception or cognition or use techniques like machine learning to approximate human tasks). The March 2024 OMB AI Use Memo supplemented 15 U.S.C. § 9401(3)’s outcomes-based definition of AI with § 238(g)’s more functional and inclusive definition, allowing it to capture more important use cases across agencies. 2024 Use Memo, supra note 16, at 26–27. This Article adopts this supplemented, broader definition, which covers both 15 U.S.C. § 9401(3) and § 238(g), recognizing that it has been used across both the Trump and Biden Administrations and provides a stable basis for comparing the legal and policy frameworks they advanced.
2024 Use Memo, supra note 16, at 27.
See Gavin Abercrombie et al., Affirming the Scientific Consensus on the Existence of Bias and Discrimination in AI Systems, AI Bias Consensus, https://www.aibiasconsensus.org/ [https://perma.cc/3444-H7HD] (affirming that bias in AI systems is well documented).
See generally Julia Angwin et al., Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks., ProPublica (May 23, 2016), https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing [https://perma.cc/6AQ7-Z7C9] (discussing findings that risk assessment scores “may be injecting bias into the courts”); Dena F. Mujtaba & Nihar R. Mahapatra, Ethical Considerations in AI-Based Recruitment, Inst. of Elec. & Elecs. Eng’rs, 2019, at 1 (revealing how biases in the hiring process “can easily carry over to AI-based approaches through the data used to train the algorithm”); Ziad Obermeyer et al., Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations, 366 Sci. 447, 447 (2019) (suggesting “that the choice of convenient, seemingly effective proxies for ground truth can be an important source of algorithmic bias in many contexts”); Joy Buolamwini & Timnit Gebru, Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification, 81 Procs. Mach. Learning Rsch. 1, 2–9 (2018) (“[M]achine learning algorithms can discriminate based on classes like race and gender.”); Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism 1–14, 64–109 (2018) (discussing how search engines like Google reinforce negative stereotypes).
See Nico Grant & Kashmir Hill, Google’s Photo App Still Can’t Find Gorillas. And Neither Can Apple’s., N.Y. Times (May 22, 2023), https://www.nytimes.com/2023/05/22/technology/ai-photo-labels-google-apple.html [https://perma.cc/7VCR-94QL].
See, e.g., Joy Adowaa Buolamwini, Gender Shades: Intersectional Phenotypic and Demographic Evaluation of Face Datasets and Gender Classifiers (Aug. 10, 2017) (M.S. thesis, Massachusetts Institute of Technology).
See, e.g., Obermeyer et al., supra note 150, at 448 (finding that, due to racial bias in an algorithm determining patient need, Black patients were assigned the same risk score as healthier White patients).
See, e.g., Leon Yin, Davey Alba & Leonardo Nicoletti, OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias, Bloomberg (Mar. 7, 2024), https://www.bloomberg.com/graphics/2024-openai-gpt-hiring-racial-discrimination/?leadSource=uverify%20wall [https://perma.cc/MHQ7-TYG8] (finding that ChatGPT “systematically produces biases that disadvantage groups based on their names” in hiring).
Weidinger et al., supra note 20, at 24.
Cf. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major & Shmargaret Shmitchell, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, in Proc. 2021 ACM Conf. on Fairness, Accountability, & Transparency 610, 614 (2021); Kathleen Bartzen Culver & Douglas M. McLeod, “Anti-Riot” or “Anti-Protest” Legislation? Black Lives Matter, News Framing, and the Protest Paradigm, 4 Journalism & Media 216, 225–27 (2023) (finding that news coverage of state legislation to control Black Lives Matter protests more commonly addressed fighting crime than free expression and race).
Rishi Bommasani et al., On the Opportunities and Risks of Foundation Models, arXiv (July 12, 2022), https://arxiv.org/pdf/2108.07258 [https://perma.cc/AWM3-Y6E2].
See Danielle Keats Citron, The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age 68–69 (2022).
Id.
Id.
Id. at 21; Muhammad Ali et al., Discrimination Through Optimization: How Facebook’s Ad Delivery Can Lead to Skewed Outcomes, Proc. ACM on Hum.-Comput. Interaction, Nov. 2019, at 1, 13–22.
Julia Rhodes Davis et al., Advancing Racial Equity Through Technology Policy, AI Now Inst. (Sep. 28, 2023), https://ainowinstitute.org/wp-content/uploads/2023/09/AINOW-Racial-Equity-Report-Sept-2023.pdf [https://perma.cc/X4Y3-VMRE] (“Current tech regulatory efforts remain race-blind, which allows the tech sector to continue to perpetuate racial equity harms.”).
See Angwin et al., supra note 150 (reporting that a crime prediction algorithm routinely mislabeled Black defendants as higher risk than White defendants, leading to disparate bail and sentencing outcomes). See generally Andrew Lee Park, Injustice Ex Machina: Predictive Algorithms in Criminal Sentencing, 67 UCLA L. Rev. 632 (2019) (observing that risk-assessment tools amplify racial disparities by overestimating the recidivism risk of Black defendants).
Andreas Jungherr, Artificial Intelligence and Democracy: A Conceptual Framework, 9 Soc. Media + Soc’y 1, 7 (2023).
See Bommasani et al., supra note 157, at 130; see also Okidegbe, supra note 20, at 1710–11 (discussing use of algorithms that lead to political repression); Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code 87–90 (2019); Anjalie Field et al., A Survey of Race, Racism, and Anti-Racism in NLP, ACL Anthology 1905, 1905–25 (2021); Timnit Gebru, Race and Gender, in The Oxford Handbook of Ethics of AI 252–69 (Markus Dirk Dubber, Frank Pasquale & Sunit Das eds., 2021) (explaining how data-driven decision-making creates negative feedback loops).
See generally Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Harv. Univ. Press 2016) (explaining how search engines determine what reaches our awareness).
See Rida Qadri et al., Risks of Cultural Erasure in Large Language Models, arXiv (Jan. 2, 2025), https://arxiv.org/html/2501.01056v1 [https://perma.cc/2SMA-L89F] (finding cultural erasure through omission of cultural groups in LLMs).
See Shana Lynch, How AI Is Leaving Non-English Speakers Behind, Stan. Rep. (May 19, 2025), https://news.stanford.edu/stories/2025/05/digital-divide-ai-llms-exclusion-non-english-speakers-research [https://perma.cc/46R9-P7H3].
See Timothy Garton Ash, Pluralism Is the Lifeblood of a Genuine Democracy, George W. Bush Presidential Ctr. (Feb. 23, 2021), https://www.bushcenter.org/publications/pluralism-is-the-lifeblood-of-a-genuine-democracy [https://perma.cc/A8XJ-9FEV] (noting that democratic community depends on including diverse identities, rather than enforcing homogeneity).
See David Wallace Adams, Education for Extinction: American Indians and the Boarding School Experience, 1875–1928, at 108–17 (2d ed. 2020) (documenting how federal and missionary-run boarding schools imposed cultural assimilation and erased tribal identity).
See U.S. Comm’n on C.R., The Excluded Student: Educational Practices Affecting Mexican Americans in the Southwest, Report III of Mexican American Education Study (1972) (documenting corporal punishment against Latino students for speaking Spanish in school).
See, e.g., Black Hair Belongs: LDF’s Work to End Race-Based Hair Discrimination, NAACP Legal Def. & Educ. Fund (July 12, 2023), https://www.naacpldf.org/wp-content/uploads/2023-07-12-Black-Hair-Belongs-larger-5-1.pdf [https://perma.cc/3VH6-8JP6] (finding that Black women are 1.5 times more likely to be sent home from work due to hair texture or style than other employees); D. Wendy Greene, Title VII: What’s Hair (and Other Race-Based Characteristics) Got to Do With It?, 92 U. Colo. L. Rev. 1265, 1266 (2021) (asserting that an employer barring hairstyles associated with a particular racial group like cornrows, dreadlocks, or braids violates Title VII).
See Benjamin, supra note 165, at 5–8, 11–12 (arguing that technologies designed by dominant cultural actors encode their biases, institutionalizing existing hierarchies under the guise of perceived neutrality).
See From Diversity to Pluralism, Harv. Univ.: The Pluralism Project, https://pluralism.org/from-diversity-to-pluralism [https://perma.cc/Q9JH-UU48] (defining pluralism as “the engagement that creates a common society from . . . [cultural and religious] diversity”); Antonia Pantoja et al., Towards the Development of Theory: Cultural Pluralism Redefined, 4 W. Mich. U. J. Socio. & Soc. Welfare 125, 130 (defining pluralism as a system in which “individuals . . . are able to form and develop communities along the differences of race, age, sex, [etc.],” and in which these communities are “open systems”); Joel K. Goldstein, Justice Brandeis and Civic Duty in a Pluralistic Society, 33 Touro L. Rev. 105, 129–30 (2017) (“Citizens must perform their duties in a pluralistic society . . . so that public decisions would reflect the views and wisdom of the collective body.”).
See Louis D. Brandeis, True Americanism, Address at Faneuil Hall, Boston (July 5, 1915) (“America has believed that in differentiation, not in uniformity, lies the path of progress . . . and it has prospered.”).
See Lily Hong Zhang et al., Cultivating Pluralism in Algorithmic Monoculture: The Community Alignment Dataset (June 10, 2025), https://openreview.net/forum?id=4hbwVQ6OMd [https://perma.cc/S2N6-SAVP] (observing how AI systems average away difference and converge on homogenized outputs even when human values diverge).
Jon Kleinberg & Manish Raghavan, Algorithmic Monoculture and Social Welfare, 118 Proc. Nat’l Acad. Sci. 1 (2021) (demonstrating that multiple decision-makers’ reliance on the same, highly accurate algorithm can decrease overall social welfare due to reduced diversity of approaches and convergence on homogenized rankings).
Danielle K. Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 Wash. L. Rev. 1, 13 (2014).
See Lisa Messeri & M. J. Crockett, Artificial Intelligence and Illusions of Understanding in Scientific Research, 627 Nature 49, 53–56 (2024) (arguing that the proliferation of AI tools in scientific practice fosters “scientific monocultures”—a narrowing of research viewpoints that obscures epistemic diversity and reduces innovation).
See generally Sorensen et al., supra note 21 (discussing how designing AI systems with diverse values and perspectives remains an open research question); see also, e.g., Gordon et al., supra note 21 (introducing a machine learning approach called “jury learning” to resolve disagreement about ground truth labels).
Young Mie Kim, Uncover: Strategies and Tactics of Russian Interference in US Elections 9 (2018). See also Darin E.W. Johnson, Russian Election Interference and Race-Baiting, 9 Colum. J. Race & L. 191 (2019).
Brandi Collins-Dexter, Butterfly Attack: Operation Blaxit, The Media Manipulation Casebook (July 8, 2021), https://mediamanipulation.org/case-studies/butterfly-attack-operation-blaxit/ [https://perma.cc/79WA-6J2T].
Gretel Kahn, AI, Lies and Conspiracy Theories: How Latinos Became a Key Target for Misinformation in the U.S. Election, Reuters Inst. for the Study of Journalism (Mar. 25, 2024), https://reutersinstitute.politics.ox.ac.uk/news/ai-lies-and-conspiracy-theories-how-latinos-become-key-target-misinformation-us-election [https://perma.cc/LK97-EGSN].
Madeline North, Generative AI Is Trained on Just a Few of the World’s 7,000 Languages. Here’s Why That’s a Problem – and What’s Being Done About It, World Econ. F., https://www.weforum.org/stories/2024/05/generative-ai-languages-llm/ [https://perma.cc/KZ7P-LWAT] (last updated Oct. 6, 2025).
See Ben M. Tappin et al., Quantifying the Potential Persuasive Returns to Political Microtargeting, 120 Proc. Nat’l Acad. Sci. 25 (2023).
Thomas H. Costello, Gordon Pennycook & David Rand, Durably Reducing Conspiracy Beliefs Through Dialogues with AI, 385 Science, Sep. 13, 2024, at 1.
Id. at 1, 7; see also Mary Phuong et al., Evaluating Frontier Models for Dangerous Capabilities, arXiv (Apr. 5, 2024), https://arxiv.org/pdf/2403.13793 [https://perma.cc/2PY3-6RJ5] (finding Gemini 1.0 models moderately persuasive in manipulating a person’s beliefs).
Costello et al., supra note 186, at 2–3.
Jared S. Moore et al., Expressing Stigma and Inappropriate Responses Prevents LLMs from Safely Replacing Mental Health Providers, arXiv (Apr. 25, 2025), https://arxiv.org/pdf/2504.18412 [https://perma.cc/LEY7-GGNR] (indicating that “LLMs are designed to be compliant and sycophantic”).
See Maggie Harrison Dupré, People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis,” Futurism (June 28, 2025, at 09:00 ET), https://futurism.com/commitment-jail-chatgpt-psychosis [https://perma.cc/EKH8-WHV7].
Miles Klee, He Had a Mental Breakdown Talking to ChatGPT. Then Police Killed Him, Rolling Stone (June 22, 2025), https://www.rollingstone.com/culture/culture-features/chatgpt-obsession-mental-breaktown-alex-taylor-suicide-1235368941/ [https://perma.cc/YDT5-VY4W].
Id.
Charlie Warzel & Matteo Wong, Elon Musk’s Grok Is Calling for a New Holocaust, The Atl. (July 8, 2025), https://www.theatlantic.com/technology/archive/2025/07/grok-anti-semitic-tweets/683463/ [https://perma.cc/45T9-3A5H]; see also Scott Detrow, What Happened When Grok Praised Hitler, NPR (July 12, 2025, at 17:00 ET), https://www.npr.org/2025/07/12/nx-s1-5462850/what-happened-when-grok-praised-hitler [https://perma.cc/NZC2-KNJE].
See Warzel & Wong, supra note 193.
Matt Brown & David Klepper, Fake Images Made to Show Trump with Black Supporters Highlight Concerns Around AI and Elections, L.A. Times (Mar. 8, 2024, at 11:55 PT), https://www.latimes.com/world-nation/story/2024-03-08/fake-images-made-to-show-trump-with-black-supporters-highlight-concerns-around-ai-and-elections [https://perma.cc/DS4D-5SCS].
Cf. Iason Gabriel, Artificial Intelligence, Values, and Alignment, 30 Mind & Machs. 411, 425 (2020) (“Designing AI in accordance with a single moral doctrine would . . . involve imposing a set of values and judgments on other people who did not agree with them.”).
Artificial Intelligence Act, Regulation 2024/1689, 2024 O.J. (L 1689) 1, art. 5(1)(a) (EU) (banning AI applications that pose unacceptable risk, including subconscious behavioral manipulation).
See, e.g., General Data Protection Regulation, Regulation 2016/679, 2018 O.J. (L 127) 1, art. 5(1) (EU) (stating data collection should be “adequate, relevant, and limited to what is necessary” for the intended purpose).
Thomas Christiano, Algorithms, Manipulation, and Democracy, 52 Canadian J. Phil. 109, 109 (2022).
Exec. Order No. 14,148 (directing agencies to end federal implementation of “DEI ideology” and rescinding a list of Biden-era executive actions, including Exec. Order 14,110, 88 Fed. Reg. 75191 (Oct. 30, 2023) on AI and Exec. Order 14,091, 88 Fed. Reg. 10825 (Feb. 16, 2023) on racial equity).
See 2023 AI Order, supra note 1. The 2023 AI Order also addressed AI risks in critical infrastructure, cybersecurity, and national security.
Id. at § 2(d) (stating that the federal government “cannot—and will not—tolerate the use of AI to disadvantage those who are already too often denied equal opportunity and justice.”).
Id. at § 7.2(a).
See id. at § 10.1(b)(iv); see also 2024 Use Memo, supra note 16, at 15 (setting mandatory “minimum risk management practices” for rights- and safety-impacting AI in fulfillment of Exec. Order 14,110 § 10.1(b)(iv); ordering watermarking and other safeguards for generative AI, as required by §§ 10.1(b)(viii)(A)–(D)); 2024 Acquisition Memo, supra note 16, at 4 (companion acquisition memo extending the § 10.1(b)(iv) minimum-practice regime to AI procurement, squarely implementing § 10.1(b)(viii)(D)).
See 2023 AI Order, supra note 1, at § 7.1(a)(i) (directing the Attorney General to support agencies in enforcing existing laws to address AI-related discrimination); see also Press Release, Off. of Pub. Aff., U.S. Dep’t of Just., Five New Federal Agencies Join Justice Department in Pledge to Enforce Civil Rights Laws in Artificial Intelligence (Apr. 4, 2024) (announcing a six-agency enforcement coalition under Executive Order 14,110 § 7.1(a)(i)).
2023 AI Order, supra note 1, at § 7.1(a)(iii).
See id. at § 7.1(a)(ii); see also Press Release, U.S. Dep’t of Just., Readout of the Justice Department’s Interagency Convening on Advancing Equity in Artificial Intelligence (Jan. 11, 2024) (summarizing the first meeting of federal civil-rights offices to coordinate efforts against algorithmic discrimination mandated by Executive Order 14,110).
2023 AI Order, supra note 1, at § 8(a), 88 Fed. Reg. at 75214; see also Press Release, Consumer Fin. Prot. Bureau (Oct. 24, 2024) (implementing Executive Order 14,110 § 8(a) and explaining that employers and vendor-supplied AI scoring tools must comply with Fair Credit Reporting Act accuracy, notice, and explainability duties, reinforcing transparency and bias-mitigation requirements); Press Release, Fed. Commc’ns Comm’n, FCC Makes AI-Generated Voices in Robocalls Illegal (Feb. 8, 2024) (extending the Telephone Consumer Protection Act to voice-cloned calls, arming regulators and states to curb AI-enabled fraud and privacy abuses and requiring providers to police third-party AI tools, in accordance with Executive Order 14,110 § 8(a)).
2023 AI Order, supra note 1, at § 7.1(b); see also U.S. Dep’t of Just., Artificial Intelligence and Criminal Justice: Final Report (Dec. 3, 2024), https://www.justice.gov/olp/media/1381796/dl [https://perma.cc/366V-PJRP] (fulfilling Executive Order 14,110 § 7.1(b) by evaluating AI uses across the criminal judicial system, and proposing efficiency gains alongside civil-rights safeguards).
2023 AI Order, supra note 1, at § 8(d); U.S. Dep’t of Educ., Off. of Educ. Tech., Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration (Oct. 2024), https://files.eric.ed.gov/fulltext/ED661924.pdf (explaining risk-mitigation, privacy, civil-rights, and equity guardrails, fulfilling Executive Order 14,110 § 8(d)); U.S. Dep’t of Educ., Off. of Educ. Tech., Designing for Education with Artificial Intelligence: An Essential Guide for Developers (July 2024), https://files.eric.ed.gov/fulltext/ED661949.pdf [https://perma.cc/YV5K-5TMB] [hereinafter Education with Artificial Intelligence] (setting design, evidence, safety, and civil-rights benchmarks to guide ed-tech developers in building trustworthy, student-centered AI, pursuant to Exec. Order 14,110 § 8(d)); U.S. Dep’t of Educ., Off. for C.R., Avoiding the Discriminatory Use of Artificial Intelligence (Nov. 2024), https://files.eric.ed.gov/fulltext/ED661946.pdf [https://perma.cc/3QEE-FDPK] (outlining Title VI, Title IX, and Section 504 scenarios to help schools remedy AI-driven discrimination, operationalizing Executive Order 14,110 § 8(d)’s nondiscrimination directive).
2023 AI Order, supra note 1, at § 7.3(a); see also U.S. Dep’t of Lab., Off. of Fed. Cont. Compliance Programs, Artificial Intelligence and Equal Employment Opportunity for Federal Contractors (2024), https://data.aclum.org/wp-content/uploads/2025/01/DOL_www_dol_gov_agencies_ofccp_ai_ai-eeo-guide.pdf [https://perma.cc/99JF-HU6F] (implementing Executive Order 14,110 § 7.3(a) by directing federal contractors to validate AI-based hiring tools, maintain human oversight, and routinely test for disparate impact).
2023 AI Order, supra note 1, at § 7.3(b); see Quality Control Standards for Automated Valuation Models, 89 Fed. Reg. 64538 (Aug. 7, 2024), https://www.govinfo.gov/content/pkg/FR-2024-08-07/pdf/2024-16197.pdf [https://perma.cc/AA72-G96C] (final inter-agency rule requiring bias testing for algorithmic home-appraisal systems, directly implementing § 7.3(b)(ii)).
2023 AI Order, supra note 1, at § 7.3(c)(i)–(ii); see also U.S. Dep’t of Hous. & Urb. Dev., Off. of Fair Hous. & Equal Opportunity, Guidance on Application of the Fair Housing Act to the Screening of Applicants for Rental Housing (2024), https://archives.hud.gov/news/2024/FHEO_Guidance_on_Screening_of_Applicants_for_Rental_Housing.pdf [https://perma.cc/2L9R-BY8T] (implementing Executive Order 14,110 § 7.3(c)(i) by explaining how AI-driven tenant-screening systems can produce unjustified disparate impacts and setting best practices); U.S. Dep’t of Hous. & Urb. Dev., Off. of Fair Hous. & Equal Opportunity, Guidance on Application of the Fair Housing Act to the Advertising of Housing, Credit, and Other Real Estate-Related Transactions through Digital Platforms (2024), https://archives.hud.gov/news/2024/FHEO_Guidance_on_Advertising_through_Digital_Platforms.pdf [https://perma.cc/XD75-5ECT] (implementing Executive Order 14,110 § 7.3(c)(ii) and warning that algorithmic ad-targeting and delivery can steer or exclude protected classes and recommending auditing procedures).
2023 AI Order, supra note 1, at § 8(b)(i)(C); see also U.S. Dep’t of Health & Hum. Servs., Strategic Plan for the Use of Artificial Intelligence in Health, Human Services, and Public Health (2025) (fulfilling Executive Order 14,110 § 8(b)(i)(C) by embedding equity principles and bias-mitigation frameworks for AI throughout the sector).
2023 AI Order, supra note 1, at § 8(b)(iii); see also Nondiscrimination in Health Programs and Activities, 89 Fed. Reg. 37522, 37669–72 (May 6, 2024) (to be codified at 45 C.F.R. pt. 92) (implementing Exec. Order 14,110 § 8(b)(iii) by barring discrimination “through the use of patient care decision support tools,” requiring covered entities to make reasonable efforts to detect and mitigate AI-driven bias, and committing the Office for Civil Rights to provide technical assistance and enforcement); U.S. Dep’t of Health & Hum. Servs., Off. for C.R., Ensuring Nondiscrimination Through the Use of Artificial Intelligence and Other Emerging Technologies (2025) (implementing Executive Order 14,110 § 8(b)(iii) and offering technical assistance to federally funded healthcare providers on Section 1557 compliance when using AI patient care decision-support tools).
2023 AI Order, supra note 1, at § 8(b)(iv); see also U.S. Dep’t of Health & Hum. Servs., Artificial Intelligence (AI) in Healthcare Safety Program – October 2024 Update (2024), https://pso.ahrq.gov/sites/default/files/wysiwyg/ai-healthcare-safety-program.pdf [https://perma.cc/LZ9G-PGQ2] (launching the AI in Healthcare Safety Program in fulfillment of Exec. Order 14,110 § 8(b)(iv)).
2023 AI Order, supra note 1, at §§ 7.2(b)(i)–(ii); see also U.S. Dep’t of Health & Hum. Servs., Plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by State, Local, Tribal, and Territorial Governments in Public Benefit Administration (2024) (fulfilling Exec. Order 14,110 § 7.2(b)(i) by outlining standards to ensure equitable access and bias monitoring when states and localities deploy AI in HHS-funded benefit programs); U.S. Dep’t of Agric., Food & Nutrition Serv., Supplemental Nutrition Assistance Program — Use of Advanced Automation in SNAP (2025) (implementing Executive Order 14,110 § 7.2(b)(ii) and directing state SNAP agencies to seek USDA approval before deploying AI tools, guarantee human appeals, and monitor equity outcomes).
2023 AI Order, supra note 1, at § 4.5(a)–(b); see also Nat’l Inst. of Standards & Tech., NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency (2024) (surveying methods for authenticating and labeling AI-generated synthetic content in fulfillment of § 4.5(a) of Executive Order 14,110).
2023 AI Order, supra note 1, at § 4.5(c).
Id. at §§ 9(a)(i)–(iv).
See supra introductory paragraphs in Part III for a discussion of some of the shortcomings of the 2023 AI Order.
See January 2025 AI Order, supra note 2.
Id. at §§ 2, 5.
Id. at § 1.
Cf. Derrick Bell, Faces at the Bottom of the Well: The Permanence of Racism 158–94 (Basic Books 1992) (tale in which extraterrestrials offer the United States unparalleled wealth in exchange for the country’s entire Black population, and white Americans agree to the trade).
Exec. Order No. 14,319. The White House also released two other AI Executive Orders on July 23, 2025. See Exec. Order No. 14,320, Promoting the Export of the American AI Technology Stack, 90 Fed. Reg. 35393 (July 23, 2025) (establishing the American AI Exports Program to promote and export full-stack U.S. AI technology packages); Exec. Order No. 14,318, Accelerating Federal Permitting of Data Center Infrastructure, 90 Fed. Reg. 35385 (July 28, 2025) (streamlining federal environmental reviews and permitting for large-scale AI data center projects).
Exec. Order No. 14,319.
Id. at § 1.
The July 2025 AI Order uses vague terms like “trustworthy AI” and “ideological neutrality” that are inadequately defined and likely void for vagueness. Id. at § 3. While the provision is likely legally unenforceable, it operates as an intimidation tactic that deters aspiring federal contractors from fine-tuning their models to prevent bias.
See supra Part II.A.1.
Noble, supra note 150; Safiya Noble, Google Has a Striking History of Bias Against Black Girls, Time (Mar. 26, 2018, at 16:30 ET), https://time.com/5209144/google-search-engine-algorithm-bias-racism/ [https://perma.cc/77FK-BERX].
Charlotte Colombo, TikTok Has Apologized for a ‘Significant Error’ After a Video That Suggested Racial Bias in Its Algorithm Went Viral, Bus. Insider (July 8, 2021, at 13:28 ET), https://www.businessinsider.com/tiktok-racism-algorithm-apology-creator-marketplace-ziggy-tyler-2021-7 [https://perma.cc/L6PK-DNP9].
Jacob Snow, Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots, ACLU (July 26, 2018), https://www.aclu.org/news/privacy-technology/amazons-face-recognition-falsely-matched-28 [https://perma.cc/JLX8-EUJG].
Leonardo Nicoletti & Dina Bass, Humans Are Biased. Generative AI Is Even Worse, Bloomberg (June 9, 2023), https://www.bloomberg.com/graphics/2023-generative-ai-bias/ [https://perma.cc/2E8H-KHW8] (examining how generative AI systems can amplify racial and gender bias “to extremes—worse than those found in the real world”).
See generally Valentin Hofmann et al., AI Generates Covertly Racist Decisions About People Based on Their Dialect, 633 Nature 147 (2024) (finding that language models produce covertly racist decisions based on speakers’ dialect).
Prabhakar Raghavan, Gemini Image Generation Got It Wrong. We’ll Do Better., Google (Feb. 23, 2024), https://blog.google/products/gemini/gemini-image-generation-issue/ [https://perma.cc/YHD8-CXK7] (explaining Gemini’s inaccurate depictions stemmed from overly broad tuning for diversity that failed to distinguish between general and context-specific prompts).
Training data, system design, and other factors all involve values and choices. Jillian Fisher et al., Political Neutrality in AI Is Impossible—But Here Is How to Approximate It, arXiv (June 3, 2025), https://arxiv.org/abs/2503.05728 [https://perma.cc/WV98-AB74] (arguing that politically neutral AI is unattainable but proposing practical strategies to approximate neutrality); Catherine Stinson, Algorithms Are Not Neutral: Bias in Collaborative Filtering, 2 AI & Ethics 763, 767 (2022) (explaining that even when a data set is “unbiased,” “homogenizing biases” in algorithms can lead to discriminatory outcomes).
J.B. Branch, Ilana Beller & Tyson Slocum, The Trump AI Action Plan Is Deregulation Framed as Innovation, Tech Pol’y Press (July 30, 2025).
Exec. Off. of the Pres., Off. of Sci. & Tech. Pol’y, Winning the AI Race: America’s AI Action Plan (July 2025), https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf [https://perma.cc/M542-GB4D] [hereinafter America’s AI Action Plan]. The AI Action Plan also proposed denying federal AI funding to States “with burdensome AI regulations” deemed “unduly restrictive to innovation.” Id. at 3.
Id. at 4.
Id. at 3. See Exec. Order No. 14,365; see also infra Part II.B.4.
Migration Pol’y Inst., United States Language & Education (2023), https://www.migrationpolicy.org/data/state-profiles/state/language/US [https://perma.cc/Y748-PX69] (detailing demographic data about limited English proficient population in U.S.).
See supra Part II.A.1.
America’s AI Action Plan, supra note 239, at 2, 4.
Id. at 2.
Exec. Order No. 14,365.
Id. at § 1.
Id. at §§ 1 & 2.
Id. at § 1.
See Leah Frazier, How Trump’s AI Executive Order Gets It Wrong on Civil Rights, Tech Pol’y Press (Dec. 19, 2025), https://www.techpolicy.press/how-trumps-ai-executive-order-gets-it-wrong-on-civil-rights/ [https://perma.cc/WCH6-HSMN] (“The convoluted and forced line of reasoning that the order uses . . . demonstrates that its attack on civil rights protections seems to be more of a vehicle for the president to expand his assault on disparate impact liability than it is about AI regulation.”); see also infra Part II.C.3.
Frazier, supra note 250.
Id.; Consumer Protections for Artificial Intelligence, Colo. Rev. Stat. § 6-1-1704 (2024).
Exec. Order No. 14,365, at § 2.
Id. at § 3.
Id. at §§ 4 & 5(a) (directing the Secretary of Commerce to identify and publish onerous State AI laws and issue a policy notice providing “that States with onerous AI laws” are ineligible for remaining funding under the “Broadband Equity Access and Deployment (BEAD) Program”).
Id. at § 5(b).
Id. at §§ 6 & 7.
See Moody v. NetChoice, LLC, 603 U.S. 707 (2024) (reviewing consolidated challenges to Florida and Texas statutes regulating large online platforms’ content-moderation practices and vacating and remanding for proper First Amendment analysis of the laws’ applications); Murthy v. Missouri, 603 U.S. 43 (2024) (holding that Missouri and Louisiana as well as individual plaintiffs lacked standing to pursue claims alleging that Biden Administration officials unlawfully coerced social-media platforms to remove or downrank content); H.B. 18, 88th Gen. Assemb., Reg. Sess. (Tex. 2023) (codified at Tex. Bus. & Com. Code §§ 509.001–.152) (the Securing Children Online Through Parental Empowerment Act, imposing requirements on covered digital service providers—including disclosures and restrictions aimed at protecting minors from harmful online content); S.B. 152, 65th Gen. Assemb., Reg. Sess. (Utah 2023) (codified at Utah Code §§ 13-63-101 to -501) (regulating social media companies by requiring age verification and parental consent for minors’ accounts and authorizing state enforcement for certain harms to minors). See also Andrew Atterbury, DeSantis: Trump’s AI Order “Can’t Preempt” States from Taking Action, Politico (Dec. 8, 2025, at 04:17 ET), https://www.politico.com/news/2025/12/08/desantis-trump-ai-order-states-action-00681301 [https://perma.cc/Z9C3-3HME] (reporting Florida Governor DeSantis’s comments that an executive order cannot preempt state AI regulation and that only Congress can do so, and that Florida is pursuing its own AI consumer protections despite federal policy).
Justin Hendrix & Cristiano Lima-Strong, US House Passes 10-Year Moratorium on State AI Laws, Tech Pol’y Press (May 22, 2025), https://www.techpolicy.press/us-house-passes-10year-moratorium-on-state-ai-laws/ [https://perma.cc/5UGC-2PDC]; Cecilia Kang, Defeat of a 10-Year Ban on State A.I. Laws Is a Blow to Tech Industry, N.Y. Times (July 1, 2025), https://www.nytimes.com/2025/07/01/us/politics/state-ai-laws.html [https://perma.cc/ZNY7-3AHK].
A.B. 331, 2023–24 Leg., Reg. Sess. (Cal. 2023) (requiring impact assessments for “automated decision tools” and prohibiting algorithmic discrimination in certain areas); Colo. Rev. Stat. Ann. §§ 6-1-1701 to -1707 (West 2024) (same).
See Press Release, Lawyers’ Comm. for C.R. Under L., Leading Civil Rights Organizations Respond to Executive Order Seeking to Bar States from Addressing Harms Caused by Artificial Intelligence (Dec. 12, 2025), https://www.lawyerscommittee.org/leading-civil-rights-organizations-respond-to-executive-order-seeking-to-bar-states-from-addressing-harms-caused-by-artificial-intelligence/ [https://perma.cc/BW8S-KG3D] (observing that Executive Order 14,365 punishes states for protecting residents from discriminatory AI and including statements in opposition to the Order from eight civil-rights organizations); Press Release, Nat’l Fair Hous. All., NFHA Denounces Harmful White House Order Attacking State Protections from AI Harms (Dec. 15, 2025), https://nationalfairhousing.org/nfha-denounces-harmful-white-house-order-attacking-state-protections-from-ai-harms/ [https://perma.cc/9EET-YXCU] (asserting that Executive Order 14,365 would “restrain states from using fair housing laws to protect residents,” leaving them “without protections against biased or harmful AI systems”).
See, e.g., January 2025 AI Order, supra note 2, at § 2 (instructing the Director of OMB to revise OMB AI guidance to conform to Section 2 of the Order, which establishes that “[i]t is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”). Under the AI in Government Act of 2020, the OMB is required to issue guidance to federal agencies on the use of artificial intelligence. See AI in Government Act of 2020, 40 U.S.C. § 11301 (2025) [hereinafter AI in Government Act of 2020].
Vought, 2025 Use Memo, supra note 16 (rescinding and replacing 2024 Use Memo, supra note 16).
See id. at 4 (“This memorandum provides guidance to agencies on how to innovate and promote the responsible adoption, use, and continued development of AI, while ensuring appropriate safeguards are in place to protect privacy, civil rights, and civil liberties, and to mitigate any unlawful discrimination . . . .”).
Both Memoranda also presume that an AI use is “high-impact,” or “rights-impacting,” if it provides language translation when responses are legally binding or for interactions that directly inform agency decisions or actions. Id. at 21–22; 2024 Use Memo, supra note 16, at 32–33. Neither Memo specifically addresses language equity or the rights of limited English proficient individuals. See Spencer Overton, Analyzing the Benefits of Artificial Intelligence to Racially Inclusive Democracy, 2026 Utah L. Rev. 1, 23–26 (forthcoming 2026), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5196382 [https://perma.cc/CV3J-WVCC] (discussing the promises and shortcomings of AI translation tools).
Vought, 2025 Use Memo, supra note 16, at 19 (defining “High-Impact AI”); Young, 2024 Use Memo, supra note 16, at 29 (defining “Rights-Impacting AI”).
Young, 2024 Use Memo, supra note 16, at 29.
Vought, 2025 Use Memo, supra note 16, at 19; Young, 2024 Use Memo, supra note 16, at 29.
Young, 2024 Use Memo, supra note 16, at 29.
Vought, 2025 Use Memo, supra note 16, at 15–16; Young, 2024 Use Memo, supra note 16, at 17.
Vought, 2025 Use Memo, supra note 16, at 15–16; Young, 2024 Use Memo, supra note 16, at 17–18.
Vought, 2025 Use Memo, supra note 16, at 17; Young, 2024 Use Memo, supra note 16, at 19–20, 23–24.
Vought, 2025 Use Memo, supra note 16, at 17; Young, 2024 Use Memo, supra note 16, at 22.
Young, 2024 Use Memo, supra note 16, at 21.
Id.
Id. at 22.
Id. at 24. See generally Aziz Z. Huq, A Right to a Human Decision, 106 Va. L. Rev. 611, 615–18 (2020) (discussing State decisions to opt out of AI-enabled decisions in favor of human review).
Young, 2024 Use Memo, supra note 16, at 21, 23.
Vought, 2025 Acquisition Memo, supra note 16, at 1 (repealing and replacing 2024 Acquisition Memo, supra note 16). Like the 2025 Use Memo, the 2025 Acquisition Memo largely tracks statutory requirements about OMB guidance and thus uses more measured language with regard to civil rights, risks, and safeguards than Executive Order 14,179.
Id. at 5; Young, 2024 Acquisition Memo, supra note 16, at 9–10.
Vought, 2025 Acquisition Memo, supra note 16, at 6; Young, 2024 Acquisition Memo, supra note 16, at 9.
Vought, 2025 Acquisition Memo, supra note 16, at 7–9; Young, 2024 Acquisition Memo, supra note 16, at 4–5.
Vought, 2025 Acquisition Memo, supra note 16, at 6, 11; Young, 2024 Acquisition Memo, supra note 16, at 9–11.
Young, 2024 Acquisition Memo, supra note 16, at 9–11.
Id. at 11.
Id. at 15.
Id. at 16.
The 2024 Acquisition Memo also acknowledged the economic benefits of diversity among AI vendors, and other Biden-era supplier diversity directives applied to procurement generally (including AI). See Young, 2024 Acquisition Memo, supra note 16, at 22; Memorandum from Jason S. Miller, Deputy Dir. for Mgmt., Off. of Mgmt. & Budget, Exec. Off. of the President, to the heads of executive departments and agencies (Dec. 2, 2021), https://www.whitehouse.gov/wp-content/uploads/2021/12/M-22-03.pdf [https://perma.cc/N6HM-6EHS] (establishing plan to increase share of contracts awarded to small and disadvantaged businesses to fifteen percent by FY 2025); Memorandum from Jason S. Miller, Off. of Mgmt. & Budget, Exec. Off. of the President, to the heads of executive departments and agencies (Feb. 17, 2023), https://www.whitehouse.gov/wp-content/uploads/2023/02/M-23-11-Creating-a-More-Diverse-and-Resilient-Federal-Marketplace.pdf [https://perma.cc/452T-T6FR] (building on M-22-03 to help agencies track their progress in recruiting and retaining new entrants to the Federal marketplace). The second Trump Administration, however, terminated each of these provisions. Exec. Order No. 14,281 at §§ 1, 2, 4, 5.
See Young, 2024 Acquisition Memo, supra note 16, at 21. Real harms can result due to the removal of guardrails to ensure the safety of generative AI. See Jeremy Bearer-Friend & Sarah Polcz, Sharing the Algorithm: The Tax Solution to Generative AI, 17 Colum. J. Tax. L. 1, 11–12 (2025) (arguing that algorithmic bias in generative AI tools such as chatbots can create racially discriminatory and offensive content that “normalize[s] bias at scale” given the prevalence of AI generated content).
Memorandum from Russell T. Vought, Dir., Off. of Mgmt. & Budget, Exec. Off. of the President, to the heads of executive departments and agencies (Dec. 11, 2025), https://www.whitehouse.gov/wp-content/uploads/2025/12/M-26-04-Increasing-Public-Trust-in-Artificial-Intelligence-Through-Unbiased-AI-Principles-1.pdf [https://perma.cc/R6L4-USZT] [hereinafter Increasing Public Trust Memo].
Exec. Order No. 14,319.
Increasing Public Trust Memo, supra note 290.
Id. at 3.
See, e.g., Amy B. Wang, Justice Department Sues Georgia County as Trump Pushes Debunked 2020 Election Fraud Claims, Wash. Post (Dec. 12, 2025), https://www.washingtonpost.com/politics/2025/12/12/justice-department-lawsuit-fulton-county-2020-election/ [https://perma.cc/R5Z5-AQG2] (reporting DOJ’s lawsuit against Georgia election officials and the Trump Administration’s continued claims of pervasive 2020 voter fraud despite the repeated rejection of these claims by courts and election officials).
See Darlene Superville, Trump Executive Order on Smithsonian Targets Funding for Programs with “Improper Ideology,” Associated Press (Mar. 27, 2025, at 21:45 ET), https://apnews.com/article/trump-smithsonian-executive-order-improper-ideology-558ebfab722f603e94e02a1a4b06ed4d [https://perma.cc/7KE2-8J9E] (reporting that the Trump Administration’s March 2025 executive order on the Smithsonian seeks to target and eliminate programs that it believes push “divisive narratives” and “improper ideology,” including race-related historical content the Administration portrays as ideologically driven). See also discussion supra Section I.B.
U.S. Equal Emp. Opportunity Comm’n, Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964, at 4–5 (May 18, 2023), https://data.aclum.org/storage/2025/01/EOCC_www_eeoc_gov_laws_guidance_select-issues-assessing-adverse-impact-software-algorithms-and-artificial.pdf [https://perma.cc/F7SF-484J] (advising employers to validate models, test for bias, and oversee third-party vendors to avoid Title VII disparate impact liability); Gone but Not Forgotten: Federal Laws Still Apply Despite AI Guidance Disappearance Act, Cooley (Feb. 21, 2025), https://www.cooley.com/news/insight/2025/2025-02-21-gone-but-not-forgotten-federal-laws-still-apply-despite-guidance-disappearance-act [https://perma.cc/JF8Y-28R8] (listing and analyzing EEOC AI documents removed after Andrea Lucas became acting chair); Patrick Thibodeau, Trump Actions Could Undermine EEOC AI Bias Efforts, TechTarget (Feb. 13, 2025), https://www.techtarget.com/searchhrsoftware/news/366619335/Trump-actions-could-undermine-EEOC-AI-bias-efforts [https://perma.cc/ZAY5-XQF7] (reporting that the Trump Administration “removed key documents, including AI guidance for employers,” from the EEOC site and warning that states may step in to regulate algorithmic discrimination).
U.S. Equal Emp. Opportunity Comm’n, supra note 296, at 6.
Id. at 8.
See Off. of Fed. Cont. Compliance Programs, supra note 211, at 1–2 (implementing Executive Order 14,110 § 7.3(a) and directing federal contractors to validate AI-based hiring tools, maintain human oversight, document and audit algorithms, and routinely test for disparate impact to protect equity and reduce bias); Michelle Capezza et al., Artificial Intelligence Executive Order: Workplace Implications, Mintz Insights (Feb. 12, 2025), https://www.mintz.com/insights-center/viewpoints/2226/2025-02-12-artificial-intelligence-executive-order-workplace [https://perma.cc/6LLF-UQLC] (explaining that OFCCP’s AI-hiring guide vanished after EO 14,110 was rescinded and advising contractors to continue testing AI tools for equity and bias).
Off. of Fed. Cont. Compliance Programs, supra note 211, at 2; Kathleen D. Parker, Erinn L. Rigney, Ninamarie C. Moore & Isabella F. Sparhawk, The Changing Landscape of AI: Federal Guidance for Employers Reverses Course with New Administration, K&L Gates Hub (Jan. 31, 2025), https://www.klgates.com/The-Changing-Landscape-of-AI-Federal-Guidance-for-Employers-Reverses-Course-with-New-Administration-1-31-2025 [https://perma.cc/84Q3-75N4] (reporting that President Trump’s revocation of Executive Order 14,110 prompted DOL/OFCCP to withdraw its AI-hiring guidance, stripping explicit safeguards against algorithmic bias and disparate-impact discrimination in federal-contractor hiring).
Quality Control Standards for Automated Valuation Models, 88 Fed. Reg. 40638 (June 21, 2023).
Letter from Jeffrey D. Little, Gen. Deputy Assistant Secretary for Hous., U.S. Dep’t of Hous. & Urb. Dev., to all FHA-Approved Mortgagees et al. (Mar. 19, 2025) (withdrawing ML 2021-27 and ML 2024-07 and abandoning FHA’s standardized bias-review framework); see also Jonathan Delozier, HUD Rescinds Appraisal Review Policies: Standardized Federal ROV Policy Had Been Viewed as a Tool to Combat Appraisal Bias, HousingWire (Mar. 20, 2025, at 17:24 ET), https://www.housingwire.com/articles/hud-rescinds-reconsideration-of-value-appraisal-review-policies/ [https://perma.cc/4EP6-STDY] (describing compliance implications of ML 2025-08 and confirming that the rollback “removes a layer of oversight aimed at combating appraisal bias,” signaling to lenders that fair-housing scrutiny of algorithmic valuations is no longer expected).
Letter from Jeffrey D. Little, Gen. Deputy Assistant Secretary for Hous., U.S. Dep’t of Hous. & Urb. Dev., to all FHA-Approved Mortgagees et al. (June 27, 2025) (striking appraisal-reporting fields, reducing data available to detect potential valuation bias); see Dan Bradley, HUD Announces Changes to FHA Appraisal Requirements, McKissock Learning (June 30, 2025), https://www.mckissock.com/blog/appraisal/hud-announces-changes-to-fha-appraisal-requirements/ [https://perma.cc/3TXF-9R9C] (summarizing ML 2025-18’s deletions).
Interpretive Rules, Policy Statements, and Advisory Opinions; Withdrawal, 90 Fed. Reg. 20084, 20085–87 (May 12, 2025) (listing each rescinded item).
Consumer Financial Protection Circular 2022-03: Adverse Action Notification Requirements in Connection with Credit Decisions Based on Complex Algorithms, 87 Fed. Reg. 35864, 35865 (June 14, 2022).
Alice S. Hrdy et al., CFPB Revokes Guidance in Sweeping Rollback of Agency Policies and Priorities, Morgan Lewis & Bockius LLP (June 4, 2025), https://www.morganlewis.com/pubs/2025/06/cfpb-revokes-guidance-in-sweeping-rollback-of-agency-policies-and-priorities [https://perma.cc/T4MZ-T6F2]; Douglas Gillison, US Consumer Watchdog to Scrap Scores of Financial Oversight Policies Issued Since 2011, Reuters (May 9, 2025, at 16:44 ET), https://www.reuters.com/sustainability/boards-policy-regulation/us-consumer-watchdog-scrap-scores-financial-oversight-policies-issued-since-2011-2025-05-09/ [https://perma.cc/EF6L-X7FE].
Mariam Baksh, Civil Society Groups Remain in NIST’s AI Consortium Despite New Agreement, Stressing Safety Needs, Inside AI Pol’y (Apr. 4, 2025), https://insideaipolicy.com/share/17923 [https://perma.cc/J3Y5-BC6X] (discussing the removal of terms related to AI safety); Knight, supra note 17 (reporting White House pressure to purge fairness language).
Anthony Kimery, As Trump’s AI Deregulation, Job Cuts Sink In, Industry Gets Spooked, Biometric Update (Mar. 17, 2025, at 19:24 ET), https://www.biometricupdate.com/202503/as-trumps-ai-deregulation-job-cuts-sink-in-industry-gets-spooked [https://perma.cc/NB6K-YXJD].
See Knight, supra note 17.
Anthony Ha, U.S. AI Safety Institute Could Face Big Cuts, TechCrunch (Feb. 22, 2025, at 13:22 PST), https://techcrunch.com/2025/02/22/us-ai-safety-institute-could-face-big-cuts/ [https://perma.cc/EVM2-TBYJ] (noting NIST “could fire as many as 500 staffers,” many of them at the AI Safety Institute).
See Madison Alder, Trump Administration Rebrands AI Safety Institute, FedScoop (June 4, 2025), https://fedscoop.com/trump-administration-rebrands-ai-safety-institute-aisi-caisi/ [https://perma.cc/D3A2-9XAH] (observing that the “new name signals a shift away from the term ‘safety’ and toward a desire for rapid development of the technology”).
See, e.g., U.S. Dep’t of Just., Artificial Intelligence and Civil Rights (Feb. 3, 2025), https://www.justice.gov/archives/crt/ai [https://perma.cc/E25M-V8HX] (archived content disclaimer noting that “[t]his is archived content from the U.S. Department of Justice website. The information here may be outdated . . . .”); Education with Artificial Intelligence, supra note 210 (guidance hosted on ERIC, a Department of Education archival site).
See Request for Nominations for Members to Serve on National Institute of Standards and Technology and National Technical Information Service Federal Advisory Committees, 90 Fed. Reg. 16501, 16506 (Apr. 18, 2025) (citing National Artificial Intelligence Initiative Act provision requiring the advisory subcommittee to assess “[b]ias, including whether the use of facial recognition by government authorities . . . is taking into account ethical considerations and addressing whether such use should be subject to additional oversight, controls, and limitations”).
Exec. Order No. 14,281 (directing all federal agencies to repeal, revise, or avoid the use of disparate impact liability to the maximum extent permitted by law); Exec. Order No. 14,148 (repealing Executive Order 14,110, which, in § 10.1(b)(iv), directed the OMB to issue guidance specifying minimum risk-management practices for agency uses of AI); Vought, 2025 Use Memo, supra note 16 (rescinding and replacing 2024 Use Memo, supra note 16, which required mitigating “significant disparities . . . across demographic groups” in AI outputs). But see Chiraag Bains, When Machines Discriminate: The Critical Role of Disparate Impact in AI Accountability, Ctr. for C.R. & Tech. (2026), https://civilrights.org/wp-content/uploads/2026/01/SNAPSHOT-When-Machines-Discriminate_The-Critical-Role-of-Disparate-Impact-in-AI-Accountability.pdf [https://perma.cc/XR32-4WA3], at 39–42 (asserting that the Trump Administration’s attack on disparate impact rests on at least three legal errors: (1) misstating the doctrine as imposing liability for “any” outcome disparities, despite “significant” disparity and burden-shifting requirements; (2) conflating disparate impact with quotas or racial balancing, even though the doctrine does not require parity and Title VII bans quotas; and (3) claiming disparate impact is unconstitutional contrary to doctrine establishing the constitutionality of measures that are race-neutral in form but designed to create equal opportunity for people of color).
Chiraag Bains, The Legal Doctrine That Will Be Key to Preventing AI Discrimination, Brookings (Sep. 13, 2024), https://www.brookings.edu/articles/the-legal-doctrine-that-will-be-key-to-preventing-ai-discrimination/ [https://perma.cc/4WFB-2ZZK] (observing that “[d]isparate impact liability is all the more crucial because today’s transformer-based AI models are still black boxes. The developers . . . don’t understand exactly how the models produce sophisticated and creative answers. The complexity and opacity of the systems’ inner workings mean that we won’t know whether or how they are considering protected characteristics.”).
See id. (asserting that intent-based antidiscrimination frameworks are ill-suited to algorithmic harms because they typically arise from data, design, and deployment choices rather than conscious animus, that disparate impact doctrine is therefore the most effective legal tool for detecting and addressing AI bias, and proposing “a new disparate impact law that covers any use of AI that impacts people’s rights and opportunities”); Barocas & Selbst, supra note 28, at 701 (“Where there is no discriminatory intent, disparate impact doctrine should be better suited to finding liability for discrimination in data mining.”).
Rachel Levinson-Waldman & José Guillermo Gutiérrez, DHS Must Overhaul Its Flawed Automated Systems, Brennan Ctr. for Just. (Oct. 24, 2023), https://www.brennancenter.org/our-work/analysis-opinion/dhs-must-overhaul-its-flawed-automated-systems [https://perma.cc/C332-R2PF] (noting that CBP’s Automated Targeting System is an “algorithmically powered analytical database” that creates risk profiles used to trigger additional government scrutiny).
See AI & Homeland Security, supra note 8; TSA Secure Flight Program, supra note 8.
See Loricchio, supra note 9 (reviewing IRS use of AI and analytics and reporting on a study finding that IRS audit-selection algorithms made Black taxpayers who claimed the Earned Income Tax Credit three to five times more likely to be audited than non-Black taxpayers). Racial bias in IRS enforcement is not confined to audit selection, and the adoption of AI could compound the bias already baked into many other IRS enforcement procedures, including summonses, civil penalties, appeals, collection due process, and criminal tax referrals to the Department of Justice. See Jeremy Bearer-Friend, Colorblind Tax Enforcement, 97 NYU L. Rev. 1, 27–47 (2022) (documenting how facially race-neutral IRS enforcement mechanisms produce racially disparate outcomes, despite IRS claims to the contrary, and identifying seven tax enforcement settings especially vulnerable to this bias).
See Bains, supra note 314 (explaining that algorithmic systems routinely generate discriminatory outcomes without explicit consideration of race, that proxy variables and data patterns make intent nearly impossible to prove, and that disparate impact analysis is often the only viable mechanism for identifying and remedying unjustified algorithmic disparities).
See generally Dave Owen & Gaby Salazar Kitner, Mapping Environmental Justice, Minn. L. Rev. (forthcoming 2026) (explaining the reactions of courts and government agencies to environmental-justice mapping).
Julia Rhodes Davis et al., supra note 162, at 3 (“Current tech regulatory efforts remain race-blind, which allows the tech sector to continue to perpetuate racial equity harms.”).
Barocas & Selbst, supra note 28, at 674.
Rescinding Portions of Department of Justice Title VI Regulations to Conform More Closely with the Statutory Text and to Implement Executive Order 14,281, 90 Fed. Reg. 57141 (Dec. 10, 2025), https://www.federalregister.gov/documents/2025/12/10/2025-22448/rescinding-portions-of-department-of-justice-title-vi-regulations-to-conform-more-closely-with-the [https://perma.cc/24KT-PFYX].
See id.
Letter from Kyle Hauptman, Chairman, Nat’l Credit Union Admin., to Federally Insured Credit Unions (Sep. 4, 2025), https://ncua.gov/regulation-supervision/letters-credit-unions-other-guidance/removal-disparate-impact [https://perma.cc/666S-9HBT].
Building on the work of Ryan Calo and Danielle Citron, this Article argues that restoring legitimacy in government AI requires not only structural reform, but explicit statutory commitments to fairness, pluralism, authenticity, and autonomy. See Calo & Citron, supra note 139, at 807, 835–45 (calling for structural reforms to restore democratic legitimacy in the administrative “automated state”).
See AI in Government Act of 2020, supra note 262.
See 2023 AI Order, supra note 1, at § 10.2.
See Mauni Jalali, Founders as Administrators: Historical Precedents for the Modern Regulatory State, Yale J. on Reg. Notice & Comment (Apr. 24, 2025) (noting that presidents throughout history, regardless of their political views, “respected [independent agencies commissioners’] statutory removal protections”).
See 2023 AI Order, supra note 1, at §§ 4.5(a)–(b).
See id. § 9(a).
See id. § 9.
2024 Acquisition Memo, supra note 16, n.42.
See AI in Government Act of 2020, supra note 262; 2024 Use Memo, supra note 16, at 29.
Promoting Responsible Evaluation and Procurement to Advance Readiness for Enterprise-wide Deployment for Artificial Intelligence (PREPARED for AI) Act, S. 4495, 118th Cong. (2024) [hereinafter PREPARED for AI Act] (establishing a comprehensive governance framework for AI procurement in federal agencies to minimize harms from high-risk AI); AI Leadership in Equitable Accountability and Development (AI LEAD) Act, H.R. 8756, 118th Cong. (2024) (requiring federal agencies to inventory AI use cases, classify risk levels, and ensure transparency, privacy, and nondiscrimination); Federal A.I. Governance and Transparency Act of 2024, H.R. 7532, 118th Cong. (2024) (proposing a comprehensive framework for federal agency oversight, procurement, and governance of AI; mandating transparency, safeguards for privacy and civil rights, risk assessments, and independent evaluations).
The Equitable AI Act’s requirements would likely be inserted into the Federal Acquisition Regulation’s part on the acquisition of information technology. See Acquisition of Information Technology, 48 C.F.R. pt. 39 (2024).
See Bains, supra note 314 (explaining that current disparate impact law is inadequate to address algorithmic discrimination, and proposing a federal statute that would prohibit discrimination—including disparate impact—in the deployment of AI that informs decisions that impact people’s rights and opportunities in a broad range of contexts and includes a private right of action).
This language is similar to provisions in Executive Order No. 13,166, 65 Fed. Reg. 50121 (Aug. 16, 2000), which required federal agencies to take action to accommodate the needs of LEP persons. The Trump Administration revoked this order. See Exec. Order No. 14,224, Designating English as the Official Language of the United States (Mar. 1, 2025).
A critical component of preventing covert manipulation is data protection, an expansive topic that is beyond the scope of this Article. For important models of data protection, see American Privacy Rights Act of 2024, H.R. 8818, 118th Cong. (2024); Maryland Online Data Privacy Act of 2024, H.D. 567, 2024 Reg. Sess. (Md. 2024); California Consumer Privacy Act of 2018, Cal. Civ. Code §§ 1798.100–1798.199.100 (West 2024).
Cf. 2024 O.J. (L 1689) 1, art. 5 (prohibiting AI systems that deploy subliminal techniques or exploit vulnerabilities to materially distort behavior in ways that cause harm).
Cf. AI Civil Rights Act of 2024, infra note 349, at § 2(3) (listing examples of “consequential action”).
Cf. id. § 102(a)(1)(A) (requiring preliminary evaluations of plausible harm from an AI use).
Cf. id. § 102(a)(1)(B)(i).
A similar two-step process is described in the AI Civil Rights Act of 2024. Id. § 102(a)(2) (detailing the process for a pre-deployment evaluation of an algorithm). Unlike the Equitable AI Act, the AI Civil Rights Act covers private entities, is ambiguous as to whether it applies to federal agencies, and does not explicitly address language assistance and other homogenization issues, deepfakes, and behavioral manipulation. The AI Civil Rights Act is an instructive framework because it has been endorsed by over 80 civil rights groups. See Press Release, Senator Edward J. Markey, Senator Markey Celebrates 54 New Endorsements of His Comprehensive AI Civil Rights Legislation (Nov. 21, 2024). The National Environmental Policy Act of 1969 also details a similar process. See 42 U.S.C. §§ 4332(C), 4336 (requiring federal agencies to first assess whether a proposed action is likely to significantly affect the environment and, if so, to prepare a detailed environmental impact statement).
Cf. AI Civil Rights Act of 2024, infra note 349, at § 102(a)(2) (detailing the process of a full pre-deployment evaluation); Chief Info. Officer, Cybersecurity Maturity Model Certification, U.S. Dep’t of War, https://dodcio.defense.gov/cmmc/About/ [https://perma.cc/WS8G-RF7P] (describing a program to assess defense contractor cybersecurity compliance). See generally Pauline T. Kim, Auditing Algorithms for Discrimination, 166 U. Pa. L. Rev. Online 189 (2017) (stressing the importance of fairness audits and transparency in countering algorithmic discrimination).
This pre-deployment testing would be analogous to “first article” testing. See 48 C.F.R. § 9.3 (2024) (requiring certain contractors to submit a sample product for testing and approval before full-scale production).
PREPARED for AI Act, supra note 336, at § 7(d) (requiring “testing the artificial intelligence in an operational, real-world setting with privacy, security, civil rights, and civil liberty safeguards to . . . determine . . . the likelihood and impact of adverse outcomes occurring during use”).
Cf. Artificial Intelligence Civil Rights Act of 2024, S. 5152, 118th Cong. § 102(a)(2) (2024) [hereinafter AI Civil Rights Act of 2024] (detailing the process of a full pre-deployment evaluation).
See Lawyers’ Comm. for C.R. Under L., Online Civil Rights Act: Model AI Bill 15–16 (Dec. 2023) (providing a means of opting out of use of “covered algorithm with regard to a consequential action” and providing a human review alternative); AI Civil Rights Act of 2024, supra note 349 (mandating notice and a process to appeal to a human reviewer when an algorithm has made a “consequential action”).
PREPARED for AI Act, supra note 336, at § 7(i)(1).
Cf. 2024 Use Memo, supra note 16, at 5 (requiring agencies to annually inventory all AI use cases and to report risks of inequitable outcomes and mitigation plans in safety and rights-impacting AI systems).
See Sonia K. Katyal, Private Accountability in the Age of Artificial Intelligence, 66 UCLA L. Rev. 54, 108–29 (2019) (proposing whistleblower protections to protect civil rights in the AI context).
U.S. Gov’t Accountability Off., GAO-25-107653, Artificial Intelligence: Generative AI Use and Management at Federal Agencies (2025) (reporting that many agencies reported “challenges in attracting and developing individuals with expertise in generative AI”); Jessica Tillipman & Steven L. Schooner, FEATURE COMMENT: Institutional Amnesia and the Neglect of the Federal Acquisition Workforce, 67 Gov’t Contractor ¶ 182, 3–4 (2025) (observing that the “severe shortage of [federal government] staff with AI expertise” exacerbates risks to national security and critical infrastructure).
See Mathieu Pollet, EU Views Break from US as ‘Unrealistic’ Amid Global Tech Race, Politico (Apr. 30, 2025, at 04:01 CET), https://www.politico.eu/article/eu-us-big-tech-companies-trade-international-digital-strategy-europe-competitiveness/ [https://perma.cc/HDZ3-Y6XB] (noting call by member of European Parliament for the EU to “end its debilitating dependence on American tech groups”).
See, e.g., Rashida Richardson, Racial Segregation and the Data-Driven Society: How Our Failure to Reckon with Root Causes Perpetuates Separate and Unequal Realities, 36 Berkeley Tech. L.J. 1051, 1088–89 (2021) (arguing that “the data-driven technology sector . . . needs a transformative justice framework,” including “policy interventions” to cure algorithmic bias).
AI R&D Investments FY 2019–FY 2025, Networking & Info. Tech. R&D Program (2025) (reporting a FY25 budget request for AI spending of $3.316 billion); 2025 AI Index Report, Stanford Inst. for Hum.-Centered A.I. (2025) (reporting that U.S. private-sector investment in AI reached $109.1 billion in 2024).
Jessica Tillipman, Buying Blind: Corruption Risk and the Erosion of Oversight in Federal AI Procurement, 55 Pub. Cont. L.J. (forthcoming Winter 2026).
See generally Nari Johnson, Elise Silva & Hoda Heidari, Want Accountable AI in Government? Start with Procurement, Tech. Pol’y Press (July 15, 2024) (arguing that governments can promote equitable AI by embedding requirements into procurement processes); AI Procurement in a Box: AI Government Procurement Guidelines, World Econ. F. (2020) (providing a model framework for public procurement of AI).
The Federalist No. 10 (James Madison) (warning against the concentration of power by factions).
Johnson et al., supra note 359.
See Christine Mastromonaco et al., California’s AI Laws Are Here—Is Your Business Ready?, Pillsbury (Feb. 7, 2025), https://www.pillsburylaw.com/en/news-and-insights/california-ai-laws.html [https://perma.cc/29ZW-MSLL] (describing eighteen AI bills signed into law in California that contain various mandates).
Colo. Rev. Stat. Ann. §§ 6-1-1701 to -1707 (West 2024); Cobun Zweifel-Keegan & Andrew Folks, The Colorado AI Act: What You Need to Know, Int’l Ass’n of Priv. Profs. (May 21, 2024), https://iapp.org/news/a/the-colorado-ai-act-what-you-need-to-know [https://perma.cc/24ZV-8354] (“The Colorado AI Act focuses on . . . discrimination caused by AI in the context of a consequential decision.”).
N.Y.C. Admin. Code §§ 20-870 to -874 (2021); N.Y.C. Dep’t of Consumer & Worker Prot., Notice of Adoption of Final Rule (Apr. 6, 2023) (prohibiting use of automated employment decision tools unless certain bias audit requirements are met).
See Francesca Palmiotto, The AI Act Roller Coaster: The Evolution of Fundamental Rights Protection in the Legislative Process and the Future of the Regulation, 16 Eur. J. Risk Reg. 770, 778 (2025) (describing the ways the EU AI Act protects fundamental rights, including its “provisions on quality of training data and bias prevention”).
See, e.g., Daniel A. Mazmanian, John L. Jurewitz & Hal T. Nelson, State Leadership in U.S. Climate Change and Energy Policy: The California Experience, 29 J. Env’t & Dev. 51, 69 (2020) (describing California’s role as a leader in the shaping of federal environmental policy); The Journey to Marriage Equality in the United States, Hum. Rts. Campaign, https://www.hrc.org/our-work/stories/the-journey-to-marriage-equality-in-the-united-states [https://perma.cc/VS7L-B4M4] (last visited Jan. 28, 2026) (tracing the history of same-sex marriage legalization, from state to nationwide).
See January 2025 AI Order, supra note 2 (“This order revokes certain existing AI policies and directives that act as barriers to American AI innovation . . . .”).
See, e.g., Alexandra Alper & Jody Godoy, AI Execs Say US Must Increase Exports, Improve Infrastructure to Beat China, Reuters (May 8, 2025, at 12:46 ET), https://www.reuters.com/world/us/us-ai-execs-give-congress-policy-wishlist-beating-china-2025-05-08/ [https://perma.cc/YM2M-WUDN] (reporting on Senate hearing discussing the risks of AI restrictions on development).
Article 5 of the Act prohibits the marketing and use of AI that manipulates or deceives users, or that exploits the vulnerabilities of marginalized groups. 2024 O.J. (L 1689) art. 5(1).
See Benjamin Cedric Larsen, The Geopolitics of AI and the Rise of Digital Sovereignty, Brookings (Dec. 8, 2022), https://www.brookings.edu/articles/the-geopolitics-of-ai-and-the-rise-of-digital-sovereignty/ [https://perma.cc/3RLF-AHXC] (describing efforts in the EU and China to achieve digital sovereignty).
See, e.g., Meese v. Keene, 481 U.S. 465, 479–80 (1987) (upholding statute’s requirement that “political propaganda” be labeled as such, because the law supported the public’s right to know the source of materials); Doe v. Reed, 561 U.S. 186, 196–98 (2010) (holding that states’ choice to disclose referendum petition signatures is permissible because it significantly furthers the State’s interest in preserving electoral integrity); Zauderer v. Off. of Disciplinary Couns., 471 U.S. 626, 651–52 (1985) (upholding requirement that lawyers disclose contingency-fee arrangements in advertising).
Rust v. Sullivan, 500 U.S. 173, 194–95 (1991) (upholding Title X regulations barring federally funded family planning clinics from abortion counseling or referrals); United States v. Am. Libr. Ass’n, 539 U.S. 194, 210–14 (2003) (upholding under the First Amendment a statute’s requirement that public libraries receiving federal grants install internet filters).
Agency for Int’l Dev. v. All. for Open Soc’y Int’l, 570 U.S. 205, 214–16 (2013).
January 2025 AI Order, supra note 2, § 1 (asserting that the United States “must develop AI systems that are free from ideological bias” as a rationale for revoking AI safeguards).
468 U.S. 609 (1984).
600 U.S. at 181.
Coal. for TJ, 68 F.4th at 874, 885–86.
See, e.g., Presidential Comm’n on the Sup. Ct. of the U.S., Final Report of the Presidential Commission on the Supreme Court of the United States (2021) (providing a comprehensive analysis of arguments for and against Supreme Court reforms). See generally Confusion and Clarity in the Case for Supreme Court Reform, 137 Harv. L. Rev. 1634 (2024) (“[Today’s Court is] hampering efforts to promote racial equality. Proponents of reform should be clear that this is why they object—and defenders of the Court’s status should be made to answer these objections directly.”).
See Carol Anderson, Donald Trump Is the Result of White Rage, Not Economic Anxiety, Time (Nov. 16, 2016, at 12:22 ET), https://time.com/4573307/donald-trump-white-rage/ [https://perma.cc/6H37-PZU9] (exploring the rise of Donald Trump as backlash against advancements in racial equity); Terry Smith, White Backlash in a Brown Country, 50 Val. U. L. Rev. 89, 92, 98 (2015) (describing the “adverse reaction of whites to the progress of members of a non-dominant group” as “a recurring and transformative feature of American politics”).
