Professor Suzette Malveaux,* Foreword: 2024 Ira C. Rothgerber Jr. Conference: Artificial Intelligence and the Constitution
For the first time, the Byron R. White Center for the Study of American Constitutional Law and the Silicon Flatirons Center for Law, Technology, and Entrepreneurship came together to hold a joint conference on Artificial Intelligence (AI) and the Constitution. We decided to merge our annual conferences[1] to explore one of the most important and consequential intersections of our time: AI and the Constitution—topics we study and deeply care about. During this time of rapid and profound technological transformation, it is even more imperative that we come out of our academic silos and work together.
The partnership was destined. The Byron R. White Center believes that an informed and engaged community is essential to our constitutional democracy. As we share on our website, a core piece of our mission is to “support excellence in Constitutional legal scholarship . . . and expand public knowledge and informed discussion about the Constitution.”[2] Similarly, the mission of Silicon Flatirons includes initiating, sustaining, and elevating the conversation about technology law, policy, and entrepreneurship.[3] Our joint AI and the Constitution conference aimed to do just that.
AI has advanced rapidly in the last couple of years (even months), permeating every dimension of our lives. This is particularly true when it comes to the law. On the one hand, AI has brought about tremendous progress, which may enable large language models (LLMs) to perform legal work and analysis that opens the doors of justice to the millions of marginalized Americans shut out of the civil justice system. LLMs can quickly, cheaply, and often reliably conduct research, summarize the law, draft pleadings, edit documents, and answer questions—often at the level of many junior associates. On the other hand, this nascent tsunami of AI advances may imperil a number of our most cherished constitutional rights, compromising voting, privacy, property rights, free speech, safety, and employment. The dangers include purging eligible voters from the rolls, targeting vulnerable communities through location technology, denying artists’ intellectual property rights to their creative works, and collecting and using consumer data without consent.
Through a series of panel discussions, our joint conference sought to explore various emerging constitutional issues implicated by AI’s rapid progress. We were fortunate to have scientists, constitutional law scholars, lawyers, policymakers, community members, and law students from around the country join us to grapple with these tough questions.
The first panel examined how the right to privacy should and can be protected in light of AI’s exponential growth. The panelists considered the topic through both a wide-angle and a narrow lens. They engaged each other on how to define and protect “sensitive information”; protect the data privacy of marginalized groups vulnerable to exploitation, oversurveillance, and political manipulation; curtail law enforcement’s misuse of facial recognition technology; and use local laws to check algorithms used in hiring decisions.
The second panel examined the opportunities and limitations of using AI when interpreting the Constitution and other legal documents. Potential benefits include greater efficiency, objectivity, consistency, accuracy, clarity, and less bias. These benefits, however, must be weighed against potential detriments, which include eliminating beneficial discretion and human judgment, masking process, and contributing to the illusion of AI objectivity. The panelists grappled with the nature of legal interpretation and the role of human judgment, tethering the debate to familiar ones between textualists, originalists, and legal pragmatists.
The third panel explored the question of whether AI‑generated speech is protected by the First Amendment. While this constitutional amendment has protected human (and corporate) speakers and listeners, what role should it play when it comes to speech generated by machines? The panelists also considered to what extent free speech protection applies to the creation and dissemination of AI‑generated material, and whether individuals have a First Amendment right to such material. Finally, the panelists grappled with the thorny question of what consequences should follow from AI speech that is inaccurate, defamatory, or misleading (should they mirror those for human speech?), and what legal frameworks can address this.
In sum, the Conference itself was an incredible gathering of some of the leading thinkers on the topic of AI and the Constitution. This Symposium Issue offers a sampling of some of the brilliant ideas shared at the Conference about the power and promise of AI at this critical juncture.
This Issue sets the table with Professor Surden’s keynote address.[4] Addressing an audience ranging from scientists to constitutional scholars to law students to lay persons, he offered an accessible and engaging primer on AI that appealed to everyone (not an easy feat!). He shared the history and evolution of AI and its particularly exciting trajectory over the last couple of years. At Professor Surden’s prompting, ChatGPT using GPT‑4 drafted in real time a well‑reasoned motion to dismiss in response to a complaint, demonstrating to the audience its current power and awesome potential. What would have been impossible just a year ago is now real. But, as Professor Surden warned, the technology is not infallible. He explained how AI can and must be used responsibly in the legal profession, urging that AI be supervised with the same care one would give an excellent third‑year law student. Finally, Professor Surden posed the question of whether ChatGPT, now and in future iterations, should go so far as to interpret the Constitution or other legal documents. He wisely counseled us to take future predictions that extend beyond a few years with a grain of salt before sharing some of his own modest ones. He ended on a note of cautious optimism, urging us to understand AI’s limitations while leaning into this moment and opportunity to make AI a tool to improve the law for all.
With Professor Surden, Andrew Coan, Associate Dean and constitutional law scholar, tackled the normative question: Just because LLMs can perform constitutional analysis, should they? Should judges and regulators cede this ground? In their article, Artificial Intelligence and Constitutional Interpretation, Professors Coan and Surden warn how LLMs’ veneer of neutrality and objectivity can trick unsuspecting judges and government decision‑makers into missing subtleties and value‑laden choices.[5] For example, LLMs’ polished, well‑articulated, confident pronouncements fail to reveal that their answers may be influenced by technicalities, such as inherent randomness, language selection, and user‑prompting choices. Professors Coan and Surden situate this current risk in the larger debate between legal formalists and legal realists. The former include today’s proponents of textualism and originalism, who may now see AI as the next frontier in eliminating the subjectivity of human judging. The latter are their critics, who deride the methodology as “mechanical jurisprudence,” unmoored from reality.[6] The authors pose the provocative inquiry: “Is ChatGPT the interpretation machine that formalists have been dreaming of for two hundred years?”[7]
Their article examines the potential uses of LLMs in constitutional interpretation, the advantages and disadvantages of those uses, and how this balance shifts according to institutional context and particular use. Using simple simulations, the authors conclude “that there is no avoiding the burdens of judgment.”[8] Professors Coan and Surden neither idolize nor demonize LLMs or humans; they simply compare them—not to see who is a better constitutional interpreter, but rather to see if the former can diminish “the burdens of normative judgement.”[9] As it turns out, the answer is no; the burden is merely displaced. The article achieves its stated overarching goal: “to initiate a conversation between experts in constitutional interpretation and experts in artificial intelligence.”[10] Whether in the legal formalist or legal realist camp, the reader is challenged to imagine what is possible at the intersection of constitutional law and AI.
In the Issue’s next article, Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants, Professor Chris Chambers Goodman illustrates the clash between constitutional rights and the emergence of AI in employment decisions.[11] Constitutional rights to privacy, safety, dignity, and due process are all at stake, sacrificed to expediency and cost savings in hiring decisions. With the ease of a click, job‑seekers can apply for jobs online, flooding potential employers with an avalanche of résumés like never before. Separating the wheat from the chaff has become increasingly difficult, leading employers to use AI to cull through candidates and even to make final selections. The cost savings and efficiency are undoubtedly appealing. Employers also tout AI for its role in eradicating bias in the hiring process through the use of algorithms that detect “merit.”[12]
Professor Goodman challenges us to consider how bias is embedded in AI, magnifying the impact of bias on an untold scale. She explores and critiques efforts being made to curb such bias. For example, the Equal Employment Opportunity Commission (EEOC) has issued guidance urging employers to assess the adverse impact of AI on selection procedures under Title VII of the Civil Rights Act of 1964.[13] The Biden Administration has issued an executive order, 2023 AI EO, which provides safety, security, and privacy protection standards.[14] Moreover, state and local governments have drafted AI regulations on the basis of audits, assurances, and ethical risk and algorithmic bias assessments.[15]
Professor Goodman posits that one reason for AI bias is that data pools rely on large volumes of data gathered over long periods of time, which reflect trends no longer accurate or fair today. Where racial minorities have been under‑ or over‑represented, such disparity is frozen in time, with LLMs making predictions based on historic (or pre‑historic!) times. Professor Goodman reveals how machines learn “in ways that exacerbate, rather than alleviate, biases in hiring.”[16] She proposes a test for assessing the fairness of the hiring process at every stage, which may require vulnerable groups to disclose—and permit AI to retain—private information. The solution, she concedes, may result in people of color and other marginalized groups being put in a “double‑bind”—trading privacy for fairness.[17]
Finally, Professor Yonathan A. Arbel offers another twist for the reader to consider. In Judicial Economy in the Age of AI, he describes the paucity of court access that exists for most Americans and how AI may be an antidote.[18] Farming legal tasks out to AI greatly reduces costs, opening the legal system to the vast majority of Americans priced out of it. Indeed, Professor Arbel notes that there is “a sea change in the patterns of technological adoption,” with small firms now adopting AI—prioritizing convenience and accessibility over reliability.[19] However, the antidote comes with its own irony. The author contends that the spike in court access will be matched with a commensurate reduction in justice itself. He predicts that a beleaguered court system, already short‑staffed and overwhelmed, will recalibrate procedurally or substantively to alleviate the pressure. “Paradoxically, what we gain in access to justice we might lose in the delivery of justice,” he asserts.[20]
Professor Arbel accomplishes his goal of “sound[ing] the alarm about judicial economy in the age of AI.”[21] In response, he proposes that the judicial process itself proactively integrate AI so that it benefits from this tool—imperfect as it may be. He contends that such judicial integration will ensure that any victory in the access to justice battle is not a Pyrrhic one.
Each of these contributions to this Symposium Issue illustrates the complexities of the relationship between constitutional law and AI. This nexus deserves proactive study and bold solutions, given the precious fundamental constitutional rights at stake and the potentially tectonic shift in technology and its use in society. I am deeply indebted to those who have offered just that by participating in the 2024 Rothgerber Conference: AI and the Constitution and contributing to this Issue. There is much work to be done and promise for the future.
* Professor Malveaux is the former Moses Lasky Professor of Law and Director of the Byron R. White Center for the Study of American Constitutional Law at the University of Colorado Law School. She is now the Roger D. Groot Professor of Law at the Washington and Lee University School of Law.
- The Rothgerber Conference on Constitutional Law and the Silicon Flatirons Artificial Intelligence Conference, respectively. ↑
- The Byron R. White Center for the Study of American Constitutional Law, https://www.colorado.edu/law/research/byron-white-center [https://perma.cc/FLQ9-SH8F]. ↑
- Mission, Vision, and Operational Principles, Silicon Flatirons, https://siliconflatirons.org/about-us/mission-vision-values [https://perma.cc/856A-ZERH]. ↑
- Harry Surden, Professor of Law, University of Colorado Law School, Artificial Intelligence and Law—An Overview of Recent Technological Changes: Keynote Address at the 2024 Ira C. Rothgerber Jr. & Silicon Flatirons Conference on Artificial Intelligence and Constitutional Law (Apr. 19, 2024). ↑
- Andrew Coan & Harry Surden, Artificial Intelligence and Constitutional Interpretation, 96 U. Colo. L. Rev. 375, 429 (2025). ↑
- Id. at 416. ↑
- Id. at 418. ↑
- Id. at 420. ↑
- Id. at 421. ↑
- Id. at 422. ↑
- Chris C. Goodman, Algorithmic Bias and Accountability: The Double B(l)ind for Marginalized Job Applicants, 96 U. Colo. L. Rev. 501 (2025). ↑
- Id. at 502. ↑
- Id. at 517–19. ↑
- Id. at 519–21. ↑
- Id. at 532–42. ↑
- Id. at 505. ↑
- Id. ↑
- Yonathan A. Arbel, Judicial Economy in the Age of AI, 96 U. Colo. L. Rev. 549 (2025). ↑
- Id. at 553. ↑
- Id. at 549. ↑
- Id. at 554. ↑