Artificial intelligence (AI) policy is advancing at a pace that legislatures have struggled to match. In that vacuum, courts, professional associations, ethics boards, and regulatory agencies have begun defining how AI should be used, governed, and constrained. The American Bar Association’s (ABA) Year 2 Report on the Impact of AI on the Practice of Law reflects this reality clearly.
Although the ABA report does not carry the force of law, it establishes expectations that influence judicial behavior, attorney ethics enforcement, agency practices, and future legislation. These expectations often harden into norms long before lawmakers formally debate them. When legislatures delay engagement, governance does not pause. It simply shifts to institutions that are not directly accountable to voters.
For policymakers concerned with democratic legitimacy, separation of powers, and limited government, that trend alone should prompt closer scrutiny.
Courts Are Shaping Artificial Intelligence Policy
One of the clearest themes in the ABA report is the speed at which courts are responding to artificial intelligence. Judges are already issuing guidance on AI use in chambers, evaluating AI-generated filings, confronting deepfake evidence, and developing disclosure expectations for litigants and attorneys.
These judicial responses are understandable. Courts must resolve disputes in real time, and AI is already embedded in legal practice. But the result is that policy decisions are being made incrementally through precedent rather than deliberately through legislation.
Once courts establish standards for acceptable AI use, those standards shape behavior far beyond the courtroom. Lawyers adapt their practices. Agencies follow judicial cues. Businesses attempt to conform to avoid liability. By the time lawmakers act, they often find that the policy environment has already been set.
Courts are not designed to balance innovation, economic growth, fiscal restraint, and long-term regulatory consequences. Yet absent legislative leadership, that is effectively the role they are being forced to play.
Soft Law Is Becoming Artificial Intelligence Regulation
The ABA report consistently favors soft-law approaches to artificial intelligence governance, relying on ethical frameworks, professional standards, best practices, and risk management models rather than explicit statutory mandates. In theory, soft law is flexible and adaptive. In practice, it frequently becomes enforceable through courts, licensing bodies, and liability standards.
When a judge asks whether an AI system was deployed responsibly, the answer is often measured against prevailing professional guidance. When a regulator evaluates compliance, it looks to industry norms. Over time, these norms function as binding rules without ever being voted on.
This dynamic matters because it shifts power away from elected lawmakers and toward professional institutions. Decisions about transparency, disclosure, bias mitigation, and acceptable risk thresholds are made through consensus rather than public debate. Once embedded, those standards are difficult to unwind legislatively.
The ABA report does not obscure this reality. It embraces it. That makes it all the more important for lawmakers to understand the downstream implications.
Artificial Intelligence Risks Do Not Require Centralized Regulation
The ABA is correct to identify real risks associated with artificial intelligence. Generative AI systems can hallucinate false information, expose confidential data, amplify bias, and accelerate the spread of disinformation. These risks are not speculative, and ignoring them would be irresponsible.
However, acknowledging risk does not automatically justify broad regulatory expansion. The ABA report largely assumes that stronger governance frameworks and institutional oversight are the appropriate response. That assumption deserves careful examination.
Artificial intelligence is still evolving rapidly. Early regulatory decisions risk locking in outdated assumptions, favoring large incumbents, and discouraging experimentation. Overly prescriptive frameworks may reduce risk in the short term while imposing long-term costs on innovation, competition, and economic growth.
Human accountability is a legitimate policy goal. Delegating expansive authority to unelected bodies to define and enforce AI norms is far more questionable.
House Bill 149 Turns Artificial Intelligence Soft Law Into Statute
Texas House Bill 149 (HB 149), known as the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), provides a real-world example of how institutional AI norms translate into legislation once lawmakers step in.
HB 149 was originally filed as a sweeping and prescriptive proposal that sought to regulate a wide range of AI uses, create new enforcement mechanisms, and establish broad oversight authority. Over the course of the legislative process, the bill was significantly narrowed in response to concerns about overreach, enforceability, and constitutional risk.
The final version reflects a more restrained approach, but it still codifies many of the same governance principles emphasized in the ABA report. These include transparency requirements, biometric data protections, reliance on external risk frameworks, and centralized enforcement authority.
HB 149 demonstrates that once soft law norms exist, legislative action often takes the form of ratification rather than reconsideration.
House Bill 149 Expands Artificial Intelligence Oversight and Costs
The final Legislative Budget Board (LBB) fiscal note confirms that HB 149 is not a cost-neutral or temporary intervention. As passed, the bill is projected to impose a negative impact of $24.9 million on General Revenue during the 2026 to 2027 biennium, followed by recurring costs of more than $10.2 million each year thereafter.
Implementation of HB 149 requires 20 new full-time state employees, split between the Department of Information Resources (DIR) and the Office of the Attorney General (OAG). These positions include artificial intelligence technologists, compliance analysts, investigators, attorneys, and IT personnel. Payroll, benefits, and related personnel costs alone exceed $3 million per year, with additional recurring operating costs layered on top.
Beyond staffing, the bill commits the state to substantial technology spending. DIR anticipates $3 million per year for cloud computing, sandbox hosting, security audits, and vendor support, with individual sandbox participants costing between $50,000 and $400,000 per instance. The OAG also requires more than $4.1 million in one-time technology costs to build complaint intake and enforcement systems, along with ongoing licensing and data center expenses.
The fiscal note further assumes $4 million per year in outside expert consulting costs so the Attorney General can litigate AI-related cases involving complex machine learning evidence. While the bill allows the state to recover fines and attorney’s fees, the LBB explicitly notes that this revenue is speculative and cannot be reliably estimated or used to offset costs.
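A rough back-of-the-envelope check, treating the line items named above as the main recurring drivers (an illustrative allocation on our part, not the LBB's own breakdown), shows how quickly those categories approach the projected annual total:

$3 million (personnel for 20 new employees) + $3 million (DIR cloud, sandbox, and security spending) + $4 million (outside expert consulting) ≈ $10 million per year

That sum, before licensing, data center, and other ongoing operating expenses, already sits just below the $10.2 million recurring cost the fiscal note projects.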
Taken together, HB 149 establishes a permanent and expensive enforcement apparatus that grows government capacity well beyond a narrow consumer protection role.
Early Risks of Artificial Intelligence Regulation
HB 149 also highlights the risks of regulating artificial intelligence before the technology and its use cases have stabilized. The bill defines artificial intelligence systems broadly, capturing a wide range of machine-based tools that influence physical or virtual environments.
This breadth ensures coverage but creates uncertainty for developers who may not realize they fall within the statute’s scope. Compliance obligations, investigative authority, and documentation requirements may discourage experimentation, particularly among small businesses and open-source developers who lack the legal and financial resources of large technology firms.
Although the bill includes cure provisions and limited liability protections, enforcement authority remains significant and subjective. Intent, foreseeability, and compliance with evolving standards are difficult to assess in probabilistic systems. The result is a chilling effect that disproportionately affects smaller innovators.
This outcome mirrors concerns raised indirectly in the ABA report about premature standard-setting and the consolidation of AI development among large, well-resourced entities.
Artificial Intelligence and Access to Justice Tradeoffs
Both the ABA report and HB 149 emphasize the potential of AI to expand access to justice. AI tools can help self-represented litigants navigate complex systems and enable legal aid organizations to serve more clients at lower cost.
At the same time, regulatory complexity and compliance costs threaten to undermine those benefits. If AI governance frameworks favor expensive enterprise solutions, smaller firms and nonprofits may be priced out. This risks reinforcing inequality rather than alleviating it.
Lawmakers should recognize that regulation often shapes markets as much as it constrains behavior. Artificial intelligence policy that prioritizes procedural compliance over outcomes may unintentionally limit access to the very tools it seeks to regulate responsibly.
Artificial Intelligence Regulation and Federal Preemption
The concerns raised by the ABA report and illustrated by HB 149 also help explain why Texas Policy Research (TPR) has previously argued that a temporary federal pause on state-level AI regulation may be preferable to a rapid expansion of divergent state regimes.
Artificial intelligence operates across state lines and touches nearly every sector of the national economy. HB 149 demonstrates how quickly well-intentioned state regulation can expand into a costly and permanent enforcement structure. If replicated across dozens of states, this approach would impose significant compliance burdens, favor incumbents, and fragment the national innovation landscape.
A temporary federal pause does not require permanent national micromanagement. It provides time to avoid regulatory fragmentation, prevent premature standard-setting, and debate a coherent framework that protects civil liberties without entrenching bureaucratic growth.
Artificial Intelligence Liability Is Being Set by Courts
Another common thread between the ABA report and HB 149 is the expansion of liability. As AI systems influence hiring, healthcare, finance, and legal outcomes, disputes over responsibility are inevitable.
HB 149 vests enforcement authority in the Attorney General and authorizes civil investigative demands and penalties. While the bill includes some safeguards, liability standards remain unclear and heavily dependent on interpretation.
When liability norms are defined primarily through enforcement actions and litigation, uncertainty grows. That uncertainty discourages innovation and investment while empowering regulators and courts to shape policy indirectly.
Legislatures have an opportunity to clarify expectations proactively. HB 149 shows how difficult that task becomes once institutional norms are already in place.
What Lawmakers Should Learn From the ABA Report and HB 149
Taken together, the ABA artificial intelligence report and House Bill 149 offer a cautionary lesson. When legislatures delay engagement, AI governance does not stand still. It evolves through courts, ethics bodies, and institutional frameworks that are not designed for democratic accountability.
When lawmakers eventually act, they often codify those norms rather than reassessing them. The result is governance by default, regulatory expansion, and long-term fiscal commitments that grow government well beyond initial intent.
Artificial intelligence policy should be debated openly, crafted deliberately, and constrained by legislative oversight. That requires lawmakers to engage early, question assumptions, and resist the temptation to outsource governance to institutions operating outside the electoral process.
Texas has taken its first major step into AI regulation. Whether future steps reflect restraint, accountability, and innovation will depend on whether lawmakers learn from this experience or repeat it.