89th Legislature

HB 149

Overall Vote Recommendation
Vote No; Amend
Principle Criteria
Free Enterprise
Property Rights
Personal Responsibility
Limited Government
Individual Liberty
Digest
HB 149 introduces a comprehensive regulatory framework for the use, development, and deployment of artificial intelligence (AI) systems in Texas. The bill amends sections of the Business & Commerce Code and establishes new legal provisions under Subtitle D, titled Artificial Intelligence Protection. Its primary goals are to ensure transparency, protect consumer rights, and set guardrails for the responsible use of biometric data in AI applications.

Key elements of the bill include the clarification that publicly available images or media do not constitute consent for the capture or commercial use of biometric identifiers, such as facial or voice data. If biometric data is used for training AI systems but is later applied commercially, the data holder must comply with existing biometric data retention and destruction requirements. These provisions aim to enhance individual privacy protections in an age of widespread AI surveillance and facial recognition technology.

The bill also defines core terms such as "artificial intelligence system," "developer," and "deployer," and applies its requirements to any business that promotes, uses, or develops AI systems in Texas. It mandates that processors of personal data, particularly where AI systems are involved, implement strong data protection practices and assist with breach notification. Furthermore, the bill establishes the Texas Artificial Intelligence Council, a state oversight body tasked with advising the government on AI-related policy and ensuring compliance with this new legal framework.

Lastly, HB 149 includes a preemption clause, barring local governments from enacting separate AI regulations, thereby aiming to create a unified statewide approach. It emphasizes risk transparency, ethical use, and safeguards for civil liberties, reflecting growing concern over the societal impacts of unchecked AI technology.

The originally filed version of HB 149 was a far-reaching and prescriptive proposal aimed at tightly regulating the development and deployment of artificial intelligence (AI) systems in Texas. It introduced a broad scope of definitions and prohibited uses, including bans on AI-based political viewpoint discrimination, social credit scoring by government entities, and biometric surveillance without explicit consent. It also imposed civil penalties, created consumer appeal rights, and established a detailed enforcement mechanism through the Texas Attorney General’s office. Additionally, the original bill introduced a Regulatory Sandbox Program for AI innovation and granted significant authority to a newly created Texas Artificial Intelligence Council, including investigative powers and policy recommendation duties.

In contrast, the Committee Substitute takes a more restrained and targeted approach. Rather than building a standalone regulatory regime, it focuses on integrating AI oversight into existing legal structures, primarily through amendments to the Business & Commerce Code. It narrows its scope to biometric data privacy and data processor responsibilities, removing much of the detailed language concerning prohibited uses and speech protections found in the original. Notably, the substitute drops the more politically charged provisions, such as those related to political censorship by interactive computer services, and omits the sandbox program entirely.

Furthermore, the substitute version redefines the role of the Texas Artificial Intelligence Council, scaling it back to a more conventional advisory body without regulatory enforcement authority. It eliminates the original’s mandate for the Council to investigate market manipulation or influence by large tech firms. The enforcement structure in the substitute is also more streamlined, focusing on biometric misuse and existing privacy laws rather than layering new civil penalties and appeal rights.

Overall, the Committee Substitute reflects a deliberate effort to soften the bill's regulatory burden, likely in response to stakeholder concerns about overreach, enforceability, and constitutional challenges. While the original version sought to establish Texas as a national leader in AI oversight, the substitute represents a pivot toward harmonizing AI policy within existing privacy and consumer protection frameworks.
Author
Giovanni Capriglione
Angie Chen Button
Greg Bonnen
Angelia Orr
Salman Bhojani
Co-Author
Cody Harris
Suleman Lalani
John Lujan
William Metcalf
Mihaela Plesa
Sponsor
Charles Schwertner
Co-Sponsor
Royce West
Fiscal Notes

According to the Legislative Budget Board (LBB), HB 149 would impose a significant cost on the state over the next biennium. The estimated net impact to General Revenue-related funds is projected at -$25.06 million through August 31, 2027, with recurring annual costs of over $10 million in subsequent years. These expenses stem primarily from the bill's enforcement mechanisms, a new regulatory sandbox program, and the establishment of the Texas Artificial Intelligence Council.

The Department of Information Resources (DIR) will require 8 new full-time equivalent (FTE) employees to manage the AI Sandbox and support the council, including AI technologists, compliance analysts, and program coordinators. Their annual personnel costs are projected at roughly $1.07 million. Meanwhile, the Office of the Attorney General (OAG) anticipates hiring 12 FTEs to handle investigation and enforcement responsibilities related to AI misuse, costing approximately $1.27 million annually. These roles include attorneys, investigators, data analysts, and legal support staff. Additional recurring operating costs, including retirement contributions and equipment, are estimated at $700,743 per year.

A major cost driver is the technology infrastructure required for both enforcement and the sandbox. DIR anticipates spending $3 million annually, including $2 million for cloud computing services and $1 million for security audits and vendor support. The OAG’s technology costs are also significant, with a one-time setup cost of $4.1 million in FY 2026 for complaint systems, case management tools, and IT hardware, along with an annual recurring cost of approximately $80,000 for software licenses and data center operations.

While the bill empowers the Attorney General to impose civil penalties and recover attorney's fees, the fiscal note emphasizes that the timing and amount of this revenue are uncertain, and it therefore does not count that revenue as an offset to these costs. There is no anticipated significant fiscal impact on local governments, as enforcement and administration are centralized at the state level.

Vote Recommendation Notes

HB 149, while well-intentioned in seeking to regulate artificial intelligence (AI) systems to protect individual rights and promote transparency, ultimately presents structural, fiscal, and regulatory concerns that outweigh its current benefits. The bill’s framework introduces sweeping new responsibilities for state government, including the establishment of the Texas Artificial Intelligence Council and a regulatory sandbox for AI testing. Although these additions are framed as innovation-friendly and advisory in nature, they create a new regulatory architecture with broad, long-term implications. Even without formal rulemaking power, the council is charged with evaluating the ethics, civil liberties implications, and market impacts of AI—functions that could lead to mission creep and expand the regulatory state incrementally over time.

From a fiscal perspective, the bill imposes a substantial burden on taxpayers. According to the Legislative Budget Board, HB 149 is projected to have a negative fiscal impact of more than $25 million over the 2026–27 biennium, with recurring costs estimated at over $10 million annually thereafter. These costs are driven by the need for 20 new full-time employees, new enforcement technology systems, and expert consulting services for litigation and oversight. Although the bill provides a mechanism for recovering some costs through civil penalties and attorney’s fees, that revenue stream is speculative and not expected to offset the substantial outlays required for implementation and maintenance. In short, Texas taxpayers are likely to bear the financial burden of a rapidly growing regulatory infrastructure.

The regulatory burden created by HB 149 remains a point of concern, particularly for small businesses and open-source AI developers. Despite revisions that removed some of the most prescriptive mandates from the original bill—such as overly broad provisions on content moderation and political viewpoint discrimination—the substitute version still requires developers and deployers to respond to civil investigative demands, document internal AI safeguards, and comply with rules regarding biometric data that may be incompatible with widely used AI training practices. These compliance obligations, while well-meaning, could stifle innovation, particularly for startups and non-commercial contributors who lack the legal and financial resources to interpret and meet these requirements. The potential for uneven enforcement further amplifies this concern.

Moreover, concerns raised by a broad coalition of free-market and technology policy organizations remain only partially addressed. These groups warn that HB 149 mirrors interventionist models in states like Colorado and California and could harm Texas’s standing as a pro-innovation, low-regulation environment. While the inclusion of a regulatory sandbox is a nod toward flexibility, it is ultimately embedded within a larger framework that continues to centralize regulatory authority at the state level. The balance between oversight and opportunity is still off-kilter; without stronger protections for open development and clearer guardrails around enforcement, the bill risks discouraging exactly the type of innovation it claims to foster.

Given these unresolved issues—expansion of government scope, high taxpayer costs, and chilling effects on innovation—Texas Policy Research recommends that lawmakers vote NO on HB 149 unless amended as described below.

Suggested Amendments:

  • Sunset Review for the Texas Artificial Intelligence Council: Add a provision requiring the Texas Artificial Intelligence Council to undergo a full Sunset Advisory Commission review by 2030. This would ensure long-term accountability and prevent permanent expansion of the administrative state without measurable value or outcomes.
  • Cap on Council Budget and Staffing: Reinstate language from the originally filed bill that limits the Council’s budget to a fixed percentage (e.g., no more than 2–4%) of the Department of Information Resources’ total budget and caps staffing at a specific number. This would prevent bureaucratic bloat and help contain long-term taxpayer costs.
  • Strengthen Open-Source Protections: Exempt developers of non-commercial, open-source AI models from enforcement actions under Subchapter C, provided their systems are not deployed for commercial or public-facing purposes. This would encourage innovation and protect contributors from liability for tools not intended for regulated or sensitive use cases.
  • Add a “Small Business Exemption” or Tiered Compliance Threshold: Create a tiered compliance system where businesses under a certain size (e.g., fewer than 25 employees or less than $10 million in revenue) are subject to lighter documentation and enforcement requirements. This would prevent the bill from unintentionally favoring large corporations and hindering small Texas-based AI developers and startups.
  • Clarify Enforcement Scope and Narrow Investigatory Authority: Limit the Attorney General’s civil investigative demand authority to systems that have been deployed and are suspected of causing actual harm or violations. This would prevent preemptive investigations into pre-deployment or internal development activity, reducing fear-driven chilling effects on innovation.
  • Remove or Reframe Biometric Data Restrictions: Clarify that the prohibition on inferring consent from publicly available media for biometric use does not apply to datasets used solely for research, testing, or pre-deployment development. This would avoid unintentionally banning common AI training practices that rely on publicly available imagery and videos.
  • Remove or Limit Broad Language on “Ethics”: Narrow the Council’s mandate to focus on specific statutory duties and remove open-ended responsibilities to evaluate the "ethics" or "influence" of AI systems. Vague ethical mandates risk becoming a backdoor for political interference or mission creep over time.
  • Add an Innovation Impact Statement Requirement: Require the Council or the Attorney General to issue a public “Innovation Impact Statement” before initiating enforcement action or recommending major changes in AI policy. This provides transparency and encourages deliberation before actions that may affect the state’s innovation climate.
  • Narrow Scope of Applicability: Limit the bill’s scope to only systems that are deployed for use in public-facing or high-risk environments (e.g., healthcare, finance, law enforcement). This would avoid over-regulating general-purpose technologies or tools not directly interacting with consumers or critical services.
  • Add a Statutory Review Clause: Include a provision requiring the Legislature to review and reauthorize the AI enforcement provisions of Subchapter C within five years. This would allow for reassessment of the enforcement framework as the technology and market evolve.

With significant amendments, HB 149 could become a more balanced and innovation-positive piece of legislation. Until those changes are made, however, this bill risks setting a precedent that could undermine Texas’s leadership in the tech sector and expand government in ways inconsistent with limited-government principles.

  • Individual Liberty: The bill includes important safeguards meant to protect Texans from potentially intrusive or harmful applications of artificial intelligence. These include bans on government use of AI for social scoring, restrictions on the unauthorized use of biometric data, and transparency requirements for AI systems interacting with consumers. These provisions affirm the rights of individuals to autonomy and informed consent in their digital interactions. However, some of the language, particularly related to intent, discrimination, or political expression, remains vague and could chill lawful speech or development activity. For instance, overly broad definitions of manipulation or viewpoint bias might inadvertently restrict technological use cases or speech in ways that run counter to First Amendment protections.
  • Personal Responsibility: The bill imposes accountability on AI developers and deployers for how their technologies operate, particularly when those systems might influence behavior, facilitate discrimination, or fail to provide transparent outcomes. The enforcement provisions, including penalties for misuse and failure to disclose key functions, reflect the idea that creators of powerful technologies should bear responsibility for their impacts. This reflects a commitment to holding private actors accountable when their tools affect public welfare or infringe on others’ rights.
  • Free Enterprise: Even in its revised form, the bill imposes a layered regulatory scheme that could deter innovation and entrepreneurship. It creates legal uncertainty and compliance burdens, especially for small businesses and open-source developers, through documentation mandates, investigative risk, and vague operational definitions. While large corporations may be able to absorb these regulatory demands, smaller entities may find them prohibitive. This could lead to the consolidation of AI development among large firms and stifle the competitive, decentralized environment that free markets rely on.
  • Private Property Rights: The bill’s most significant contribution is its affirmation that biometric identifiers and personal data should not be harvested or used commercially without informed consent. In this sense, it enhances individual control over personal information, aligning with the idea that individuals should have dominion over their own identities, even in digital form. That said, the broad language around publicly available data could inadvertently limit the legal use of datasets for AI training, raising questions about what constitutes fair use and who owns or controls public digital content.
  • Limited Government: The bill expands state authority substantially through the creation of the Texas Artificial Intelligence Council, new enforcement powers for the Attorney General, and the administrative management of a regulatory sandbox. While the Council is technically advisory, its oversight and reporting mandates open the door to bureaucratic mission creep. The projected fiscal cost of over $25 million in the first biennium, along with the hiring of 20 new state employees and ongoing administrative infrastructure, marks a clear growth in state regulatory power and long-term spending. This growth contradicts limited-government principles and could be further exacerbated if the Council’s scope or authority expands in future sessions.