89th Legislature

SB 1964

Overall Vote Recommendation
No
Principle Criteria
Free Enterprise
Property Rights
Personal Responsibility
Limited Government
Individual Liberty
Digest
SB 1964 introduces comprehensive standards for the use of artificial intelligence (AI) systems by Texas governmental entities. The bill significantly amends Chapter 2054 of the Government Code by establishing key definitions, including “artificial intelligence system,” “consequential decision,” and “heightened scrutiny artificial intelligence system.” These definitions distinguish general AI tools from those that autonomously make or influence legally significant decisions affecting individuals’ access to government services.

The bill requires all state agencies to identify and maintain an inventory of their AI systems, with additional emphasis on those classified as “heightened scrutiny” systems. These are AI tools used to make impactful decisions without meaningful human review. Agencies must also report the purposes, risk mitigation strategies, and strategic alignment of such systems. Local governments are similarly required to assess and report on their use of these higher-risk AI systems to the Department of Information Resources (DIR).

Furthermore, the legislation mandates that agencies demonstrate compliance with a forthcoming state-developed AI Code of Ethics and minimum operational standards. These provisions are designed to ensure transparency, prevent algorithmic bias, and maintain human oversight where critical decisions are made. The bill emphasizes the integration of AI governance into state strategic planning efforts, setting a new baseline for responsible AI deployment in public administration. While the bill focuses on ethical AI usage and accountability, implementation details, such as the development of rules and oversight structures, will play a critical role in determining its long-term effectiveness and impact.

The Committee Substitute for SB 1964 makes significant changes to the originally filed version by narrowing the scope of regulatory oversight and scaling back enforcement and public transparency mechanisms. While both versions aim to establish a framework for the ethical and responsible use of artificial intelligence (AI) systems by governmental entities, the substitute shifts the bill from a highly prescriptive regulatory model to a more advisory and compliance-driven approach centered within agencies themselves.

One of the most notable changes is the removal of the Public Sector Artificial Intelligence Systems Advisory Board and the Artificial Intelligence Sandbox Program. In the original bill, these components were designed to foster inter-agency collaboration, innovation, and controlled experimentation with AI technologies. Their removal reflects a legislative pivot away from structured experimentation and centralized guidance, likely in an effort to reduce bureaucratic complexity and maintain agency autonomy.

Additionally, the substitute eliminates the public-facing enforcement provisions that would have allowed individuals to file complaints about AI systems through a dedicated Attorney General’s web portal. It also drops provisions that would have enabled the Attorney General to investigate and penalize non-compliant vendors and allowed state agencies to void contracts for unresolved violations. These removals dilute public accountability and weaken the mechanisms for addressing AI-related harms or rights violations.

Finally, the Committee Substitute significantly reduces transparency requirements. The originally filed bill included mandatory public disclosures, standardized notices for AI systems, and impact assessments to ensure that decisions made by AI tools were documented and assessable. These elements were intended to promote ethical usage and protect civil liberties. Their omission in the substitute version marks a shift toward internal compliance checks without public-facing transparency or redress mechanisms.

In summary, the Committee Substitute keeps the core of the AI system inventory and strategic planning but eliminates many of the more ambitious, enforceable, and transparent features of the originally filed version. The revised bill reflects a more restrained, agency-focused approach to AI governance, favoring flexibility over regulation.
Author
Tan Parker
Sponsor
Giovanni Capriglione
Fiscal Notes

According to the Legislative Budget Board (LBB), SB 1964 is projected to impose a significant cost on the state over the next five fiscal years. The bill is expected to result in a negative impact of approximately $7.28 million to General Revenue during the 2026–27 biennium, with a recurring annual cost of over $4 million projected through fiscal year 2030. These costs stem primarily from the responsibilities assigned to the Department of Information Resources (DIR) for implementing new frameworks for artificial intelligence (AI) governance across state and local governments.

To support the mandates of the bill, the DIR would require the addition of 10 new full-time employees. These positions would focus on activities such as developing and publishing an AI Code of Ethics, establishing risk management and governance standards for heightened scrutiny AI systems, and providing education and training resources. Additional responsibilities include administering the AI sandbox program, handling public complaints, and coordinating with other agencies, resulting in estimated personnel costs of $2.88 million for the 2026–27 biennium.

Beyond personnel, technology infrastructure costs are also substantial. The AI sandbox program alone would require $500,000 annually for storage and computing resources. Other significant expenses include $1 million in FY 2026 and $2 million annually thereafter for conducting AI assessments, data analysis, and related tasks. Developing AI-related training tools for state employees and the public adds another $300,000 in FY 2026, with reduced but ongoing costs in subsequent years. Combined, the DIR’s IT-related expenses are estimated at $4.4 million for the biennium.

For local governments, the bill introduces an indeterminate fiscal impact, as they would be required to adopt and implement the AI ethics and risk management standards issued by DIR, as well as report on the use of certain AI systems. The level of impact will vary based on the existing resources, technological capacity, and staffing levels within each local government entity.

Vote Recommendation Notes

SB 1964 aims to establish a regulatory framework for how Texas government agencies use artificial intelligence (AI), including ethical standards, risk assessments, and oversight tools. While the bill is well-intentioned in its pursuit of responsible AI governance, it ultimately represents an overreach in both scope and cost.

The bill significantly expands the size and authority of state government, particularly the Department of Information Resources, by granting it broad rulemaking power and oversight responsibilities. As filed, it would create new bureaucratic structures, including an advisory board and AI sandbox program, and would require the hiring of 10 new state employees, resulting in a projected fiscal impact of $7.28 million over the next biennium and ongoing costs exceeding $4 million per year.

Additionally, the bill places heavy regulatory burdens on private-sector vendors, requiring them to comply with new reporting, disclosure, and ethics standards or face contract termination. This could discourage small businesses from engaging in public-sector innovation. The bill also opens the door to regulatory creep, as its provisions delegate substantial rulemaking authority to the executive branch, potentially resulting in future overregulation without legislative checks.

While protecting the public from AI-related harm is a worthy goal, this bill does so at the expense of fiscal responsibility, limited government, and a healthy free enterprise system. For these reasons, Texas Policy Research recommends that lawmakers vote NO on SB 1964.

  • Individual Liberty: The bill aims to protect individual rights by requiring government agencies to avoid using AI systems in ways that result in “unlawful harm,” particularly in decisions that affect access to public services. It mandates oversight and transparency, which can help prevent algorithmic bias and protect civil liberties. However, the effectiveness of these protections hinges on implementation, and without clear redress or opt-out mechanisms for affected individuals, the protection of liberty remains limited and procedural, not personal or enforceable.
  • Personal Responsibility: The bill does not directly encourage or discourage personal responsibility. Its provisions are largely organizational and regulatory in nature, focused on agency compliance rather than individual action or accountability. That said, by clarifying ethical standards and requiring internal oversight, it may enhance responsibility within government institutions, though not in a way that empowers individuals.
  • Free Enterprise: The bill places significant new regulatory burdens on private vendors that provide AI systems to government agencies. These include mandatory risk assessments, impact reports, disclosures, and potential contract penalties for non-compliance. These requirements may discourage small businesses and startups from entering the public sector marketplace, effectively tilting the playing field toward larger firms with the resources to absorb compliance costs. This could stifle innovation and competition, contradicting the liberty principle of Free Enterprise.
  • Private Property Rights: The bill does not directly affect private property ownership or usage. However, by regulating data governance and requiring state disclosure of certain high-value datasets, it introduces government-managed transparency in digital spaces that might overlap with proprietary systems. These concerns are indirect but could grow if future rulemaking encroaches on private-sector data use and AI development.
  • Limited Government: This is where the bill has the most significant negative impact. The bill expands the role of the Department of Information Resources (DIR), grants it broad rulemaking authority, creates new programs and administrative layers (such as an advisory board and sandbox program), and empowers the state to monitor and potentially penalize vendors. These actions increase the scope, size, and regulatory reach of government, moving beyond narrowly defined administrative duties into active governance of emerging technologies, a role that should be carefully limited and defined by the legislature rather than delegated to administrative agencies.