Governor Glenn Youngkin of Virginia vetoed House Bill 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, just hours ahead of his midnight deadline.
In February, the Virginia legislature passed HB 2094, which would have created a comprehensive framework for the development, deployment, and use of high-risk AI systems. The legislation was poised to become the second comprehensive state AI law, after Colorado's SB 205 (for more on Colorado's AI Act, see my article here).
Governor Youngkin's veto aligns with the federal administration's deregulatory stance on emerging technologies. Republican leadership has uniformly prioritized innovation and American competitiveness in the rapidly evolving AI landscape over safety and governance, and that dynamic is now playing out at the state level as well.
Supporters argued the bill would provide safeguards against algorithmic discrimination and ensure transparency in high-risk AI applications, while critics maintained the legislation was premature and would hinder Virginia’s burgeoning tech industry.
Attention now shifts to the Democratic-controlled Virginia General Assembly to see whether it will attempt to override the veto, which requires a two-thirds vote in both chambers, or wait until next year's session to introduce less restrictive or sector-specific AI bills.
While Virginia's comprehensive AI regulation has been halted, companies should continue to monitor legislative developments and emerging AI governance frameworks at the state and federal levels. For the moment, Colorado's law remains the only comprehensive AI legislation in the country, but several other states are expected to introduce or advance similar measures in the coming years. California, Texas, and others are moving to fill the void left by the federal government's deregulatory approach, which could produce a fragmented landscape of AI regulation.