As we discussed in a previous article, the European Union has adopted the EU AI Act, a groundbreaking set of regulations aimed at ensuring transparency and accountability in the development and use of Artificial Intelligence (AI) throughout its member states. This risk-based approach has inspired lawmakers in several U.S. states to adopt similar frameworks.
Here is a look at how various state legislative bodies are approaching AI legislation, with a focus on transparency, accountability, and the ethical use of AI.
1. Comprehensive AI Regulation
One of the most prominent themes in AI legislation is the push for comprehensive regulation that addresses the broad impacts of AI technologies.
For example, Colorado’s Consumer Protections in Interactions with Artificial Intelligence Systems law requires developers of high-risk artificial intelligence systems to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. The law goes into effect on February 1, 2026. Additionally, Texas lawmakers introduced the "Texas Responsible AI Governance Act" (TRAIGA), which adopts a risk-based approach to AI regulation, specifically targeting algorithmic discrimination by automated decision-making tools (ADMT).
These examples underscore a growing recognition of the need for robust regulatory frameworks that can adapt to the diverse applications of AI.
2. Generative AI and Transparency
Generative AI (GenAI), which includes technologies capable of creating content such as text, images, and videos, has sparked significant legislative interest. In this area, lawmakers have focused on ensuring transparency and preventing misuse.
For example, California’s "AI Transparency Act" would require entities providing GenAI tools to make available, at no cost to users, an "AI detection tool" (a term the bill does not define) that allows individuals to query whether content was created or modified, in whole or in part, by the provider’s GenAI system. The bill would also require all providers of GenAI tools to include a latent disclosure that the content was produced by AI, with an option for users to add a more visible disclosure.
On the opposite coast, Massachusetts introduced HD 4788, which would mandate that all GenAI systems disclose AI-generated content, including through metadata identification.
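To make the idea of a latent, metadata-based disclosure concrete, here is a minimal sketch of one way such a tag could be embedded in an image. It uses the Pillow imaging library, and the key names and values are illustrative assumptions, not requirements drawn from either bill; a production system would more likely rely on a provenance standard such as C2PA content credentials.

```python
# Minimal sketch: embedding a latent AI-provenance disclosure in PNG
# metadata. Key names and values are hypothetical, for illustration only.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for GenAI output; a real system would produce this image.
img = Image.new("RGB", (64, 64), color="gray")

meta = PngInfo()
meta.add_text("ai_generated", "true")             # hypothetical key
meta.add_text("provenance", "ExampleGenAI v1.0")  # hypothetical tool name
img.save("output.png", pnginfo=meta)

# A simple "detection" check: read the latent disclosure back out.
print(Image.open("output.png").text.get("ai_generated"))  # -> "true"
```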
These bills are just two of many legislative efforts aimed at protecting consumers and maintaining trust in AI-generated content by requiring clear disclosures and preventing deceptive practices.
3. Preventing Bias When Using AI in Employment
The use of AI in employment decisions has raised concerns about bias and fairness, prompting several states to introduce legislation aimed at regulating automated employment decision tools (AEDT).
For instance, New Jersey’s bills S 1588 and A 4030 would prohibit the sale of AEDTs unless they undergo annual bias audits designed to ensure that these tools do not perpetuate discrimination in hiring practices. In addition, Illinois’ HB 3773 would amend the existing Illinois Human Rights Act to regulate the use of predictive data analytics in employment and credit decisions, prohibiting the use of proxies such as race or zip code in those decisions. Finally, California’s Privacy Protection Agency, which has rulemaking authority over ADMT, is also focused on ADMT that make "significant decisions," such as those affecting employment opportunities.
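To illustrate what such a bias audit might measure, below is a minimal sketch that computes selection rates by group and compares them under the four-fifths rule, a common threshold in U.S. employment-discrimination analysis. The data and the metric choice are illustrative assumptions; none of these bills prescribes a specific audit methodology.

```python
# Minimal sketch: a selection-rate impact ratio, one metric a bias audit
# of an AEDT might compute. Outcome data below is entirely illustrative.
from collections import Counter

# (group, selected) pairs from a hypothetical screening tool's output.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = Counter(group for group, hired in outcomes if hired)
totals = Counter(group for group, _ in outcomes)
rates = {group: selected[group] / totals[group] for group in totals}

# Impact ratio: each group's selection rate vs. the highest-rate group.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
```

A ratio below 0.8 does not by itself establish discrimination, but it is the kind of signal an annual audit would flag for further review.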
These bills and rulemaking efforts reflect a broader trend toward ensuring that AI technologies used in employment are fair, transparent, and accountable.
4. Promoting the Ethical Use of AI in Healthcare
AI's application in healthcare is another critical area of legislative focus, with bills aimed at ensuring the safe and ethical use of AI in medical settings.
One illustration is Texas’s HB 1265, which would require licensed mental health providers to use only approved AI mental health applications, unless the patient consents to using an application that is still in its development stage. Additionally, California’s AB 3030 would require health facilities to disclose the use of generative AI in patient communications and to ensure human review of AI-generated responses.
These are just two illustrations of the stringent regulations aimed at protecting patient safety and privacy in the rapidly evolving field of health AI.
Conclusion
The 2024 legislative landscape for AI reflects a dynamic and multifaceted approach to regulating this transformative technology. From comprehensive AI regulation and transparency in GenAI to the ethical use of AI in employment and healthcare, state legislatures are striving to create a balanced framework that fosters innovation while protecting public interests. These legislative efforts will only become more concerted as lawmakers work to ensure that the power of AI is harnessed responsibly and ethically.
To learn about UKG’s AI practices, visit our Responsible and Ethical Use of AI ESG page.