SACRAMENTO, Calif. – California became the first state to take concrete steps toward regulating generative artificial intelligence tools like ChatGPT, with Governor Gavin Newsom signing an executive order this week instructing state agencies to analyze risks and establish guidelines for use of the rapidly evolving technology.
The wide-ranging executive order directs the California Department of Technology, Office of Digital Innovation, and other key state agencies to identify potential benefits of using generative AI tools in state government operations within 60 days. Specifically, it tasks the agencies with assessing how AI could be applied to improve efficiency and accessibility of services for California residents.
Additionally, the order charges the agencies with thoroughly evaluating risks posed by the nascent technology, such as propagation of misinformation, embedding of biases, and threats to vulnerable communities and critical infrastructure like the energy grid. The agencies have until January 2024 to develop comprehensive procurement standards, training programs, and ethical guidelines for responsible state use of approved AI systems.
“We recognize both the tremendous potential benefits and profound risks these tools enable,” said Newsom in a statement announcing the order. “We’re taking a clear-eyed, humble, and cautious approach to steering this world-changing technology toward the common good.”
While stopping short of imposing any binding regulations on AI developers or users, the executive order lays groundwork for California to exert national leadership in shaping AI policy, even as the federal government continues deliberating its own approach. It follows a series of high-level meetings between Newsom’s office and executives from leading AI companies regarding safe and ethical deployment of the technology.
Under the executive order, Stanford University and the University of California, Berkeley will assist with state-commissioned research into workforce impacts, while the state forms partnerships with historically Black colleges and universities to help ensure AI does not perpetuate harms to marginalized groups.
Additional legislation may follow at the state level to address persistent risks associated with generative AI, such as the propagation of child sexual abuse imagery and the use of deepfake technology to spread political disinformation. However, some industry groups caution against prematurely stifling beneficial innovation with restrictive rules.
For now, California’s proactive “driver’s seat” approach provides a regulatory roadmap that other states may follow as pressure mounts for guardrails to prevent generative AI from causing societal harm. The executive order demonstrates the Golden State’s leadership in tackling fast-moving technological change for the public good.