California regulator weakens AI rules, giving Big Tech more leeway to track you
California’s first-in-the-nation privacy agency is retreating from an attempt to regulate artificial intelligence and other forms of computer automation.
The California Privacy Protection Agency was under pressure to back away from rules it drafted. Business groups, lawmakers, and Gov. Gavin Newsom said the rules would be costly to businesses, potentially stifle innovation, and usurp the authority of the Legislature, where proposed AI regulations have proliferated. In a unanimous vote last week, the agency’s board watered down the rules, which would impose safeguards on AI-like systems.
Agency staff estimate that the changes will reduce businesses’ compliance costs in the first year of enforcement from $834 million to $143 million, and predict that 90% of businesses initially required to comply will no longer have to do so.
The retreat marks an important turn in an ongoing and heated debate over the board’s role. Created following the passage of state privacy legislation by lawmakers in 2018 and voters in 2020, the agency is the only body of its kind in the United States.
The draft rules have been in the works for more than three years but were revisited after a series of changes at the agency in recent months, including the departures of two leaders seen as pro-consumer: Vinhcent Le, a board member who led the drafting of the AI rules, and Ashkan Soltani, the agency’s executive director.
Consumer advocacy groups worry that the recent shifts mean the agency is deferring excessively to businesses, particularly tech giants.
The changes approved last week mean the agency’s draft rules no longer regulate behavioral advertising, which targets people based on profiles built up from their online activity and personal information. In a prior draft of the rules, businesses would have had to conduct risk assessments before using or implementing such advertising.
Behavioral advertising is used by companies like Google, Meta, and TikTok and their business clients. Critics say it can perpetuate inequality, pose a threat to national security, and put children at risk.
The revised draft rules also eliminate the phrase “artificial intelligence” and narrow the range of business activity regulated as “automated decisionmaking,” a category that requires businesses to assess the risks of processing personal information and the safeguards put in place to mitigate them.
Supporters of stronger rules say the narrower definition of “automated decisionmaking” allows employers and corporations to exempt themselves from the rules by claiming that an algorithmic tool is only advisory to a human decisionmaker.
“My one concern is that if we’re just calling on industry to identify what a risk assessment looks like in practice, we could reach a position by which they’re writing the exam by which they’re graded,” said board member Brandie Nonnecke during the meeting.
“The CPPA is charged with protecting the data privacy of Californians, and watering down its proposed rules to benefit Big Tech does nothing to achieve that goal,” Sacha Haworth, executive director of the Tech Oversight Project, an advocacy group focused on challenging policy that reinforces Big Tech power, said in a statement to CalMatters. “By the time these rules are published, what will have been the point?”
The draft rules retain some protections for workers and students in instances when a fully automated system determines outcomes in finance and lending services, housing, and health care without a human in the decisionmaking loop.
Businesses and the organizations that represent them made up 90% of the comments on the draft rules before the agency held listening sessions across the state, Soltani said at a meeting last year.
In April, following pressure from business groups and legislators to weaken the rules, a coalition of nearly 30 unions, digital rights organizations, and privacy groups sent a joint letter urging the agency to continue its work to regulate AI and to protect consumers, students, and workers.
Roughly a week later, Newsom intervened, sending the agency a letter stating that he agreed with critics that the rules overstepped the agency’s authority and that he supported a proposal to roll them back.
Newsom cited Proposition 24, the 2020 ballot measure that paved the way for the agency. “The agency can fulfill its obligations to issue the regulations called for by Proposition 24 without venturing into areas beyond its mandate,” the governor wrote.
The original draft rules were great, said Kara Williams, a law fellow at the advocacy group Electronic Privacy Information Center. On a phone call ahead of the vote, she added that “with each iteration they’ve gotten weaker and weaker, and that seems to correlate pretty directly with pressure from the tech industry and trade association groups, so that these regulations are less and less protective for consumers.”
The public has until June 2 to comment on the changes to the draft rules. Companies must comply with the automated decisionmaking rules by 2027.
At the same meeting last week, prior to voting to water down its own regulations, the agency’s board voted to throw its support behind four draft bills in the California Legislature, including one that would protect the privacy of people who connect computing devices to their brains and another that would prohibit the collection of location data without permission.
___
This story was originally published by CalMatters and distributed through a partnership with The Associated Press.