Caroline Mullet, a ninth grader at Issaquah High School near Seattle, went to her first homecoming dance last fall, a James Bond-themed party with blackjack tables attended by hundreds of girls dressed in party dresses.
Weeks later, she and other students learned that a classmate was circulating fake nude images of girls who had attended the dance, sexually explicit pictures that he had fabricated using an artificial intelligence app designed to automatically “undress” clothed photos of girls and women.
Ms. Mullet, 15, alerted her father, Mark, a Democratic state senator in Washington. Although she was not among the girls in the photos, she asked whether anything could be done to help her friends, who felt “extremely uncomfortable” knowing that male classmates had seen simulated nude images of them. Soon, Senator Mullet and a colleague in the State House proposed legislation to ban the sharing of explicit, AI-generated sexual depictions of real minors.
“I hate the idea of having to worry about this happening again to any of my friends, to my sisters, or even to myself,” Ms. Mullet told state lawmakers during a hearing on the bill in January.
The state legislature passed the bill without opposition. Gov. Jay Inslee, a Democrat, signed it into law last month.
States are on the front lines of a rapidly spreading new form of sexual exploitation and peer harassment in schools. Teenagers across the United States have used widely available “nudification” apps to surreptitiously fabricate sexually explicit images of their classmates and then circulated the simulated nudes through group chats on apps like Snapchat and Instagram.
Now, spurred in part by troubling accounts from teenage girls like Ms. Mullet, federal and state lawmakers are rushing to enact protections in an effort to keep pace with exploitative AI apps.
Since early last year, at least two dozen states have introduced bills to combat AI-generated sexually explicit images — known as deepfakes — of people under 18, according to data compiled by the National Center for Missing & Exploited Children, a nonprofit organization. And several states have enacted such measures.
Among them, South Dakota this year passed a law making it illegal to possess, produce or distribute AI-generated sexual abuse material that depicts real minors. Last year, Louisiana enacted a deepfake law that criminalizes AI-generated sexually explicit depictions of minors.
“I had a sense of urgency hearing about these cases and how much damage had been done,” said State Rep. Tina Orwall, a Democrat, who drafted Washington state's explicit-deepfake law after hearing about incidents like the one at Issaquah High.
Some lawmakers and child protection experts say such rules are urgently needed because the easy availability of AI nudification apps is enabling the mass production and distribution of false graphic images that can potentially circulate online indefinitely, threatening girls' mental health, reputations and physical safety.
“One boy with his phone over the course of an afternoon can victimize 40 girls, minors,” said Yiota Souras, chief legal officer of the National Center for Missing & Exploited Children, “and then those images are out there.”
Over the past two months, nude deepfake incidents have spread through schools, including in Richmond, Illinois, and in Beverly Hills and Laguna Beach, California.
Yet few laws in the United States specifically protect children under 18 from exploitation by AI apps.
That's because many current statutes that prohibit child pornography or nonconsensual adult pornography — which involve real photos or videos of real people — may not cover explicit AI-generated images that use real people's faces, said U.S. Rep. Joseph D. Morelle, a Democrat from New York.
Last year, he introduced a bill that would make it a crime to disclose AI-generated intimate images of identifiable adults or minors. It would also give deepfake victims, or their parents, the right to sue individual perpetrators for damages.
“We want this to be so painful for anyone who would even think about doing this, because this is harm that you can't just undo,” Morelle said. “Even if it seems like a joke to a 15-year-old, it's deadly serious.”
U.S. Rep. Alexandria Ocasio-Cortez, another New York Democrat, recently introduced a similar bill to allow victims to file civil lawsuits against deepfake perpetrators.
But neither bill would explicitly give victims the right to sue developers of AI-based nudity apps, a step that trial lawyers say would help stop the mass production of sexually explicit deepfakes.
“Legislation is needed to stop the commercialization, which is the root of the problem,” said Elizabeth Hanley, a Washington lawyer who represents victims in sexual assault and harassment cases.
The United States legal code prohibits the distribution of computer-generated child pornography that depicts identifiable minors engaging in sexually explicit conduct. Last month, the Federal Bureau of Investigation issued an advisory warning that such illegal material includes realistic, AI-generated images of child sexual abuse.
However, AI-generated false depictions of real, unclothed teenage girls may not qualify as “child sexual abuse material,” experts say, unless prosecutors can demonstrate that the fake images meet the legal standards for sexually explicit conduct or the lewd display of genitalia.
Some defense lawyers have tried to take advantage of the apparent legal ambiguity. An attorney defending a high school student in a deepfake lawsuit in New Jersey recently argued that the court should not temporarily block his client, who had created AI nude images of a classmate, from viewing or sharing the images because they were neither harmful nor illegal. Federal laws, the lawyer argued in a court filing, were not designed to apply “to synthetic computer-generated images that do not even include actual parts of the human body.” (The defendant ultimately agreed not to object to a restraining order on the images.)
Now states are working to pass laws to halt exploitative AI imagery. This month, California introduced a bill to update the state's ban on child pornography to specifically cover abusive material generated by artificial intelligence.
And Massachusetts lawmakers are finalizing legislation that would criminalize the nonconsensual sharing of explicit images, including deepfakes. It would also require a state agency to develop a diversion program for minors who share explicit images to teach them about issues such as “responsible use of generative artificial intelligence.”
Punishments can be severe. Under Louisiana's new law, anyone who knowingly creates, distributes, promotes or sells sexually explicit deepfakes of minors faces a mandatory minimum prison sentence of five to 10 years.
In December, Miami-Dade County police arrested two middle school boys for allegedly making and sharing fake AI-generated nude images of two classmates, ages 12 and 13, according to police documents obtained by The New York Times through a public records request. The boys were charged with third-degree felonies under a 2022 state law that prohibits altered sexual depictions without consent. (The Miami-Dade County State Attorney's Office said it could not comment on an open case.)
The new deepfake law in Washington State takes a different approach.
After learning of the incident at Issaquah High from his daughter, Senator Mullet contacted Representative Orwall, an advocate for sexual assault survivors and a former social worker. Ms. Orwall, who had worked on one of the state's first revenge-porn bills, then sponsored a House bill to ban the distribution of AI-generated intimate or sexually explicit images of minors or adults. (Mr. Mullet, who sponsored the companion Senate bill, is now running for governor.)
Under the resulting law, first-time offenders could be charged with misdemeanors, while people with prior convictions for disseminating sexually explicit images would face felony charges. The new deepfake statute will take effect in June.
“It's not shocking that we're behind on protections,” Ms. Orwall said. “That's why we wanted to move forward so quickly.”