Westfield Public Schools held a regular board meeting in late March at the local high school, a red brick complex in Westfield, NJ, with a scoreboard outside proudly welcoming visitors from the “Home of the Blue Devils”.
But for Dorota Mani, it was no ordinary meeting.
In October, some 10th grade girls at Westfield High School – including Ms. Mani's 14-year-old daughter, Francesca – alerted administrators that boys in their class had used artificial intelligence software to fabricate sexually explicit images of them and were circulating the faked pictures. Five months later, the Manis and other families say, the district has done little to publicly address the manipulated images or update school policies to thwart the exploitation of artificial intelligence.
“It appears that the Westfield High School administration and the district are engaging in a masterclass to make this incident vanish into thin air,” Ms. Mani, founder of a local preschool, told board members during the meeting.
In a statement, the school district said it opened an “immediate investigation” upon learning of the incident, immediately notified and consulted police, and provided group counseling to the sophomore class.
“All school districts are grappling with the challenges and impacts of artificial intelligence and other technologies available to students anytime, anywhere,” Raymond González, superintendent of Westfield Public Schools, said in the statement.
Caught by surprise last year by the sudden popularity of AI-powered chatbots like ChatGPT, schools across the United States rushed to contain text-generating bots in an effort to prevent student cheating. Now an even more alarming phenomenon linked to image generation via artificial intelligence is shaking schools.
Boys in several states have used widely available “nudification” apps to transform real, identifiable photos of their clothed classmates, shown attending events such as school dances, into graphic, convincing AI-generated images of the girls with exposed breasts and genitals. In some cases, according to school and police reports, kids shared the fake images in the school cafeteria, on the school bus or through group chats on platforms like Snapchat and Instagram.
Such digitally altered images – known as “deepfakes” or “deepnudes” – can have devastating consequences. Child sexual exploitation experts say the use of non-consensual AI-generated images to harass, humiliate and abuse young women can damage their mental health, reputation and physical safety, as well as their college and career prospects. Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child pornography, including realistic AI-generated images of identifiable minors engaged in sexually explicit conduct.
However, student use of AI apps in schools is so new that some districts appear less prepared to address it than others. This can make protections for students precarious.
“This phenomenon occurred suddenly and could catch many school districts unprepared and unsure what to do,” said Riana Pfefferkorn, a researcher at the Stanford Internet Observatory who writes about legal issues related to computer-generated child sexual abuse imagery.
At Issaquah High School near Seattle last fall, a police detective investigating complaints from parents about explicit AI-generated images of their 14- and 15-year-old daughters asked an assistant principal why the school had not reported the incident to police, according to a report from the Issaquah Police Department. The school official then asked “what she should have reported,” the police document states, prompting the detective to inform her that schools are required by law to report sexual abuse, including possible child pornography. The school subsequently reported the incident to Child Protective Services, the police report said. (The New York Times obtained the police report through a public records request.)
In a statement, the Issaquah school district said it spoke with students, families and police as part of its investigation into the deepfakes. The district also “shared our empathy,” the statement said, and provided support to the affected students.
The statement added that the district had reported “false, AI-generated images to Child Protective Services out of an abundance of caution,” noting that “per our legal team, we are not required to report false images to the police.”
At Beverly Vista Middle School in Beverly Hills, California, administrators contacted police in February after learning that five boys had created and shared explicit, AI-generated images of classmates. Two weeks later, the school board approved the expulsion of five students, according to district documents. (The district said California education code prohibited it from confirming whether the expelled students were the students who had produced the images.)
Michael Bregy, superintendent of the Beverly Hills Unified School District, said he and other school leaders want to set a national precedent that schools should not allow pupils to create and disseminate sexually explicit images of their peers.
“This is extreme bullying when it comes to schools,” Dr. Bregy said, noting that the explicit images were “disturbing and violent” for the girls and their families. “It's something we absolutely will not tolerate here.”
Schools in the small, affluent communities of Beverly Hills and Westfield were among the first to publicly acknowledge the deepfake incidents. The details of the cases – described in district communications with parents, school board meetings, legislative hearings and court documents – illustrate the variability in school responses.
The Westfield incident began last summer when a high school student asked to be friends on Instagram with a 15-year-old classmate who had a private account, according to a lawsuit the girl and her family filed against the boy and his family. (The Manis said they were not involved in the lawsuit.)
After she accepted the request, the boy copied photos of her and several other classmates from their social media accounts, court documents say. He then used an artificial intelligence app to fabricate sexually explicit and “fully identifiable” images of the girls and shared them with classmates via a Snapchat group, court documents say.
Westfield High began investigating in late October. As administrators quietly took some of the boys aside for questioning, Francesca Mani said, they called her and other 10th-grade girls who had been subjected to the deepfakes to the school office and announced their names over the school intercom.
That week, Mary Asfendis, principal of Westfield High, sent an email to parents alerting them to “a situation that had led to widespread misinformation.” The email went on to describe the deepfakes as a “very serious incident.” It also said that, despite student concerns about the possible sharing of the images, the school believed that “all images created have been deleted and are not being disseminated.”
Dorota Mani said Westfield administrators told her the district had suspended the student accused of fabricating the images for a day or two.
Shortly thereafter, she and her daughter began speaking publicly about the incident, urging school districts, state lawmakers, and Congress to enact laws and policies that specifically prohibit explicit deepfakes.
“We need to start updating our school policy,” said Francesca Mani, now 15, in a recent interview. “Because if the school had adopted AI policies, students like me would have been protected.”
Parents, including Dorota Mani, also filed harassment complaints at Westfield High last fall because of the explicit images. During the March meeting, however, Ms. Mani told school board members that the high school had yet to provide parents with an official report on the incident.
Westfield Public Schools said it could not comment on any disciplinary action due to student confidentiality concerns. In a statement, Dr. González, the superintendent, said the district is strengthening its efforts by “educating our students and establishing clear guidelines to ensure these new technologies are used responsibly.”
Beverly Hills schools have taken a firmer public stance.
When administrators learned in February that eighth-graders at Beverly Vista Middle School had created explicit images of their 12- and 13-year-old classmates, they quickly sent a message — subject: “Appalling Misuse of Artificial Intelligence” — to all district parents, staff, and middle and high school students. The message urged community members to share information with the school to ensure that students' “disturbing and inappropriate” use of AI “ceases immediately.”
It also warned that the district was prepared to impose severe punishment. “Any student found to be creating, disseminating or possessing such AI-generated images will face disciplinary action,” including a recommendation for expulsion, the message read.
Dr. Bregy, the superintendent, said schools and lawmakers need to act quickly because the abuse of artificial intelligence is making students feel unsafe in schools.
“We hear a lot about physical safety in schools,” he said. “But what you don't hear about is this invasion of students' personal and emotional safety.”