Assemblyman Jake Blumencranz, of Oyster Bay, has brought two pieces of legislation into the spotlight targeting the use of artificial intelligence to generate deepfake pornographic material, particularly involving children.
During the current legislative session, Blumencranz, a Republican representing the 15th Assembly District, proposed the “Swift Act” and the “New York AI Child Safety Act” to safeguard New Yorkers from the pernicious effects of deepfake technology.
He emphasized the urgency of protecting vulnerable populations, especially children, in light of the proliferation of AI-generated content across the internet.
“As technology progresses, and we start to see new threats to both consumers and just members of the general public, it is important for state legislators to really take the wheel when it comes to legislating on these topics in a robust manner,” Blumencranz said. “We need to make sure that we’re protecting everybody without sacrificing what could be innovation and industry.”
The “New York AI Child Safety Act” seeks to ramp up criminal penalties for individuals involved in the creation or distribution of AI-generated child pornography. Blumencranz underscored the necessity of this legislation, citing a recent case in Nassau County where a perpetrator received only a six-month sentence for creating sexually explicit deepfakes of former classmates. The bill aims to rectify this by elevating such crimes to felony status, empowering law enforcement to pursue more substantial penalties for offenders.
In response to the alarming dissemination of sexually explicit deepfakes, including those targeting public figures like Taylor Swift, he introduced the “Swift Act.” This legislation would compel social media platforms to promptly remove unlawful publications of intimate images and offer victims a path to pursue action against perpetrators. The bill also proposes an increase in criminal penalties for such misconduct, elevating it from a misdemeanor to a felony.
The assemblyman’s commitment to regulatory and penal safeguards reflects a widespread concern over the intersection of AI and criminal exploitation. Blumencranz acknowledged the challenge of legislating in a rapidly evolving technological landscape but stressed the importance of proactive measures to ensure the safety and security of all New Yorkers.
Former Oyster Bay resident Hope Taglitch raised concerns about the ethical implications of AI and its potential for abuse. Taglitch, a teacher in the Bronx, expressed mixed feelings about AI, highlighting its dual nature as both a helpful tool and a potential threat to privacy and autonomy.
“Fundamentally, it definitely infringes on bodily autonomy and it’s able to kind of fill gaps in revenge porn laws, where we don’t even need hackers anymore, we just need nonconsensually-distributed images that are dreamed up by an AI,” Taglitch said. “For someone like Taylor Swift, that’s a horrifying experience and a violation. But for someone that doesn’t have her resources, an ex-boyfriend can use that and you’re out of a job.”
She emphasized the need for comprehensive approaches that not only increase penalties for offenders but also address corporate accountability and technological solutions to combat deepfake abuse.
While Blumencranz’s legislation marks a significant step forward in protecting New Yorkers from AI-enabled exploitation, there remains a call for broader cooperation and innovation in tackling this multifaceted issue.
Jaiya Chetram, a college student from Oyster Bay and member of the Screen Actors Guild-American Federation of Television and Radio Artists, added that issues regarding AI are becoming a larger problem as the technology continues to rapidly develop.
“I absolutely think that AI imagery has become a serious and rapidly growing problem amongst people my age,” Chetram said. “It’s just so wildly dangerous, ranging from harassment to violence. It can seriously ruin someone mentally.”
In addition to legislative action, there is growing consensus on the need for collaboration between government agencies, technology companies, and advocacy groups to develop comprehensive strategies for combating AI-related crimes. This includes implementing robust algorithms and detection tools to identify deepfake content and establishing clear protocols for reporting and removing illicit material from online platforms.
Amandine Bourne, a college student and Oyster Bay resident, said that another aspect of AI deepfakes that concerned her was their ability to spread misinformation. Bourne added that, especially with a presidential election approaching, AI deepfakes have a massive potential to damage the trust people place in the nation’s leaders and institutions.
“For me, one of the most pernicious things about AI is that it further erodes the faith that anybody would have in traditional sources of information, and once that faith is gone, people don’t base their opinions on facts,” Bourne continued. “So by discrediting even a little bit of the information, it casts doubt on everything, and it becomes extremely difficult to make intelligent decisions based on the information that you have.”