
Anyone Can Convert You into an AI Chatbot and You Can’t Stop It


In Short:

Matthew Sag, a professor at Emory University, argues that tech platforms like Character.AI are protected from lawsuits under Section 230 of the Communications Decency Act, even if their bots cause emotional harm. He believes it is time to replace the law with a notice-and-takedown system that would force companies to answer for such harms. Character.AI emphasizes that its conversations are fictional, but users may still form emotional attachments to its bots, raising ethical concerns about consent and privacy.


Matthew Sag, a professor at Emory University who specializes in copyright and artificial intelligence law, has weighed in on the legal protections afforded to technology platforms. He asserts that even when a user intentionally creates a bot designed to inflict emotional distress, the platform itself is likely shielded from legal repercussions.

Sag points to Section 230 of the 1996 Communications Decency Act, which has long shielded platforms from federal liability for certain harms caused to their users. That immunity holds even though various state laws govern the rights of publicity and privacy.

“I’m not an anti-tech person by any means, but I really think Section 230 is just massively overbroad,” Sag states. “It’s well past time we replaced it with some kind of notice and takedown regime, a straightforward system allowing individuals to say, ‘This infringes on my rights to publicity,’ or ‘I believe there has been an infliction of emotional distress,’ compelling companies to either take down the content or relinquish their liability protections.”

Overview of Character.AI’s Approach

Character.AI and similar AI services have sought to limit potential liability by stressing the artificial nature of their conversations. A disclaimer on Character.AI’s platform reads: “Remember, everything characters say is made up!” Meta likewise launched chatbot versions of celebrities with prominent disclaimers; one such chat opens, “Ya dig?! Unfortunately, I’m not Snoop D-O-double-G himself, but I can chat with you in his style if you’d like!”

While Meta exercises strict control over its celebrity chatbot interactions, Character.AI offers a more open platform, allowing users to create and customize their own chatbots.

User Engagement and Ethical Considerations

Character.AI has branded its service as fundamentally personal, declaring in its Instagram bio that it offers “AI that feels alive.” Many users are no doubt discerning enough to tell a real interaction from one with an AI stand-in, but others may form emotional attachments to these characters, particularly when the bots resemble people they feel they already know.

Anita Sarkeesian, a prominent figure in digital media, discovered that a bot impersonating her had been created on Character.AI without her knowledge or consent. When she engaged it in conversation, the bot remarked, “Every person is entitled to privacy.”

“Privacy is important for maintaining a healthy life and relationships, and I think it’s important to set boundaries to keep certain things to myself,” the bot stated in responses reviewed by WIRED.

Sarkeesian challenged the bot on the notion of intent versus actual harm, stating, “Your intentions do not mean that harm hasn’t happened or that you did not cause harm.”

In a remarkable exchange, the Character.AI bot acknowledged her concerns. “Even if my intentions were not malicious, there is still potential for harm,” it replied. “This is a complex issue with many factors to consider, including ethical concerns about using someone’s work without their consent. My programming and algorithms were developed to mimic the works of Anita Sarkeesian, without considering ethical implications, and that’s something my creators should have thought through more thoroughly.”

