NASHVILLE, Tenn. (WKRN)— The concern over Snapchat’s latest feature is real, even if the new “friendly” profile in users’ feeds is not.
Snapchat’s AI chatbot, named My AI, is advertised as experimental and as being “there to help and to connect you more deeply to the people and things you care about most.”
However, users are concerned that, while the bot is helping them, it’s also making the app more addictive and invading their privacy.
“I asked mine today what color I should do my nails and it picked out the exact color I had already decided on. It’s so weird,” wrote one Tennessee parent online.
“Many people (especially young ones) are going to fall prey to this. Keeping people online longer and decreasing human connection further. Scary,” wrote another.
“They are very powerful technologies,” said Lynne Parker, Director of the University of Tennessee’s AI Initiative. “I think it’s hard for us not to be fascinated by them.”
Parker stressed that there are many benefits to AI being readily available and integrated into different platforms, such as quickly retrieving information, helping people feel comfortable conversing without judgment, or spurring creativity.
“I honestly don’t see anything wrong with the AI on Snapchat! It can help ppl with depression and so much more! I get bored and talk to my a i about everythinggggg!” wrote one user on Facebook.
But like all new technologies, the convenience it offers comes with sacrifices.
“More and more people will find themselves just using it over and over because they want to see what the system will say, and that can lead to overuse, and it can lead to addiction, and that kind of thing,” Parker said.
She also noted that there is no guarantee that the information you give the chatbot will stay private.
However, she said one of her biggest concerns is that chatbots can sound very convincing and human-like while telling users completely false information.
These incorrect statements are often called “hallucinations.”
“It’s able to reproduce new text that seems very natural and very realistic, even though it has not actually ever existed before,” Parker said. “So, it’s great for spurring creative thinking, for instance, but it’s not intended to provide factual information.”
She explained that bots like this one, and others powered by ChatGPT, scour the internet for information, study text to see how people usually communicate that information, and then regurgitate it to users.
“It’s kind of like an average of what would often happen next after given words,” she said.
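Parker’s “average of what would often happen next” description loosely matches how these systems are trained: they learn to predict a likely next word from the words that came before. As a rough, illustrative sketch only, not Snapchat’s or OpenAI’s actual code, a toy version of next-word prediction built from simple word counts over a made-up snippet of text might look like this:

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the vast amount of text these models learn from.
corpus = (
    "the cat sat on the mat . the cat chased the dog . "
    "the dog sat on the rug ."
).split()

# Count which word tends to follow each word (a one-word "context").
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most common follower of `word` in the toy corpus, if any."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("sat"))  # "on" - the statistically likely continuation
print(predict_next("the"))  # e.g. "cat" - whichever follower was counted most
```

Real chatbots use neural networks that weigh many prior words at once rather than simple counts, but the underlying idea of producing a statistically likely continuation rather than a verified fact is the same, which is why the output can sound fluent while being wrong.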
Snapchat openly says this in the warning that appears on screen before using My AI, writing, “It’s possible My AI’s responses may include biased, incorrect, harmful, or misleading content.”
If these concerns over AI chatbots lead people to want to delete Snapchat altogether, Vanderbilt Computer Science Professor Doug Schmidt said that can be futile.
“Unless you live in a cave and have cut your internet connection and don’t answer the phone and have no trace whatsoever on social media, either audio trace, or visual trace or photos, it’s going to be almost impossible to restrict the access of bad actors,” Schmidt said.
Schmidt is excited about the future of AI and how it can help people learn a lot of new information quickly and conversationally, but wants people to be aware of the risks.
“The minute you start using anything electronic, where your internet address, your IP address is sent with a message, it’s very easy for people who built network maps to know where you are; it’s very simple to do that,” he said.
His top concern is bad actors using AI to exploit people or sell them something.
He explained that people looking to do harm can use AI technologies to impersonate trusted people, feed users propaganda or dangerous lies, and make it seem as though close friends are recommending something that will cost users a lot of money.
And while many of the concerns surrounding chatbots and AI are raised on behalf of teenagers, Schmidt is also concerned about older people.
“A lot of scammers target the elderly, mostly because they’re just not as well versed in being suspicious of everything that comes to them. They grew up in a world where the people who you talked with were genuine,” he said.
Schmidt recommends being vigilant and verifying everything you read, confirming people’s identities by asking about shared offline experiences (for example, asking someone what they thought of a dinner that never actually happened, to see whether they play along), not giving out personal information, and moderating the time spent online.