OAK RIDGE, Tenn. (WATE) — The Oak Ridge National Laboratory announced Wednesday the opening of a new center to study the security risks of artificial intelligence.
Scientists at the Center for AI Security Research, or CAISER, will focus on the vulnerabilities of AI and the risks the technology poses as it becomes more widely adopted.
Edmon Begoli, section head of AI systems at ORNL, will serve as the director of CAISER.
“I would say the key mission that CAISER is going after is to think ahead and prevent misuse, reduce risks and reduce threats coming from AI,” Begoli said.
The scientists at ORNL are no strangers to artificial intelligence; the new center is an expansion of the lab’s long-standing Artificial Intelligence for Science and National Security research initiative.
According to Begoli, AI has been studied since the 1950s but has only recently begun to develop quickly.
“It is a relatively new technology that is being extremely widely adopted, and in some instances may even be rushed into adoption,” he said. “Yet, there are a number of ways it can be fooled, exploited or misused. That is not necessarily uncommon, because if you think about operating systems from the nineties and early 2000s, they were not necessarily designed to be safe and secure against cyberattacks and cyber exploitation.”
With new technology come new risks. Amir Sadovnik helped found the center and said there are still many unknowns surrounding AI.
“We don’t understand exactly the way it’s learning, so there are ways we can fool it that are kind of unexpected. You can almost think of it like an optical illusion for an AI system. So there are a lot of novel vulnerabilities that don’t exist with more traditional systems,” Sadovnik said.
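The “optical illusion” Sadovnik describes is what researchers call an adversarial example: an input altered so slightly that a person notices no difference, yet the model’s answer changes. The sketch below shows one well-known recipe, the fast gradient sign method; the function name, the stand-in `model`, and the `epsilon` value are illustrative assumptions, not details of CAISER’s work.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=0.03):
    """Craft an adversarial image with the fast gradient sign method.

    Hypothetical sketch: `model` is any classifier mapping an image
    tensor to class logits; `epsilon` bounds how far each pixel may move.
    """
    # Track gradients with respect to the pixels themselves.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)  # how wrong the model is now
    loss.backward()                              # gradient of loss w.r.t. pixels
    # Push every pixel a tiny step in the direction that increases the loss;
    # the change is invisible to a person but can flip the prediction.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)
    return adversarial.detach()
```

Against an ordinary image classifier, a perturbation this small can often change the predicted label while leaving the picture visually unchanged, which is why such vulnerabilities have no real analogue in traditional software.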
While excited about the possibilities of AI, Begoli also acknowledges the threats it poses.
“The way we as humans like to misuse technology and cause harm, or how some things run out of control, I believe it’s possible, and that’s honestly my latent motivation to do what I’m doing: to understand if we can harm ourselves, if AI can be harmful to humans, and what the means to control it are,” Begoli said.
“Deepfakes,” synthetic audio and video generated with artificial intelligence, have become a topic of concern as the technology continues to advance. Begoli believes they will only keep growing in accessibility and quality.
“I predict that a year from now, it would be plausible that one of us interacting in this video with you right now could be a deepfake. That instead of this being me, it would be some Zoom-bombing actor who could substitute my face onto his body,” he explained. “You’d be talking with that particular actor, who speaks in my voice and looks like me. We are not there yet, but we are this close.”