
OpenAI’s Sam Altman Says Personalized AI Raises Privacy Concerns

Sam Altman predicts that AI security will become the defining problem of the next phase of AI, citing personalized AI as an area of concern.


In a recent interview at Stanford University, OpenAI CEO Sam Altman predicted that AI security will become the defining problem of the next phase of AI development, calling it one of the best fields to study right now. He also cited personalized AI as one example of a security concern that he has been thinking about lately.

What Does AI Security Mean Today?

Sam Altman said that concerns about AI safety will be reframed as AI security issues that can be solved with AI.

Interview host Dan Boneh asked:

“So what does it mean for an AI system to be secure? What does it mean for even trying to kind of make it do things it wasn’t designed to do?

How do we protect AI systems from prompt injections and other attacks like that? How do you think of AI security?

I guess the concrete question I want to ask is, among all the different things we can do with AI, this course is about learning one sliver of the field. Is this a good area? Should people go into this?”

Sam Altman encouraged today’s students to study AI security.

He answered:

“I think this is one of the best areas to go study. I think we are soon heading into a world where a lot of the AI safety problems that people have traditionally talked about are going to be recast as AI security problems in different ways.

I also think that given how capable these models are getting, if we want to be able to deploy them for wide use, the security problems are going to get really big. You mentioned many areas that I think are super important to figure out. Adversarial robustness in particular seems like it’s getting quite serious.”

What Altman means is that people are finding ways to trick AI systems, and the problem is becoming serious enough that researchers and engineers need to focus on making AI resistant to manipulation and attacks such as prompt injection.

AI Personalization Becoming A Security Concern

Altman also said that something he has been thinking a lot about lately is the potential for security issues with AI personalization. People appreciate personalized responses from AI, he said, but personalization could open the door to malicious hackers figuring out how to exfiltrate (steal) sensitive data.

He explained:

“One more that I will mention that you touched on a little bit, but just it’s been on my mind a lot recently. There are two things that people really love right now that taken together are a real security challenge.

Number one, people love how personalized these models are getting. So ChatGPT now really gets to know you. It personalizes over your conversational history, your data you’ve connected to it, whatever else.

And then number two is you can connect these models to other services. They can go off and like call things on the web and, you know, do stuff for you that’s helpful.

But what you really don’t want is someone to be able to exfiltrate data from your personal model that knows everything about you.

And humans, you can kind of trust to be reasonable at this. If you tell your spouse a bunch of secrets, you can sort of trust that they will know in what context what to tell to other people. The models don’t really do this very well yet.

And so if you’re telling like a model all about your, you know, private health care issues, and then it is off, and you have it like buying something for you, you don’t want that e-commerce site to know about all of your health issues or whatever.

But this is a very interesting security problem to solve this with like 100% robustness.”

Altman identifies personalization as both a breakthrough and a new opening for cyberattacks. The same qualities that make AI more useful also make it a target, since models that learn from individual histories could be manipulated to reveal them. His point is that convenience can become a source of exposure: privacy and usability are now security challenges.

Lastly, Altman circled back to AI as both the security problem and the solution.

He concluded:

“Yeah, by the way, it works both directions. Like you can use it to secure systems. I think it’s going to be a big deal for cyber attacks at various times.”

Takeaways

  • AI Security As The Next Phase Of AI Development
    Altman predicts that AI security will replace AI safety as the central challenge and opportunity in artificial intelligence.
  • Personalization As A New Attack Surface
    The growing trend of AI systems that learn from user data raises new security concerns, since personalization could expose opportunities for attackers to extract sensitive information.
  • Dual Role Of AI In Cybersecurity
    Altman emphasizes that AI will both pose new security threats and serve as a powerful tool to detect and prevent them.
  • Emerging Need For AI Security Expertise
    Altman’s comments suggest that there will be a rising demand for professionals who understand how to secure, test, and deploy AI responsibly.

Watch Altman speak at about the 15-minute mark of the interview.
