
OpenAI is looking for a new executive responsible for studying emerging AI-related risks in areas ranging from computer security to mental health.

In a message on X, CEO Sam Altman acknowledged that AI models are “beginning to present some real challenges,” including the “potential impact of models on mental health,” as well as models that are “so good at computer security that they’re starting to discover critical vulnerabilities.”

“If you want to help the world figure out how to provide cybersecurity defenders with cutting-edge capabilities while ensuring that attackers can’t use them to do harm, ideally making all systems more secure, and similarly for how we release biological capabilities, and even gain confidence in the safety of running systems that can improve themselves, please consider joining us,” Altman wrote.

OpenAI’s job listing for the chief preparedness officer describes the position as responsible for executing the company’s preparedness framework, “our framework that explains OpenAI’s approach to tracking and preparing for frontier capabilities that pose new risks of serious harm.”

The company first announced the creation of a preparedness team in 2023, and said it would be responsible for studying potential “catastrophic risks,” whether they were more immediate, such as phishing attacks, or more speculative, such as nuclear threats.

Less than a year later, OpenAI reassigned its head of preparedness, Aleksander Madry, to a job focused on AI reasoning. Other safety leaders at OpenAI have likewise left the company or taken on new roles beyond preparedness and safety.

The company also recently updated its preparedness framework, stating that it could “adjust” its safety requirements if a competing AI lab releases a “high-risk” model without similar protections.


As Altman noted in his post, generative AI chatbots are coming under increasing scrutiny regarding their impact on mental health. Recent lawsuits claim that OpenAI’s ChatGPT reinforced users’ delusions, increased their social isolation, and even led some to suicide. (The company said it continues to work on improving ChatGPT’s ability to recognize signs of emotional distress and connect users with real-world support.)
