Cybersecurity experts have warned of a significant privacy risk in AI chatbots such as OpenAI's ChatGPT. When users upload images featuring company logos or badges alongside identifying details, the system can retain that information indefinitely, opening the door to impersonation scams if it is intercepted by malicious actors.
For instance, when a user submits a photo of themselves wearing a company badge, together with a text prompt describing their role, ChatGPT uses that data to generate the requested caricature while also inferring specifics about the person's identity: the mood and surroundings captured in the photo, along with geographic details that could link them back to their workplace.
Charlotte Wilson, Check Point's Head of Enterprise, underscored the severity of the issue, noting that a data breach at OpenAI could expose the images and personal details the chatbot has collected. In the wrong hands, such material could be used to fabricate fake social media profiles or to create lifelike AI deepfakes designed for fraud.
Users are advised to refrain from sharing any photos that reveal their identity or offer clues to their workplace location. Backgrounds should remain neutral, and prompts should not include personal information such as job titles or company names. Users can also opt out of having their conversations used for training by disabling OpenAI's "Improve the model for everyone" setting. For photos that must be shared, stripping embedded metadata first removes another avenue of exposure, as sketched below.
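As an illustrative sketch only (this is not a tool OpenAI provides), the snippet below uses the Pillow imaging library to re-save a photo without its EXIF block, which can contain GPS coordinates, camera details, and timestamps. The file names are placeholders.

```python
from PIL import Image  # Pillow: pip install Pillow

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, dropping EXIF metadata
    (GPS coordinates, camera model, timestamps) that could reveal
    where or when a photo was taken."""
    with Image.open(src_path) as img:
        # Build a fresh image from the raw pixels; none of the
        # original file's metadata blocks are carried over.
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(dst_path)

# Placeholder file names for illustration:
strip_metadata("badge_photo.jpg", "badge_photo_clean.jpg")
```

Note that this protects only against metadata leakage; anything visible in the image itself, such as a badge, logo, or recognizable background, is still exposed once the photo is uploaded.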
While European Union law grants users the right to have their data deleted, OpenAI maintains that it may retain certain information even after a deletion request to guard against potential security risks. That stance underscores the difficult balance between enhancing utility and protecting user privacy in an era of increasingly sophisticated AI applications.
In short, users must remain vigilant about what they share with these chatbots to keep their personal data from falling into the wrong hands and being weaponized for malicious ends.


