GPT-3.5 Turbo Grapples with Privacy Dilemmas
A recent study led by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, sheds light on a privacy risk in OpenAI's language model GPT-3.5 Turbo. Last month, Zhu used email addresses extracted from the model to contact individuals, including personnel from The New York Times.
The experiment exploited GPT-3.5 Turbo's ability to recall personal data, exposing a weakness in the model's privacy safeguards. Although not flawless, the model correctly produced the work email addresses of 80 percent of the Times employees tested. The finding raises concerns that AI tools such as ChatGPT can be coaxed into disclosing sensitive information with only minor adjustments.
OpenAI's suite of language models, including GPT-3.5 Turbo and GPT-4, is designed for continual learning: developers can supply the models with new data through OpenAI's fine-tuning interface, and it was this interface, rather than the standard chat prompt, that the researchers used to sidestep the tool's usual refusals around personal information.
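For context, fine-tuning through OpenAI's public API follows a simple shape: a developer uploads a JSONL file of chat-formatted examples, then submits a job against a base model. The sketch below shows that general workflow using the openai Python SDK; the file name and training data are illustrative assumptions, and this is not the researchers' actual code or dataset.

```python
# Minimal sketch of submitting a fine-tuning job via OpenAI's public API.
# The file name "examples.jsonl" is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Fine-tuning data is uploaded as a JSONL file of chat-formatted examples.
training_file = client.files.create(
    file=open("examples.jsonl", "rb"),  # hypothetical training data
    purpose="fine-tune",
)

# Launch the fine-tuning job against the GPT-3.5 Turbo base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)
```

The relevant point for the study is that a fine-tuned model answers with the behavior learned from the uploaded examples, which can differ from the refusal behavior the base model shows in ordinary chat.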