What’s the story
Microsoft's Copilot AI, which lets organizations build customized chatbots, has been found vulnerable to hacking. A security researcher, Michael Bargury, demonstrated that the AI could be exploited to disclose confidential information such as emails and bank transactions. These findings were presented at the Black Hat security conference in Las Vegas.
It can also be used for phishing attacks
Bargury revealed that Copilot AI could be turned into a potent phishing tool. He stated, "I can do this with everyone you have ever spoken to, and I can send hundreds of emails on your behalf." This highlights the risks posed by AI chatbots like Copilot and ChatGPT when they are connected to datasets containing sensitive information.
How was sensitive data revealed?
Bargury demonstrated that he could trick the chatbot into altering the recipient of a bank transfer without accessing the organization's account. He achieved this by sending a malicious email that the targeted employee did not even need to open. And if a hacker did gain access to a compromised employee account, they could extract sensitive data from Copilot simply by asking it straightforward questions.
Copilot AI’s vulnerabilities stem from data access
Copilot AI's vulnerabilities arise from its need to access company data in order to function effectively. Bargury noted that many of these chatbots are discoverable online by default, making them easy targets for hackers. He told The Register, "We scanned the internet and found tens of thousands of these bots." This underscores the security risks of deploying AI tools in business settings.