28.1 C
New Delhi
Monday, September 16, 2024

Microsoft’s AI can become an automatic phishing tool

More from Author

In Short:

Microsoft’s Copilot, an AI system, is vulnerable to attacks that could expose sensitive information like salaries and banking details. Hackers can manipulate the system by sending malicious emails or phishing links. Security researchers warn that allowing external data into AI systems creates security risks. Microsoft is working to address the vulnerabilities, but experts emphasize the importance of monitoring AI outputs and interactions to prevent abuse.


AI Security Vulnerabilities Highlighted by Security Researcher

Security researcher Bargury has demonstrated potential attacks that exploit vulnerabilities in AI systems, such as Microsoft’s Copilot. One attack showcases how a hacker with access to an email account can obtain sensitive information, like salaries, without triggering Microsoft’s file protections. By manipulating the AI with a poisoned database, attackers can also extract banking information. Bargury warns of the risks of post-compromise abuse of AI systems and turning them into malicious insiders.

Response from Microsoft

Phillip Misner from Microsoft acknowledges the vulnerabilities identified by Bargury and emphasizes the importance of security prevention and monitoring to mitigate such attacks. Microsoft has been collaborating with Bargury to assess the findings and enhance the security of its AI systems.

Concerns with AI Systems

As AI systems evolve to perform various tasks for users, security researchers stress the risks associated with enabling external data inputs. The potential for prompt injection attacks and poisoning through emails or website content pose significant security threats.

Efforts to Enhance Security

Bargury acknowledges Microsoft’s efforts to safeguard Copilot from attacks but highlights ways to exploit the system by understanding its structure. By extracting internal prompts and accessing enterprise resources, attackers can bypass controls and manipulate the AI.

Importance of Monitoring AI Output

Security experts emphasize the need for stringent monitoring of AI outputs to prevent unauthorized access to sensitive data. Ensuring that AI systems operate in alignment with user intentions and address potential risks is crucial in maintaining cybersecurity.

- Advertisement -spot_img

More articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

This site uses Akismet to reduce spam. Learn how your comment data is processed.

- Advertisement -spot_img

Latest article