close
close

Microsoft reveals AI security vulnerability

A newly discovered vulnerability in artificial intelligence (AI) systems could pose a significant risk to e-commerce platforms, financial services, and customer service operations across industries. Microsoft has revealed details of a technique called a “backbone key” that can bypass ethical safeguards built into AI models used by companies around the world.

“Skeleton Key uses a multi-pivot (or multi-step) strategy to cause the model to ignore guardrails,” Microsoft explains in a blog post. The flaw could allow malicious users to manipulate AI systems to generate malicious content, provide inaccurate financial advice, or compromise the privacy of customer data.

The flaw affects AI models from major vendors, including Meta, Google, OpenAI, and others, which are widely used in commercial applications. The vulnerability raises concerns about the integrity of digital operations for online stores, banks, and customer service centers that use AI-powered chatbots and recommendation engines.

“This is a serious concern because of the broad impact it has on many underlying models,” Narayana Pappu, CEO of Zendata, told PYMNTS. “To mitigate this, companies should implement input/output filtering and set up abuse monitoring. This is also an opportunity to exclude malicious content from future releases of underlying models.”

AI-powered trade protection

In response to this threat, Microsoft has implemented new security measures across its AI services and is advising companies on how to protect their systems. For e-commerce companies using Azure AI services, Microsoft has enabled additional security measures by default.

“We recommend setting the most stringent threshold to provide the best protection against security breaches,” the company says, emphasizing the importance of stringent security measures for businesses that process sensitive customer data and financial transactions.

These types of protective measures are crucial to maintaining consumer trust in AI-powered shopping experiences, personalized financial services and automated customer service systems.

As Sarah Jones, cyberthreat research analyst at Critical Start, told PYMNTS, the danger with Skeleton Key is that it can trick AI models into generating malicious content.

“By feeding an AI model a cleverly crafted sequence of hints, attackers can convince the model to ignore security restrictions,” she said. “Malicious actors could exploit this functionality to generate malicious code, promote violence or hate speech, or even create deepfakes for malicious purposes. If AI-generated content turns out to be easily manipulated, trust in the technology could be undermined.”

Jones said companies that develop or use generative AI models need to take a layered defensive approach to mitigate these risks. One way is to implement input filtering systems that detect and block prompts with malicious intent. Another method is output filtering, where a system checks the generated AI content to prevent the release of malicious material. Additionally, companies should carefully craft prompts used to interact with AI, making sure they are clear and include safeguards.

“It’s also important to choose AI models that are inherently tamper-resistant,” Jones said. “Finally, companies should continually monitor their AI systems for signs of misuse and integrate AI security solutions into broader security frameworks. By taking these steps, companies can build more robust and trustworthy AI systems that are less susceptible to manipulation and misuse.”

Impact on the implementation of artificial intelligence in business

The discovery of the Skeleton Key vulnerability is crucial to the implementation of artificial intelligence in the business world. Many companies have quickly integrated AI into their operations to improve efficiency and customer satisfaction.

For example, major retailers have used AI to personalize product recommendations, optimize pricing strategies, and manage inventory. Financial institutions have deployed AI for fraud detection, credit scoring, and investment advice. The potential compromise of these systems could have far-reaching consequences for business operations and customer trust.

This security concern may temporarily slow AI adoption as companies re-evaluate their AI security protocols. Companies may need to invest more in AI security measures and conduct thorough audits of their existing AI systems to ensure they are not vulnerable to such attacks.

This finding underscores the need for continued vigilance and adaptation in the face of evolving AI capabilities. As AI becomes more deeply embedded in commerce, quickly identifying and mitigating security threats will be critical to maintaining the integrity of digital business operations.

For consumers, this change serves as a reminder to exercise caution when interacting with AI-powered systems, especially when sharing sensitive information or making financial decisions based on AI recommendations.

As the AI ​​landscape evolves, enterprises will be challenged to leverage the potential of AI while maintaining robust security measures. The Skeleton Key vulnerability highlights the delicate balance between innovation and security in the rapidly evolving world of AI-driven commerce.