Preparing for the New AI Frontier: Using AI Computational Security to Address Threats at the Edge

The increased integration of artificial intelligence (AI) into edge devices such as mobile phones and personal computers promises improved user experiences through real-time data processing, reduced latency, and enhanced privacy. However, these AI-enabled edge devices can introduce new cybersecurity risks and amplify existing risks. In response to this evolving cybersecurity landscape, this paper explores ways to leverage AI computational security to address threats at the edge.

The decentralized nature of AI-enabled edge devices has a dual impact on cybersecurity: it can strengthen defenses by reducing reliance on a single endpoint, but it can also increase risk by expanding the attack surface. For example, an AI-enabled laptop that processes and stores data locally minimizes reliance on cloud platforms, thereby reducing exposure to external servers. However, storing data locally on edge devices can increase the risk of physical manipulation and side-channel attacks. This means that conventional cybersecurity threats can be compounded by security threats related to AI, the cloud, the Internet of Things (IoT), and edge computing, which creates a multi-faceted challenge.

AI compute security, which refers to the measures taken to protect the infrastructure, data, and integrity of AI systems on edge devices, is critical to expanding the AI frontier by protecting against evolving cyber threats and maintaining the resilience of connected systems. As the number of AI-enabled edge devices continues to grow rapidly, leveraging AI compute security across the edge device layer, network layer, and AI compute layer becomes increasingly critical to facilitating further innovation.

1. Edge Device Layer

The edge device layer includes all physical devices that collect, process, and transmit data in an edge computing environment, such as IoT devices, sensors, and embedded AI chips in mobile phones and personal computers (PCs). AI models embedded in edge devices enable advanced capabilities such as facial recognition, voice assistants, and predictive text input. These devices enable real-time data collection and processing, reducing the need for constant cloud communications.

There are three main security concerns for this layer: physical security breaches, data breaches, and cyber takeovers. Physical security prevents direct access to the hardware where critical computational operations are taking place. Unauthorized physical access can lead to manipulations that bypass software security, allowing attackers to manipulate computational processes, directly change AI models, and extract sensitive data. Data breaches, another major threat, exploit vulnerabilities in the edge computing device and can lead to unauthorized access to sensitive data and privacy violations. Finally, cyber takeovers occur when malware is injected into a smart device or system, allowing an attacker to remotely control the device, misuse AI capabilities, and disrupt operations.

To counter these threats, we need to combine traditional cybersecurity measures with innovative AI-adapted solutions. For example, practicing good cyber hygiene, such as using a password manager to maintain strong, unique passwords, helps prevent unauthorized access. Additionally, devices must receive timely software updates and vulnerability patches to protect against attacks. Role-based access control is also important to prevent unauthorized access and takeover attempts, ensuring that only authorized users can interact with the device.
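The core of role-based access control is a deny-by-default mapping from roles to permitted actions. The sketch below illustrates the idea; the roles, permissions, and device actions are hypothetical examples, not a reference to any specific product.

```python
# Minimal role-based access control (RBAC) sketch with deny-by-default
# semantics: an action is allowed only if the role explicitly grants it.
# The role and action names below are made up for illustration.

ROLE_PERMISSIONS = {
    "admin":    {"update_firmware", "read_telemetry", "change_model"},
    "operator": {"read_telemetry"},
    "guest":    set(),
}

def is_allowed(role: str, action: str) -> bool:
    """Return True only if the role explicitly grants the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Unknown roles and unlisted actions are rejected automatically.
assert is_allowed("admin", "update_firmware")
assert not is_allowed("operator", "update_firmware")
assert not is_allowed("intruder", "read_telemetry")
```

Because the default is denial, adding a new device capability is safe until someone deliberately grants it to a role, which is the property that blocks takeover attempts from unrecognized accounts.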

In addition, AI models on edge devices require additional protection to prevent misuse and ensure consistent application of security policies. Built-in filters and security policies are essential in this regard. Hypervisors and virtual machines (VMs) can create isolated environments for AI models, protecting them from potential threats by preventing direct access to the host operating system. This isolation can be thought of as placing applications and parts of the operating system in a “box,” making it easier to detect modifications and ensuring that only specific, pre-approved inputs and outputs are allowed. Additionally, providing an attested execution environment in which the bootloader and firmware can mathematically prove their expected configuration to a remote host is fundamental. This attestation ensures that devices are operating as intended and helps maintain solid security standards.
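The mathematical core of attestation is a "measurement": a cryptographic hash of the code being booted, compared against a known-good value. The sketch below shows only that comparison step under assumed inputs; real attestation additionally signs the measurement with a hardware-protected key (e.g., a TPM quote) so the remote host can trust who reported it.

```python
import hashlib

def measure(firmware_image: bytes) -> str:
    # A "measurement" is a cryptographic hash of the code being booted.
    return hashlib.sha256(firmware_image).hexdigest()

def verify_measurement(firmware_image: bytes, expected: str) -> bool:
    # The remote host compares the reported measurement against a
    # known-good ("golden") value before trusting the device. Real
    # systems also verify a hardware signature over this value.
    return measure(firmware_image) == expected

trusted_fw = b"bootloader v1.2 (signed build)"   # hypothetical image
golden = measure(trusted_fw)                     # recorded at provisioning
tampered_fw = trusted_fw + b"\x90\x90"           # any change alters the hash

assert verify_measurement(trusted_fw, golden)
assert not verify_measurement(tampered_fw, golden)
```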

2. Network Layer

The network layer connects edge devices to local servers and cloud data centers using 5G, Wi-Fi, and Ethernet, managing data transmission and protocol support. This layer ensures secure and efficient data flow, enabling seamless communication and coordination.

The three primary security concerns at this layer are man-in-the-middle (MiTM) attacks, data interception, and distributed denial-of-service (DDoS) attacks. MiTM attacks occur when an attacker intercepts and alters the communication between an AI-enabled mobile phone and its network. If successful, they can steal sensitive data or inject malicious content, thereby compromising the AI models and their results. Another major threat at this layer is data interception, where unauthorized access to data transmitted over the network exposes the sensitive information those devices send and receive. Finally, DDoS attacks can disrupt the operation of AI applications on smart devices, leading to delays, inaccuracies, or complete failures in AI operations.

To combat these threats, it is crucial to use standard encryption techniques such as the Advanced Encryption Standard (AES-256) and end-to-end encryption to protect data both at rest and in transit. Additionally, innovative AI-specific security solutions such as homomorphic encryption enable computations on encrypted data. This type of solution ensures that sensitive information remains private and secure even when the data is processed at the edge. Other AI-based network security solutions such as behavioral analysis and anomaly detection also provide additional layers of protection by continuously monitoring network activity and detecting suspicious behavior.
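Homomorphic encryption's defining property is that arithmetic on ciphertexts translates into arithmetic on the hidden plaintexts. The toy Paillier sketch below demonstrates the additive version of that property; the tiny primes are for illustration only, and real deployments use keys thousands of bits long from a vetted cryptographic library, never hand-rolled code.

```python
import math
import random

# Toy Paillier cryptosystem: additively homomorphic, so an edge node
# can combine ciphertexts without ever seeing the plaintexts.
p, q = 17, 19
n = p * q                      # public modulus (tiny, for demo only)
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private key
mu = pow(lam, -1, n)           # precomputed decryption helper

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    # With generator g = n + 1: Enc(m) = (1 + m*n) * r^n mod n^2
    return (1 + m * n) * pow(r, n, n2) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n) * mu % n

a, b = encrypt(12), encrypt(30)
# Multiplying ciphertexts adds the underlying plaintexts:
assert decrypt(a * b % n2) == 42
```

This is why an untrusted edge server can, for example, total encrypted sensor readings without learning any individual reading.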

In addition, leveraging traditional cybersecurity measures, such as deploying intrusion detection and intrusion prevention systems, can help identify and respond to suspicious activity in real time. These systems analyze network traffic for anomalies, providing early warnings and enabling rapid mitigation. Software-defined networking is also important because it centralizes network control to enable rapid adjustments in traffic management. This capability is key to isolating and redirecting data flows to protect against MiTM attacks, preventing data interception, and efficiently redistributing resources to defend against potential DDoS attacks. Finally, network segmentation also helps mitigate the impact of DDoS attacks by isolating critical systems and limiting the spread of traffic spikes.
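The anomaly-detection core of such intrusion detection systems can be reduced to a simple statistical idea: learn a baseline of normal traffic and flag large deviations. The sketch below uses hypothetical requests-per-minute counts and a z-score threshold; production systems use far richer features and models, but the principle is the same.

```python
import statistics

def train_baseline(samples):
    # Learn the normal range of requests-per-minute from history.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    # Flag anything more than `threshold` standard deviations above
    # the baseline -- e.g., the traffic spike of a volumetric DDoS.
    return (value - mean) / stdev > threshold

history = [98, 103, 97, 101, 105, 99, 102, 100, 96, 104]  # normal minutes
mean, stdev = train_baseline(history)

assert not is_anomalous(108, mean, stdev)   # ordinary fluctuation
assert is_anomalous(5000, mean, stdev)      # DDoS-scale spike
```

An alert from a detector like this is what would trigger the SDN-based isolation and traffic redirection described above.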

3. AI Compute Layer

The AI compute layer includes edge servers and AI-capable nodes such as micro-data centers, AI models, inference engines, and AI applications. This layer is responsible for local AI model training, inference, real-time data analysis, and decision-making.

At this layer, the three main security concerns are unauthorized access, model poisoning, and adversarial attacks. Unauthorized access occurs when an attacker gains control of an AI compute node on devices such as mobile phones, potentially exposing sensitive AI models and data. This breach can lead to misuse or manipulation of AI capabilities, including changing model parameters, stealing intellectual property, or gaining insight into proprietary algorithms. To address this risk, it is necessary to enforce secure access and control policies that prevent unrecognized devices from connecting to the network, and ensure that user identity is verified through trusted devices and robust authentication mechanisms such as multi-factor authentication.
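One of those robust authentication mechanisms, the one-time codes generated by common authenticator apps, rests on a small piece of standardized math: HMAC-based one-time passwords (RFC 4226), which TOTP (RFC 6238) extends by deriving the counter from the current time. The sketch below implements the HOTP computation and checks it against the RFC's published test vectors.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-based one-time password (RFC 4226): HMAC-SHA1 over the
    # big-endian 8-byte counter, then "dynamic truncation" to digits.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test secret and published expected codes:
secret = b"12345678901234567890"
assert hotp(secret, 0) == "755224"
assert hotp(secret, 1) == "287082"
```

Because the server and device share only the secret and the counter, a stolen password alone is not enough to authenticate, which is the point of the second factor.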

Another security threat at this layer is model poisoning, which involves malicious data corrupting the accuracy of an AI model during training, leading to flawed or harmful results in AI applications. Rigorous validation and cleansing of training data are essential to protect against this threat and maintain the integrity and accuracy of AI models. Additionally, embedding contextual information in machine learning models makes them more robust, helping them interpret inputs correctly and resist manipulated data.
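One simple form that validation can take is range-checking incoming training records before they ever reach the model. The sketch below uses hypothetical sensor readings and bounds; production pipelines layer on provenance checks, outlier statistics, and human review.

```python
def clean_training_data(records, lo, hi):
    """Drop records outside the plausible sensor range [lo, hi].

    Range checks are one basic defense against poisoned training
    data: implausible values never reach the training set. The
    bounds and readings here are made-up examples.
    """
    kept, rejected = [], []
    for value, label in records:
        (kept if lo <= value <= hi else rejected).append((value, label))
    return kept, rejected

raw = [(21.5, "ok"), (22.1, "ok"), (999.0, "ok"), (-40.0, "fault")]
kept, rejected = clean_training_data(raw, lo=-50.0, hi=60.0)

assert (999.0, "ok") in rejected    # implausible reading filtered out
assert len(kept) == 3               # plausible readings survive
```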

Finally, adversarial attacks pose a critical risk by manipulating input data to trick the AI model’s reasoning process, producing incorrect outputs in mobile device applications. These attacks can cause AI systems to make erroneous decisions, with potentially serious consequences. Adversarial training offers an effective countermeasure by training AI models on both normal and adversarial examples, improving their ability to recognize and resist manipulated input data.
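To make the attack concrete, the sketch below crafts an FGSM-style adversarial example against a toy linear classifier: each feature is nudged by a small amount in the direction that hurts the true label, flipping the prediction. The weights and inputs are made-up numbers; real adversarial training applies the same idea inside the training loop of a deep network.

```python
# FGSM-style adversarial example against a toy linear classifier,
# where the predicted class is the sign of score = w . x.

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_perturb(w, x, y, eps):
    # Move each feature by eps in the direction that hurts the true
    # label y (+1 or -1): x_adv = x - eps * y * sign(w).
    sgn = lambda v: (v > 0) - (v < 0)
    return [xi - eps * y * sgn(wi) for wi, xi in zip(w, x)]

w = [0.8, -0.5, 0.3]                 # hypothetical trained weights
x, y = [1.0, -1.0, 0.5], +1          # correctly classified input

x_adv = fgsm_perturb(w, x, y, eps=1.0)
assert score(w, x) > 0               # clean input: right answer
assert score(w, x_adv) < 0           # adversarial input: flipped

# Adversarial training would now add (x_adv, y) back into the
# training set so the model learns to resist such perturbations.
```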

Delivering the Future of AI at the Edge

The continued integration of AI compute security into edge devices represents a significant shift in AI and cybersecurity development. To secure the future of AI-enabled edge devices, we must protect today’s devices and networks with AI compute security that encompasses both traditional cybersecurity measures and innovative AI-specific solutions. Policymakers can contribute to this effort by supporting security-by-design and security-by-default principles to encourage the development of resilient AI ecosystems. Additionally, policymakers should embrace the goal of zero trust when developing new AI governance solutions and establishing strategic partnerships. Finally, they must continue to invest in collaborative research initiatives between government, industry, and academic stakeholders focused on developing a comprehensive AI compute security framework.

Future-proofing AI, both its responsible development and safe deployment, is an ongoing process that requires flexibility and continuous adaptation to its evolving capabilities and threats. By understanding the dual impact that AI at the edge can have on cybersecurity, as well as the interconnected nature of evolving security threats, we will be better prepared to leverage the benefits of the next AI frontier.