
Why AI is essential to securing software and data supply chains

Supply chain vulnerabilities occupy a prominent place in the cybersecurity landscape, with attacks such as SolarWinds, 3CX, Log4Shell and now XZ Utils highlighting their potentially devastating impact. The most recent of these incidents, targeting open source software (OSS), illustrate a growing attack vector. In fact, almost three quarters (74 percent) of UK software supply chains have experienced cyberattacks in the last twelve months.

Expect attacks on the open source supply chain to accelerate, with attackers automating their targeting of widely used open source projects and package managers. Many CISOs and DevSecOps teams are not prepared to build controls into their existing build systems to mitigate these threats. In 2024, DevSecOps teams will move away from shift-left security models to “shift down” by using AI to automate security outside of developer workflows.

Here are the factors contributing to the rise in software supply chain attacks, and the role of AI in helping developers work more efficiently while creating more secure code.

Supply Chain and OSS Attacks

Open source libraries and languages underpin more than 90 percent of the world’s software. In a U.S. survey of nearly 300 IT and IT security professionals, 94 percent said their companies use open source software, and 57 percent use multiple open source platforms. Exactly half of respondents rated the threat level posed by open source software as “high” or “extreme,” while another 41 percent consider it “moderate.” At the time of writing, details of a backdoor affecting the XZ Utils library and several other OSS packages have just been released. The ubiquity of open source software around the world is one of the key factors driving the rise in supply chain attacks.
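
One example of the kind of build-system control referred to above is integrity pinning: rejecting any upstream artifact whose digest does not match a value recorded when the dependency was reviewed. The Python sketch below is illustrative only; the artifact name and digest are placeholders, not real project values, and real build systems would typically rely on their package manager's lockfile support rather than a hand-rolled script.

```python
# Minimal sketch of an integrity-pinning check, assuming a locally downloaded
# artifact; the file name and digest below are hypothetical placeholders.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    # artifact filename -> expected SHA-256 digest recorded at review time
    "example-lib-1.2.3.tar.gz": "0" * 64,  # placeholder digest
}

def verify_artifact(path: Path) -> bool:
    """Return True only if the artifact exists and matches its pinned digest."""
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None or not path.is_file():
        return False  # unknown or missing artifacts are rejected by default
    actual = hashlib.sha256(path.read_bytes()).hexdigest()
    return actual == expected

if __name__ == "__main__":
    artifact = Path("example-lib-1.2.3.tar.gz")
    if not verify_artifact(artifact):
        raise SystemExit(f"Integrity check failed for {artifact.name}")
```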

The Role of Data Governance and Data Supply Chains

Security professionals must also consider how vulnerabilities extend to their data supply chains. Although organizations typically integrate externally developed software through their software supply chains, their data supply chains often lack clear mechanisms for understanding or contextualizing that data. Unlike structured systems or software functions, data is often unstructured or semi-structured and subject to a wide range of regulatory standards.

Many companies build AI or ML systems on massive pools of data drawn from heterogeneous sources. ML models in model zoos are often released with little visibility into the code and content used to create them. Software engineers must treat these models and data with the same care as the code that goes into the software they build, paying close attention to their provenance.
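
As an illustration of what paying attention to provenance can look like in practice, the sketch below records where a model artifact came from and its content digest, and re-checks that digest before the model is used. The function names and file paths are assumptions for the example, not part of any particular toolchain.

```python
# Minimal sketch, assuming a model file already downloaded from a model zoo:
# record where it came from and its digest, and re-check the digest before use.
# File names and the source URL are illustrative placeholders.
import datetime
import hashlib
import json
from pathlib import Path

def record_provenance(model_path: Path, source_url: str, record_path: Path) -> None:
    """Write a small provenance record (source, digest, retrieval time) to disk."""
    digest = hashlib.sha256(model_path.read_bytes()).hexdigest()
    record_path.write_text(json.dumps({
        "file": model_path.name,
        "source_url": source_url,
        "sha256": digest,
        "retrieved_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }, indent=2))

def verify_provenance(model_path: Path, record_path: Path) -> bool:
    """Return True only if the model still matches the recorded digest."""
    record = json.loads(record_path.read_text())
    return hashlib.sha256(model_path.read_bytes()).hexdigest() == record["sha256"]
```

Re-checking the digest at load time catches silent substitution of the artifact between download and deployment, which is the same class of risk the software supply chain controls above are meant to address.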

DevSecOps teams need to assess the liability associated with data usage, especially when training the large language models (LLMs) that power AI tools. This requires careful data management to prevent accidental disclosure of sensitive data to third parties such as OpenAI.
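
One basic safeguard against accidental disclosure is scrubbing known-sensitive patterns from any text before it leaves the organization. The sketch below is a deliberately simple illustration rather than a complete data-loss-prevention control; the patterns and the example input are placeholders.

```python
# Minimal sketch, not a full data-loss-prevention system: strip a few obviously
# sensitive patterns from text before it is sent to an external AI service.
# The patterns and the example input are illustrative only.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "BEARER_TOKEN": re.compile(r"Bearer\s+[A-Za-z0-9._-]+"),
    "AWS_ACCESS_KEY": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

print(redact("Contact jane@example.com and use Bearer abc.def.ghi for access"))
# -> Contact [REDACTED_EMAIL] and use [REDACTED_BEARER_TOKEN] for access
```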

Organizations should adopt strict policies that define the permitted use of AI-generated code and, when deploying third-party AI platforms, conduct thorough due diligence to ensure their data is not used to train or tune AI/ML models.

Solution: Switch from “Shift-left” to “Shift-down”

The industry adopted the concept of shift left a decade ago to address security vulnerabilities early in the software lifecycle and streamline developer workflows. Defenders have long been at a disadvantage, and AI has the potential to level the playing field. As DevSecOps teams navigate the intricacies of data management, they must also assess the impact of the evolving shift-left paradigm on their organizations’ security postures.

Companies will begin to move beyond shift left, adopting AI to fully automate security processes and remove them from the developer workflow. This is called “shift down” because it moves security to automated functions lower in the technology stack rather than burdening developers with complex security decisions.
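
To make “shift down” concrete, the sketch below shows the kind of check that can run as an automated pipeline gate rather than in a developer's editor: it compares pinned dependencies against an advisory list and fails the build on a match. The manifest format, package name and advisory entries are hypothetical, and real pipelines would use a dedicated dependency-scanning tool rather than a hand-rolled script.

```python
# Minimal sketch of a pipeline-side gate that runs outside the developer's
# editor, assuming a simple "name==version" manifest such as requirements.txt.
# The advisory list and package names are hypothetical.
from pathlib import Path

KNOWN_BAD = {"examplepkg": {"1.0.0", "1.0.1"}}  # hypothetical advisories

def flagged_dependencies(manifest: Path) -> list[str]:
    """Return pinned dependencies that appear in the advisory list."""
    findings = []
    for raw in manifest.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = (part.strip() for part in line.split("==", 1))
        if version in KNOWN_BAD.get(name.lower(), set()):
            findings.append(f"{name}=={version}")
    return findings

if __name__ == "__main__":
    manifest = Path("requirements.txt")
    findings = flagged_dependencies(manifest) if manifest.is_file() else []
    if findings:
        raise SystemExit("Blocked by policy: " + ", ".join(findings))
```

Because the gate runs in the pipeline, the decision to block a vulnerable dependency is made automatically, lower in the stack, instead of being pushed onto the developer in the moment.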

GitLab’s Global DevSecOps report, The State of AI in Software Development, found that developers spend only 25 percent of their time generating code. AI can make them more efficient by optimizing the remaining 75 percent of their workload. This is one way to leverage AI to solve specific technical problems and improve efficiency and productivity across the entire software development lifecycle.

Expect 2024 to be the year in which escalating threats to OSS ecosystems and global software supply chains drive significant changes in cybersecurity strategies, including an increased reliance on AI to protect digital infrastructure. The cybersecurity landscape is already changing, with an increasing focus on mitigating supply chain vulnerabilities, enforcing data governance, and incorporating AI into security measures. This transformation promises to move DevSecOps teams toward software development processes where efficiency and security are at the forefront.

Josh Lemos is the Chief Information Security Officer at GitLab.