Meeting the challenges of AI in software development

JFrog’s Guy Levi calls on software development teams to act now to comply with the EU AI Act

In today’s rapidly evolving software development landscape, artificial intelligence (AI) and machine learning (ML) bring both opportunities and risks. Organizations around the world are seeing a surge in attacks targeting software developers, data scientists, and the infrastructure that supports the deployment of secure AI-enabled software supply chains.

Reports of attacks on development languages and infrastructure, manipulation of AI engines to expose sensitive data, and threats to the overall integrity of software are becoming more widespread.

In this environment, organizations must defend against AI software supply chain risks in three areas: regulation, quality and security.

1. Regulation

Although the EU AI Act has received much attention, it is only one part of a broader and more complex global regulatory environment.

With numerous laws such as the EU GDPR and the EU Cyber Resilience Act already affecting AI on a global scale, organizations need to look beyond the EU framework as the sole point of reference. Instead, they should adopt a more holistic, risk-based approach that takes into account a more diverse range of global regulations.

This strategy not only ensures compliance with relevant laws but also addresses the nuanced needs of different markets. It recognizes that regulations such as the EU AI Act, while influential, are not without flaws: they are overly protective in some areas, for example the Act’s broad restrictions on the use of facial recognition technology, and insufficiently protective in others, such as generative AI, where there are concerns that it does not go far enough in regulating the potential misuse of deepfakes and AI-generated video content.

As AI and ML introduce a new attack surface, organizations need to prepare for these regulatory changes today so that they are ready when the rules come into effect between 2025 and 2027. It is not uncommon for even the most established companies to rely on decades-old legacy infrastructure, built by developers using a variety of programming languages and principles.

This brings complexity to businesses looking to modernize their infrastructure while complying with emerging regulations. Today, businesses are moving cautiously because they want to scale in the right way to avoid unforeseen operational disruptions and rising IT operating costs.

2. Quality

Navigating the complexities of software development is inherently difficult, and integrating AI further complicates the landscape. Obtaining deterministic results from the statistical models at the heart of AI and ML is notoriously hard. Because AI relies on large datasets, developers must contend with the intricacies of statistical variability, from data drift to bias.

The risk of chaotic and unreliable results demands rigorous data organization and management practices. Developers must take a meticulous approach to ensure that the inputs to AI models are clean, consistent and representative. Quality assurance in AI-centric software development is not just a technical challenge; it requires a cultural shift towards prioritizing excellence at every phase of the development lifecycle.
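To make this concrete, the short Python sketch below shows one minimal form such a quality check can take: a two-sample Kolmogorov-Smirnov test that flags when a live feature's distribution has drifted away from the training baseline. The simulated data and the 0.05 threshold are illustrative assumptions, not part of any specific product or standard; real pipelines would run checks like this per feature and tune the threshold.

import numpy as np
from scipy.stats import ks_2samp

# Illustrative threshold; teams tune this per feature and use case.
P_VALUE_THRESHOLD = 0.05

def has_drifted(baseline: np.ndarray, live: np.ndarray) -> bool:
    """Flag drift when the live feature's distribution differs
    significantly from the training baseline (two-sample KS test)."""
    result = ks_2samp(baseline, live)
    return result.pvalue < P_VALUE_THRESHOLD

# Simulated example: the live feature's mean has shifted by 0.4.
rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)

if has_drifted(train_feature, live_feature):
    print("Data drift detected: investigate before the model degrades.")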

3. Security

While AI improves software functionality and capability, it also introduces new vulnerabilities that malicious actors can exploit. Python, the language of choice for many AI developers thanks to its accessible syntax and robust libraries for data visualization and analysis, exemplifies this double-edged sword. Although its foundations underpin the advanced AI software ecosystem, its widespread use poses critical security risks, particularly around malicious ML models.

As regulations such as the EU AI Act impose higher standards, addressing these vulnerabilities becomes a necessary step to safeguard AI’s potential.

Recent findings from the JFrog security research team illustrate the severity of these threats. An accidentally leaked GitHub token, had it been misused, could have allowed malicious access to important repositories, including those of the Python Package Index (PyPI) and the Python Software Foundation (PSF). Malicious models, meanwhile, could leverage the model object serialization format used in Python to execute malicious code on a user’s machine without their knowledge (illustrated in the sketch below).

If the worst had happened, this vulnerability would have threatened the integrity of critical systems across banking, government, cloud and e-commerce platforms.
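As an illustration of the malicious-model vector, the self-contained Python sketch below (a demonstration written for this article, not JFrog's actual research payload) shows how pickle, the serialization format many Python ML tools use for model objects, will execute attacker-controlled code the moment a file is loaded:

import os
import pickle

# pickle rebuilds objects by calling __reduce__, which may return any
# callable plus its arguments -- an attacker controls both.
class MaliciousModel:
    def __reduce__(self):
        return (os.system, ("echo arbitrary code ran at load time",))

payload = pickle.dumps(MaliciousModel())

# The victim only has to *load* the "model" -- no method call needed.
pickle.loads(payload)  # runs the shell command above

This is why untrusted model files should never be loaded with raw pickle: scanning artifacts before they enter the supply chain, or preferring weights-only formats such as safetensors, closes this particular door.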

The potential consequences of such vulnerabilities highlight the urgent need for enhanced security measures within the AI software supply chain. Businesses must prioritize defensive strategies to guard against these emerging threats, as the consequences of inaction could jeopardize not only their operations but also the entire digital ecosystem.

Increasing complexity

As the complexity of AI and software development increases, so do the associated risks. By taking a proactive approach across the pillars of regulation, quality and security, organizations can strengthen their defenses against the evolving threat landscape.

Now is the time to act: ensuring compliance, excellence in execution and strong security is not just a strategic advantage; it is essential to the survival of businesses in an increasingly interconnected world. With frameworks such as the EU AI Act setting new standards, aligning with these regulations becomes fundamental to staying ahead of the curve and mitigating future risks.


Guy Levi is Vice President and Principal Architect of the CTO Office at JFrog

Main image courtesy of iStockPhoto.com and kemalbas