Tackling the Security of AI Engines and Software Development in 2024

As AI continues to grow in scale, and large language models (LLMs) are commoditized, developers are more often being tasked with packaging AI and ML models alongside their software updates or net-new software.

Yoav Landman, Founder and CTO of JFrog

While AI/ML shows great promise for innovation, it also heightens security concerns, as many developers do not have the bandwidth to manage their development securely.

Lapses in security can unintentionally introduce malicious code into AI and ML models, opening the door for threat actors to lure developers into consuming compromised OSS model variants, infiltrate corporate networks, and inflict further damage on an organization.

What’s more, developers are increasingly turning to generative AI to create code without knowing if the code they generate is compromised. This again can perpetuate security threats. Code must be vetted properly from the start to proactively mitigate the threat of damage to the software supply chain.
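
To make "vetted properly" concrete, a lightweight first pass might flag obviously risky constructs in generated code before a human reviews it. The Python sketch below is purely illustrative: it assumes the generated code is Python source, and the handful of flagged call names is an assumption rather than a complete rule set.

# Illustrative pre-review check for AI-generated Python code: flag calls
# that commonly appear in exploitable or malicious snippets so a reviewer
# examines them first. A heuristic sketch, not a substitute for real SAST.
import ast

RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list:
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

if __name__ == "__main__":
    generated = "result = eval(user_input)\n"
    for lineno, name in flag_risky_calls(generated):
        print(f"line {lineno}: call to {name}() needs manual review")

A check like this would normally sit alongside full static analysis and secrets scanning in CI; the point is simply that the vetting happens before generated code is merged, not after.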

These threats will continue to plague security teams as threat actors seek out ways to exploit AI and ML models at every turn.

As security threats rise in number and scale, 2024 will require developers to embrace security in their job functions and deploy the necessary safeguards to ensure resiliency for the organization.

Evolving the Role of Developers

Considering security at the start of the software lifecycle is a relatively new practice for developers. Oftentimes, security at the binary level is perceived as a “nice to have”. Threat actors are counting on this oversight as they look for avenues to weaponize ML models against the organization and inject nefarious logic into the end binary.
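
One basic counter-measure at that level is verifying every artifact against a digest published by a trusted source before it is consumed. The sketch below, using only the Python standard library, illustrates the idea; the paths and expected digest are placeholders that would come from an organization's own release process.

# Illustrative integrity check: refuse to use a downloaded model or binary
# unless its SHA-256 digest matches the one published by a trusted source.
import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Hash in chunks so large model files never need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    # compare_digest avoids leaking how much of the digest matched.
    return hmac.compare_digest(sha256_of(path), expected_digest)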

Likewise, many developers do not have the necessary training to embed security into their code during the beginning stages of development.

The main impact is that code generated by AI, trained on open-source repositories, is often not properly vetted for vulnerabilities and lacks holistic security controls to protect users and their organizations from exploitation. Though it might save time and other resources, developers who rely on it are unwittingly exposing the organization to numerous risks. Once that code is implemented in AI and ML models, those vulnerabilities become even more impactful and can go undetected.
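
A complementary safeguard is to audit whatever open-source dependencies a generated change pulls in before it reaches the main branch. The sketch below assumes a Python project with pinned requirements and the open-source pip-audit tool installed; the file name and the wiring into CI are assumptions made for illustration.

# Illustrative pre-merge gate: block the change if its pinned dependencies
# include packages with known vulnerabilities. Assumes pip-audit is installed.
import subprocess
import sys

def dependencies_are_clean(requirements_file: str = "requirements.txt") -> bool:
    # pip-audit exits with a non-zero status when it finds known-vulnerable packages.
    result = subprocess.run(["pip-audit", "-r", requirements_file])
    return result.returncode == 0

if __name__ == "__main__":
    if not dependencies_are_clean():
        print("Known vulnerabilities found; blocking the merge.")
        sys.exit(1)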

With the rampant use of AI, the traditional developer role is no longer enough to address the evolving security landscape.

As we move into 2024, developers must also become security professionals, solidifying the idea that DevOps and DevSecOps can no longer be considered separate job functions. By building in secure solutions from the start, developers can not only ensure peak efficiency for critical workflows but also instil confidence in the security of the organization.

“Shift left” to install safeguards from the start

The security of ML models must continue to evolve if security teams are to remain vigilant against threats in the new year. However, as AI gets implemented at scale, teams can’t afford to identify the necessary security measures later in the software lifecycle; by then, it might be too late.

Security leaders across the organization must embody the “shift left” mentality for software development. Adhering to this approach ensures all components of the software development lifecycle are secure from the start and improves an organization’s security posture overall. When applied to AI and ML models, shift-left not only confirms whether code developed in external AI/ML systems is secure; it also ensures the AI/ML models being developed are free of malicious code and are licence-compliant.
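
For Python models serialized with pickle, “free of malicious code” has a very concrete meaning: loading an untrusted pickle can execute arbitrary code. The sketch below inspects such an artifact without loading it and flags imports of suspicious modules; the blocklist is an assumption for illustration, and production scanners cover far more cases (including newer pickle protocols and licence metadata checks).

# Heuristic sketch: scan a pickle-serialized model before loading it and flag
# opcodes that import suspicious modules. Loading an untrusted pickle runs
# code, so the scan must happen first. Illustrative only; it misses, for
# example, STACK_GLOBAL imports used by newer pickle protocols.
import pickletools

SUSPICIOUS_MODULES = {"os", "subprocess", "socket", "builtins"}

def scan_model_pickle(path: str) -> list:
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            # GLOBAL and INST carry a "module name" string revealing what the
            # pickle will import when it is loaded.
            if opcode.name in ("GLOBAL", "INST") and isinstance(arg, str):
                module = arg.split(" ", 1)[0].split(".")[0]
                if module in SUSPICIOUS_MODULES:
                    findings.append(f"offset {pos}: imports {arg!r}")
    return findings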

Threats around AI and ML models will continue to persist as we look to 2024 and beyond. Ensuring security is baked in from the start of the software lifecycle will be paramount if teams are to consistently thwart attacks from threat actors and protect the organization and its customers.
