Warning: Malicious AI Models Pose Security Threat

Recently, JFrog's security team discovered roughly 100 malicious AI/ML models on the Hugging Face platform. These models can execute code on users' machines, giving attackers a persistent backdoor. Despite Hugging Face's security measures, which include malware, pickle, and secrets scanning, the malicious models still slipped through and pose serious risks of data breaches and espionage attacks.


JFrog developed an advanced scanning system to analyze PyTorch and TensorFlow Keras models on Hugging Face and found that a significant number of them contained malicious functionality. These were not false positives: the models housed actual harmful payloads, indicating a genuine threat to users. One notable case was a PyTorch model uploaded by a user named "baller423" that contained a payload designed to establish a reverse shell to a specified host.
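To illustrate the kind of first-pass check such a scanner might perform, here is a minimal, hedged sketch (not JFrog's actual tooling) that inspects the pickle stream inside a zip-based PyTorch checkpoint for imports of modules commonly abused in payloads. The module blocklist and the assumption that the archive carries a .pkl member are simplifications for illustration.

```python
# Hedged sketch of a pickle "static scan" -- not JFrog's scanner.
# Zip-based PyTorch .pt/.bin archives contain a data.pkl member that drives
# deserialization; flagging dangerous module imports in that stream is a
# common first-pass heuristic.
import pickletools
import zipfile

SUSPICIOUS = {"os", "posix", "subprocess", "socket", "runpy", "builtins"}  # illustrative blocklist

def suspicious_globals(pickle_bytes: bytes) -> list[str]:
    """Return GLOBAL/STACK_GLOBAL references that import a suspicious module."""
    hits = []
    pushed_strings = []  # remembers string pushes so STACK_GLOBAL can be resolved
    for opcode, arg, _pos in pickletools.genops(pickle_bytes):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            pushed_strings.append(str(arg))
        elif opcode.name == "GLOBAL":  # protocol <= 3: arg is "module name"
            module = str(arg).split()[0]
            if module.split(".")[0] in SUSPICIOUS:
                hits.append(str(arg))
        elif opcode.name == "STACK_GLOBAL" and len(pushed_strings) >= 2:
            module, name = pushed_strings[-2], pushed_strings[-1]
            if module.split(".")[0] in SUSPICIOUS:
                hits.append(f"{module} {name}")
    return hits

def scan_pytorch_archive(path: str) -> list[str]:
    """Scan every .pkl member of a zip-based PyTorch checkpoint."""
    findings = []
    with zipfile.ZipFile(path) as zf:
        for member in zf.namelist():
            if member.endswith(".pkl"):
                findings += suspicious_globals(zf.read(member))
    return findings
```

Running scan_pytorch_archive on a downloaded checkpoint lists any flagged references. This is only a heuristic: attackers can obfuscate imports, so opcode inspection complements rather than replaces deeper, sandboxed analysis.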


The malicious payload abused the pickle module's __reduce__ hook to execute arbitrary code during deserialization, evading detection by embedding the malicious code within the trusted serialization process. JFrog also found instances of the same payload connecting to other IP addresses, raising the possibility that some of the operators were AI researchers experimenting with risky behavior rather than attackers.
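The mechanism is easy to demonstrate. In the hedged snippet below, a harmless echo command stands in for the reverse-shell payload described above; the point is that whatever callable __reduce__ returns is executed automatically the moment the file is unpickled.

```python
# Hedged illustration of the __reduce__ technique described above.
# A harmless echo stands in for the reverse-shell command JFrog observed:
# whatever callable __reduce__ returns is executed by pickle.loads().
import os
import pickle

class DemoPayload:
    def __reduce__(self):
        # (callable, args) -- pickle stores this and calls it at load time
        return (os.system, ('echo "code executed during unpickling"',))

blob = pickle.dumps(DemoPayload())   # serializing captures the payload
pickle.loads(blob)                   # deserializing runs it automatically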


In response to these findings, JFrog set up a honeypot to observe the activity and determine the operators' true intentions. While it remained unclear whether the operators were hackers or researchers trying to bypass security measures in pursuit of bug bounties, the risk posed by these malicious AI models is real and should not be underestimated.
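A honeypot for this kind of payload can be as simple as listening on the host and port the reverse shell targets and logging whoever connects. The sketch below is illustrative only; the address and port are assumptions, not the values JFrog used.

```python
# Hedged sketch of a minimal honeypot listener: accept connections on the
# port a reverse-shell payload targets and log the source address.
# HOST and PORT are assumptions for illustration, not the values JFrog used.
import datetime
import socket

HOST, PORT = "0.0.0.0", 4444

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((HOST, PORT))
    server.listen()
    while True:
        conn, addr = server.accept()
        with conn:
            stamp = datetime.datetime.utcnow().isoformat()
            print(f"{stamp} connection attempt from {addr[0]}:{addr[1]}")
```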


The discovery of these malicious models highlights the security risks associated with sharing AI/ML models and the need for increased vigilance and proactive measures to protect users and the ecosystem from malicious actors. Stakeholders and technology developers must address these security concerns diligently to prevent further incidents.
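On the defensive side, one concrete precaution users can take today is to avoid executing arbitrary pickle code when loading downloaded weights. The example below is a general illustration, not guidance from JFrog or Hugging Face; the file names are placeholders, and the weights_only option requires a recent PyTorch release.

```python
# Hedged example of a proactive measure when loading downloaded checkpoints.
# File names are placeholders; weights_only needs a recent PyTorch release.
import torch

# Refuses to unpickle arbitrary objects, so __reduce__ payloads raise an error
# instead of executing.
state_dict = torch.load("downloaded_model.bin", map_location="cpu", weights_only=True)

# Alternative: the safetensors format stores raw tensors and involves no pickle at all.
# from safetensors.torch import load_file
# state_dict = load_file("downloaded_model.safetensors")
```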
