
AI Model Theft: A Rising Threat to Innovation and Security | Image Source: www.pymnts.com
RALEIGH, N.C., 20 December 2024 – Researchers have revealed a new method of extracting artificial intelligence (AI) models by capturing the electromagnetic signals computers emit, raising serious concerns for the security of proprietary AI systems. According to PYMNTS, the technique achieves accuracy rates above 99%, which could threaten commercial AI companies that rely heavily on the protection of their intellectual property.
Innovative discovery of AI vulnerabilities
Researchers at North Carolina State University have shown that by placing a probe near a Google Edge Tensor Processing Unit (TPU), they can capture electromagnetic signals and use them to reconstruct critical aspects of an AI model, such as its hyperparameters. The process does not require direct access to the system, which broadens the scope of the security risk. The method's accuracy, up to 99.91%, underscores the sophistication of this emerging threat.
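The researchers have not published attack code, but the underlying idea, matching segments of a captured electromagnetic trace against reference signatures profiled from known layer configurations, can be sketched in miniature. The Python snippet below is a minimal illustration only: the signature names, the synthetic trace data, and the normalized cross-correlation metric are all assumptions for the sketch, not the team's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference EM signatures, one per candidate layer configuration.
# A real attacker would profile these on hardware they control; synthetic
# data is used here purely to keep the sketch self-contained.
SIGNATURES = {
    "conv3x3_64": rng.normal(size=1000),
    "conv1x1_128": rng.normal(size=1000),
    "dense_256": rng.normal(size=1000),
}

def correlate(trace, signature):
    """Normalized cross-correlation, a common side-channel similarity metric."""
    t = (trace - trace.mean()) / (trace.std() + 1e-12)
    s = (signature - signature.mean()) / (signature.std() + 1e-12)
    n = min(len(t), len(s))
    return float(np.dot(t[:n], s[:n]) / n)

def classify_layer(trace_segment):
    """Guess which layer configuration produced an observed EM segment
    by picking the best-matching reference signature."""
    return max(SIGNATURES, key=lambda name: correlate(trace_segment, SIGNATURES[name]))

# Simulate capturing a noisy emission from a "dense_256" layer:
captured = SIGNATURES["dense_256"] + 0.3 * rng.normal(size=1000)
print(classify_layer(captured))  # -> "dense_256"
```

In a real extraction, an attacker would profile many candidate configurations in advance and classify each layer of the victim model in sequence, rebuilding the architecture piece by piece.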
"AI models are valuable; we don't want people to steal them," said Aydin Aysu, co-author of the study and associate professor of electrical and computer engineering at North Carolina State University. In a blog post, Aysu highlighted the two risks of AI model theft: the economic cost of re-creating a stolen model and the vulnerabilities its exposure creates. "Building a model is expensive and requires significant computing resources," he added, "but when a model is leaked, it also becomes more vulnerable to new attacks."
Implications for the commercial development of AI
The potential for model theft creates cascading challenges for companies like OpenAI, Anthropic and Google, which have invested millions in developing advanced AI systems. According to PYMNTS, Lars Nyman, Chief Marketing Officer of CUDO Compute, warned of wider ramifications: "This is the potential cascading damage, that is, competitors siphoning years of R&D, regulators investigating the mishandling of sensitive intellectual property, and clients suddenly discovering that their 'unique' AI is not so unique."
Theft of AI models could enable reverse engineering, allowing competitors to sidestep innovation costs and leapfrog ahead in their own development. In addition, compromised models could erode consumer confidence, especially if the stolen technology is used maliciously or to reproduce features that were previously exclusive to the original creators.
New threats in AI ecosystems
In addition to electromagnetic signal attacks, other security issues are mounting. PYMNTS reported a rise in malicious files targeting Hugging Face, a major AI repository used in industries such as retail, logistics and finance. These vulnerabilities expose companies to risks ranging from intellectual property theft to operational disruption.
National security experts have also flagged weak security measures as a critical deficiency, citing incidents such as the OpenAI breach as a cautionary tale. The theft of proprietary systems not only undermines companies but could also endanger national security if these technologies fall into the wrong hands.
Potential responses and mitigations
The vulnerabilities highlighted in the study have prompted discussions about the need for stronger safeguards for AI systems. Technology consultant Suriel Arellano told PYMNTS that companies may need to rethink how they deploy AI. "Companies could move toward more centralized and secure computing, or consider alternative technologies that are less vulnerable to theft," he said. However, Arellano suggested that most organizations would likely invest in improved security measures rather than abandon existing frameworks.
Possible solutions include standardized security audits, such as SOC 2 or ISO certifications, which could help distinguish secure AI providers from less reliable ones. Improved encryption techniques and hardware-level electromagnetic shielding could also serve as viable defenses against the specific threat of signal-based attacks.
The role of AI in strengthening security
While AI systems are increasingly becoming targets, they are also critical to strengthening cybersecurity. According to PYMNTS, AI-driven tools can automate threat detection, streamline incident response, and recognize patterns to anticipate potential breaches. Timothy E. Bates, Lenovo CTO, noted that machine learning systems not only detect emerging threats but also adapt with each encounter to respond more effectively.
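PYMNTS does not specify the tooling behind these AI-driven defenses, but a common building block is unsupervised anomaly detection over activity logs. The sketch below is a minimal example, assuming made-up network-traffic features and using scikit-learn's IsolationForest; it stands in for the pattern-recognition step described above, not for any particular vendor's product.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Made-up feature matrix: each row is an event with assumed columns
# [requests_per_minute, bytes_out, failed_logins], for illustration only.
normal_traffic = rng.normal(loc=[100, 5e4, 1], scale=[10, 5e3, 1], size=(500, 3))

# Fit an unsupervised detector on baseline activity.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A suspicious burst: heavy outbound traffic and repeated failed logins.
suspicious = np.array([[400, 5e5, 30]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

The adaptive behavior Bates describes would come from periodically refitting such a detector on fresh baseline data, so the notion of "normal" tracks the environment as it changes.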
This dual role of AI, as both a potential vulnerability and a defense, highlights the crucial need for balanced approaches. Investment in advanced cybersecurity measures can mitigate risks while leveraging AI's capabilities to address evolving threats.
As the commercialization of AI continues to grow, the stakes in securing these systems are higher than ever. Companies must address both the immediate vulnerabilities and the broader consequences of model theft to maintain their competitive advantage and protect consumer confidence. The findings from North Carolina State University are a reminder of AI's changing security landscape and the urgent need for robust defenses.