NIST Releases New AI Risk Framework to Combat Emerging Threats from Malicious AI

For most of history, our species has found creative ways to use technology for both good and ill. For example, we can harness nuclear energy to produce vast amounts of clean energy, helping to reduce our reliance on fossil fuels. But we can also use nuclear power to create devastating weapons of mass destruction.

The same is true for many other technologies. Is the internet a way to unite people and revolutionize how we access information? Or is it a tool for cyberbullying, identity theft, and spreading misinformation? Well, it’s both.  

Now it’s AI’s turn to fall to the dark side. AI has the potential to transform industries, revolutionize the way we work, and improve our daily lives. And that’s precisely why it’s generated so much buzz in recent years. However, it’s also caught the attention of cybercriminals intent on using it to create AI malware, AI ransomware, and a range of other malicious tools.

But how exactly are cybercriminals leveraging advanced AI tools like ChatGPT? And what are reputable industry bodies like NIST doing to stop them? Let’s get into it.  

ChatGPT & The State of Malicious AI Today 

OpenAI’s ChatGPT has garnered much attention recently, with the tool amassing over one million users within just five days of its launch. But while most people are using the impressive AI for fun or to improve their workflow, cybercriminals are using it for more nefarious purposes, including:

Phishing and spamming: Bad actors could use ChatGPT to generate convincing phishing emails or messages to lure victims into clicking on malicious links, downloading malware, or providing personal information. It can even help create convincing-sounding emails impersonating high-ranking individuals, like a CEO.  

Malware development: Cybercriminals could use ChatGPT to create more sophisticated malware that can evade detection by traditional security measures. In January 2023, Check Point Research outlined how fledgling and seasoned cybercriminals alike were using the chatbot to create infostealers and encryption tools.

Scamming: ChatGPT could create convincing scams, such as investment or romance scams, that could trick victims into sending money or providing sensitive information. 

Automated attacks: Cybercriminals could use ChatGPT to automate brute-force attacks or password cracking, making it easier and faster to breach security systems. 

It’s important to note that OpenAI takes measures to prevent its technology from being used for malicious activities by working with law enforcement and security organizations and implementing ethical guidelines. So, for example, if you explicitly ask, it won’t write malicious code. Still, cybercriminals are finding ways around this. For example, some developers experimenting with ChatGPT found that if you detail the steps of writing the malware instead of giving a direct prompt, the AI will construct the malware for you.  

Perhaps the most dangerous thing about ChatGPT from a cybersecurity perspective is that it allows almost anyone to be a hacker. Before AI, there were several barriers to entry: you needed technical skills, like knowledge of computer programming and networking, as well as access to specialized tools and resources, usually obtained on the dark web. AI is helping bridge these gaps, even for people with minimal hacking experience.

The Rise of AI Malware, AI Ransomware, & Sophisticated Attacks 

While security-conscious companies and security researchers are busy finding new and increasingly advanced ways of safeguarding systems, cybercriminals are busy finding ways to bypass these advancements. It’s a constant game of cat and mouse. And the result? Increasingly sophisticated cyberattacks.  

Cybersecurity researchers have already found evidence of well-known cybercriminal gangs hiring pen testers to help break into company networks. The notorious ransomware gang Conti (which racked up a terrifying $182 million in ransomware payments in 2021) is one such group thought to be reinvesting its earnings into hiring experienced tech professionals.

A natural next step for cybercriminals will be to hire ML and AI experts to create advanced malware campaigns. Cybercriminals may use AI to automate large portions of the ransomware creation process, allowing for accelerated and more frequent attacks. And then we have true AI malware and AI ransomware: situationally aware malware that analyzes the target system’s defense mechanisms, then quickly learns and mimics everyday system communications to evade detection.

NIST’s New AI Risk Management Framework 

On January 26, 2023, the National Institute of Standards and Technology (NIST) issued version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF) to enable organizations to design and manage trustworthy and responsible AI. But what is this framework all about?

The AI RMF is divided into two parts. The first frames the risks related to AI and outlines the characteristics of trustworthy AI systems, while the second describes four core functions: govern, map, measure, and manage. These functions are further broken down into categories and subcategories that help organizations address AI system risks in practice. Organizations can apply the functions in context-specific use cases and at any stage of the AI life cycle, making them versatile tools.
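To make that structure concrete, here is a minimal sketch in Python of how a team might track its AI RMF work as a simple risk register organized around the four functions. The class names, category labels, and example activities below are illustrative assumptions, not terminology from the NIST framework itself.

```python
from dataclasses import dataclass, field

# Hypothetical risk-register entry; the field names and example values
# are illustrative assumptions, not taken from the NIST AI RMF itself.
@dataclass
class RmfItem:
    function: str                # one of: "govern", "map", "measure", "manage"
    category: str                # illustrative category label
    activity: str                # what the organization actually does
    status: str = "not started"  # "not started" | "in progress" | "done"

@dataclass
class RiskRegister:
    items: list[RmfItem] = field(default_factory=list)

    def coverage(self) -> dict[str, str]:
        """Summarize progress for each of the four AI RMF functions."""
        by_function: dict[str, list[str]] = {}
        for item in self.items:
            by_function.setdefault(item.function, []).append(item.status)
        return {
            fn: f"{statuses.count('done')}/{len(statuses)} activities done"
            for fn, statuses in by_function.items()
        }

# Example usage with made-up activities
register = RiskRegister([
    RmfItem("govern", "Policies", "Define an AI acceptable-use policy", "done"),
    RmfItem("map", "Context", "Inventory AI systems and their users"),
    RmfItem("measure", "Tracking", "Review model outputs for signs of misuse"),
    RmfItem("manage", "Response", "Draft an incident plan for AI abuse"),
])
print(register.coverage())
# e.g. {'govern': '1/1 activities done', 'map': '0/1 activities done', ...}
```

Even a toy model like this shows why the four functions are useful in practice: they give security and governance teams a shared vocabulary for dividing up and tracking risk work across the AI life cycle.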

Crucially, NIST’s AI Risk Management Framework focuses on changing how we think about AI. It outlines seven characteristics of trustworthy AI (valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed), two of which are particularly relevant to AI’s use in cybercrime. The “Safe” characteristic emphasizes the importance of designing AI systems that do not cause harm to humans, property, or the environment. Meanwhile, the “Accountable & Transparent” characteristic requires that information about AI systems and their outputs be available to all users, which helps prevent cybercriminals from manipulating the AI into providing responses that other users could not elicit.

Final Thoughts 

The growing use of AI by cybercriminals has led to the emergence of new threats, such as AI ransomware and AI malware. These pose a significant risk to organizations and individuals alike. However, the new NIST AI Risk Management Framework provides a comprehensive approach to addressing these risks. By following its guidelines, organizations can mitigate the threats posed by malicious AI and ensure the development of trustworthy AI systems. As AI technology continues to evolve, organizations must take steps to protect themselves and stay up-to-date with the latest risk management strategies. 

Try Portnox Cloud for Free Today

Gain access to all of Portnox's powerful zero trust access control capabilities free for 30 days!