How Should AI Be Regulated To Ensure Cybersecurity Safeguards?
We’re living in an age where an algorithm can help us buy the perfect gift for a loved one or drain our bank account in a fraction of a second. As AI’s capabilities expand, so does its impact on cybersecurity, for better and for worse, making the conversation about regulatory frameworks increasingly urgent.
With this in mind, let’s dive into the complex relationship between AI and cybersecurity and explore how judicious regulation can turn the tide in our favor. How should AI be regulated? Should it be regulated at all? And what restrictions are countries around the world putting in place?
Should AI Be Regulated?
While AI is nothing new, its capabilities and popularity have expanded rapidly in the last few years. OpenAI’s ChatGPT, a natural language processing chatbot, has already massively disrupted workplaces, helping people write code, emails, and content. It set the record for the fastest-growing consumer application, reaching over 100 million active users in January 2023, just two months after launch. And its competitors, including Microsoft Bing AI, Google Bard, and Chatsonic, are similarly gaining traction.
With this surge in popularity have come new conversations about the role of AI and whether we need to pump the brakes or quickly establish rules and regulations around it. And these concerns aren’t just coming from tech skeptics. Google’s President of Global Affairs, Kent Walker, has said that AI “is too important not to regulate.” And OpenAI’s CEO, Sam Altman, has said, “I try to be upfront… Am I doing something good? Or really bad?” In other words, the very people developing these tools are keenly aware of the damage they can cause if left unchecked.
So far, much of the discourse surrounding AI regulation has focused on the following areas:
- AI Art and Copyright: AI can create artwork similar to human works, potentially infringing on copyrights. There’s also debate over who owns the copyright of AI-generated art.
- Natural Language Models: Advanced AI models can produce text that’s hard to distinguish from human-written content, leading to worries about disinformation, privacy invasion, and economic impacts.
- Academic Integrity: AI could be used to write essays or dissertations, challenging academic integrity and making plagiarism detection difficult.
- Ethics and Bias: AI can inadvertently amplify societal biases, which calls for regulations ensuring fairness.
- Privacy and Surveillance: Concerns about AI’s potential in violating privacy and enabling mass surveillance.
- Autonomous Decision Making: In areas like autonomous vehicles or weaponry, regulation is needed to ensure safety and accountability.
However, as our reliance on AI grows, more specific concerns are coming to light – like cybersecurity. With AI, hackers can craft human-like text, generate phishing emails, and automate the creation of malicious content. For example, an AI model trained on known vulnerabilities can generate new malware, making it a potent weapon in the hands of cybercriminals. And we’re already seeing this happen – AI cyber-attacks are here.
The ways in which cybercriminals can leverage AI for nefarious gains are as expansive as they are severe. Here are some of the ways cybercriminals can use AI to enhance the efficiency and effectiveness of their attacks:
- Automated Hacking: AI can identify system vulnerabilities and exploit them far faster than a human hacker could. AI-driven tools can also perform brute-force attacks more efficiently, constantly altering their approach until they find a successful pathway.
- Spear Phishing: AI can gather and analyze vast amounts of personal data from social media and other online sources to create highly personalized phishing messages, making them more believable and increasing the likelihood of success.
- AI-Generated Deepfakes: AI can create realistic fake audio and video, known as deepfakes, that can be used for disinformation campaigns or to impersonate individuals for fraudulent purposes.
- Malware: AI can be used to create more sophisticated malware that can adapt and learn from the security measures it encounters, making it harder to detect and neutralize.
- Evasion: Advanced AI systems can learn to evade detection systems, making attacks harder to identify and respond to. They can also mimic normal user behavior, making their malicious activities blend in with regular network traffic.
- DDoS Attacks: AI can enhance Distributed Denial of Service (DDoS) attacks by learning to identify network weaknesses and optimizing the attack strategy.
For many, cybercriminals’ potential misuse of AI underscores the need for robust cybersecurity measures, including tighter regulation.
The Case Against AI Regulation
Despite the dangers of unregulated AI, some people prefer no or very little regulation. Put simply, opponents of AI regulation argue that it could stifle innovation and progress. Regulations are often slow to adapt and may fail to keep pace with the rapid evolution of AI technologies. Strict regulatory oversight could also create high barriers to entry, favoring established companies and hindering start-ups and smaller businesses.
Furthermore, overly prescriptive rules could limit AI’s creative and beneficial applications. Critics also note the global nature of AI development; if strict regulations are imposed in one country, research and development might shift to less regulated regions. Lastly, they argue that existing laws covering areas like copyright, defamation, and data protection are often sufficient to manage AI’s current level of sophistication and that we should address future concerns reactively as AI capabilities continue to advance.
A Wild West AI Landscape
While some people would prefer a more wild-west-style AI landscape, those people are largely absent from the cybersecurity community. As we touched on, the potential for AI to be misused for cybercrime is simply too great. In an increasingly severe threat landscape, cybersecurity professionals need all the help they can get.
And this is why we see reputable cybersecurity organizations calling for tighter regulations or working independently to develop safer practices around AI. For example, NIST recently released an AI risk-management framework to help organizations address the risks posed by AI systems, including their malicious use.
Cybersecurity Professionals Aren’t Anti-AI
Before we dive into some specifics around how we should regulate AI in cybersecurity, it’s important to understand the critical role AI plays in cybersecurity.
When cyber professionals call for more regulation, they’re not calling for AI bans – AI is a potent tool for cybersecurity. For example, experts increasingly believe that AI is key to ensuring IoT security in the digital age. Similarly, AI is making identity authentication safer and more robust, preventing unauthorized access to sensitive data and systems.
And the list goes on. Cybersecurity teams leverage AI to detect malware, recognize phishing attempts, automate threat hunting, predict attacks, mitigate DDoS attacks, and speed up incident response. A minimal illustration follows.
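To make this concrete, here’s a minimal sketch of the kind of anomaly detection that underpins many AI-driven defenses, using scikit-learn’s IsolationForest to flag unusual network flows. The feature set, sample values, and thresholds are hypothetical and purely illustrative, not drawn from any particular product.

```python
# Minimal anomaly-detection sketch using scikit-learn's IsolationForest.
# Feature names and sample values are hypothetical and for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a network flow: [bytes_sent, bytes_received, duration_sec, dest_port]
baseline_flows = np.array([
    [1_200,  8_500, 0.4, 443],
    [  950,  7_900, 0.3, 443],
    [1_100,  8_200, 0.5, 443],
    [  300,  1_200, 0.1,  53],
    [  280,  1_150, 0.1,  53],
])

# Train on traffic assumed to be benign; contamination is a tunable guess.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_flows)

# Score new flows: a prediction of -1 means the model considers the flow anomalous.
new_flows = np.array([
    [1_050,  8_300, 0.4, 443],    # looks like baseline traffic
    [95_000,   400, 42.0, 4444],  # large upload to an unusual port
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    status = "ANOMALOUS" if label == -1 else "normal"
    print(f"{flow.tolist()} -> {status}")
```

Real deployments train on far richer telemetry and pair scores like these with human review, but the core loop of learning a baseline and then scoring deviations is the same idea behind AI-assisted threat hunting.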
How Should AI Be Regulated? A Cybersecurity Perspective
In the next section, we’re going to dive into AI regulation around the world. That can tell us a lot about how governments think about AI and its continued role in our societies. However, these regulations take a holistic perspective – they answer the question, “How should AI be regulated?” rather than “How can we regulate AI to bolster cybersecurity?” Of course, a well-regulated AI landscape should also improve cybersecurity, but cybersecurity isn’t necessarily legislators’ first priority.
With that in mind, here are some recommendations on how we could regulate AI to improve cybersecurity and safeguard our systems.
Legislation
First, we need to establish clear-cut legislation that determines what constitutes appropriate AI usage in cybersecurity. Governments should work alongside international organizations, AI experts, and industry stakeholders to create and adopt AI ethical guidelines. The legislation should articulate the rights, responsibilities, and liabilities of AI users and manufacturers. For instance, in case of a security breach due to faulty AI, who should be held accountable? The user, the manufacturer, or both?
Certification and Standards
Regulatory efforts should include establishing certification processes and standards for AI systems. These standards should guide the design, development, deployment, and maintenance of AI in cybersecurity. They should cover aspects such as data privacy, transparency, accountability, and robustness of the AI system. Organizations such as ISO and IEC can play a vital role in developing these standards.
- ISO 27001, the international standard for Information Security Management Systems, can be updated to incorporate AI-related cybersecurity risks.
- IEC 62443, the series of standards for Industrial Communication Networks, can incorporate guidelines for AI usage in industrial cybersecurity.
Privacy Laws
One key aspect of AI regulation is data privacy. Data fuels AI, and enormous amounts of it are often needed to train effective models. Consequently, data privacy laws should be revised and strengthened to fit the AI era. These laws should dictate what data can be used, how it can be used, and for how long it can be retained.
AI Transparency and Explainability
A significant issue with AI is the ‘black box’ problem – the lack of transparency about how AI makes its decisions. Regulation should require AI systems to offer some degree of explainability. This transparency can help cybersecurity professionals better understand and trust an AI’s decisions, particularly when it flags potential threats.
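As a rough illustration of what explainability can look like in practice, the sketch below trains a small scikit-learn classifier on a tiny synthetic dataset and reports which input features drive its decisions. The feature names, data, and labels are invented purely for illustration; production systems typically rely on richer techniques such as SHAP or LIME.

```python
# Sketch: surfacing which features drive a detection model's decisions.
# The dataset, feature names, and labels are hypothetical and illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

feature_names = ["failed_logins", "bytes_out", "offhours_access", "new_device"]

# Tiny synthetic training set: each row is a user session, label 1 = flagged as malicious.
X = np.array([
    [0,  1_000, 0, 0],
    [1,  2_000, 0, 0],
    [9, 80_000, 1, 1],
    [7, 65_000, 1, 0],
    [0,  1_500, 0, 1],
    [8, 90_000, 1, 1],
])
y = np.array([0, 0, 1, 1, 0, 1])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Global explanation: which inputs the model leans on overall.
for name, importance in sorted(
    zip(feature_names, clf.feature_importances_), key=lambda pair: -pair[1]
):
    print(f"{name}: {importance:.2f}")
```

Even this crude readout tells an analyst something actionable: if the model leans almost entirely on one feature, its alerts can be sanity-checked against that signal rather than trusted blindly.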
Public-Private Partnership
The public and private sectors should collaborate to combat cybersecurity threats effectively. Governments should incentivize private companies to invest in AI-driven cybersecurity measures. Similarly, private firms should aid governments by sharing their technical expertise and insights on the latest threats.
Education and Awareness
To create an AI-literate society, education and awareness about AI and its implications for cybersecurity are crucial. Governments should integrate AI and cybersecurity topics into educational curriculums. Businesses should also run regular training and awareness programs for their staff.
Mandatory Disclosure of AI Breaches
Governments could require that businesses disclose any AI-related security breaches within a specific timeframe. This transparency would keep organizations accountable and help identify and address potential flaws in AI security measures.
Independent Auditing
Regular third-party audits of AI systems could be a prerequisite for their use in cybersecurity. These audits would provide an external perspective on the organization’s AI usage, ensuring that it aligns with regulatory and ethical standards.
Global Cooperation
Given the borderless nature of the internet and cyber threats, international cooperation is essential for AI regulation. Global forums could be established to share best practices, discuss emerging threats, and coordinate collective responses. Cybersecurity threats are global, and so should be the response.
Regulating AI Supply Chain
Given that AI systems are often composed of various components sourced from different vendors, there should be regulations to ensure the security of the entire AI supply chain. Standards for the components, vendors’ security practices, and transparency about the origin of the components could be part of these regulations.
User Consent and Control
Regulations could give users more control over how AI uses their data, requiring explicit consent for data collection and usage. This user-centric approach can help create a balance between leveraging AI for cybersecurity and respecting individual privacy rights.
Responsible AI Development
Regulations should promote the development of AI systems with a built-in “safety-first” approach. This includes mechanisms to prevent unauthorized access, detect anomalous behavior, and limit the AI’s actions if it deviates from expected behavior.
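As one hedged example of what a “safety-first” design can mean in code, the sketch below wraps an AI-recommended response action in a simple allowlist check, so anything outside the expected set of actions is escalated to a human instead of executed. The action names and the policy itself are hypothetical.

```python
# Sketch of a "safety-first" guardrail: an automated responder may only take
# actions from an explicit allowlist; everything else requires a human.
# Action names and the policy are hypothetical examples.
ALLOWED_ACTIONS = {"quarantine_file", "block_ip", "alert_analyst"}

def execute_action(action: str, target: str) -> str:
    """Run an AI-recommended action only if policy explicitly allows it."""
    if action not in ALLOWED_ACTIONS:
        # Deviations from expected behavior are escalated, not executed.
        return f"BLOCKED: '{action}' on {target} escalated for human review"
    return f"EXECUTED: {action} on {target}"

if __name__ == "__main__":
    print(execute_action("block_ip", "203.0.113.7"))   # allowed by policy
    print(execute_action("wipe_host", "server-42"))    # blocked and escalated
```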
AI Regulation Around the World
We’ve seen a recent surge in discussions around AI regulation worldwide. For example, Japanese Prime Minister Fumio Kishida headed into the recent G7 meeting signaling his desire to launch the Hiroshima AI Process – a coordinated approach to AI governance, especially generative AI, like ChatGPT.
The EU, the US, China, and other countries have already been developing their approaches to AI regulation, which often take different forms.
For example, one key decision policymakers have to make is choosing between a “horizontal” or a “vertical” method. A horizontal strategy entails crafting a single, all-encompassing regulation to address the multitude of impacts posed by AI. Conversely, a vertical strategy tailors specific regulations to manage distinct applications or varieties of AI.
We already see some differences here. For example, while neither the European Union nor China has chosen a strictly horizontal or vertical path for their AI governance, they do show preferences. The EU’s AI Act leans horizontally, aiming to create a broad and comprehensive regulatory framework. In contrast, China’s algorithm regulations tend to take a vertical stance, focusing on custom rules for specific AI applications.
EU AI Regulation
The AI Act, a landmark legislation in Europe, sets out to regulate artificial intelligence (AI) based on its potential harm. It received the green light from leading parliamentary committees of the European Parliament on May 11, 2023, preparing it for final approval in mid-June.
The Act prohibits specific AI applications such as manipulative techniques and social scoring. At the insistence of left-of-center MEPs, the ban was extended to include AI models for biometric categorization, predictive policing, and the harvesting of facial images for database creation. Additionally, emotion recognition software is now outlawed in law enforcement, border management, workplaces, and education.
Biometric identification systems, initially permitted under specific circumstances such as kidnapping or terrorist attacks, became a contentious point. Despite resistance from the conservative European People’s Party, Parliament ultimately passed a complete ban.
The original AI Act did not address AI systems without specific purposes. However, the rapid success of large language models, like ChatGPT, necessitated a rethink on how to regulate this kind of AI, resulting in a tiered approach. The Act does not cover General Purpose AI (GPAI) systems by default. Instead, it imposes most obligations on operators that incorporate these systems into high-risk applications.
The Act introduces stricter rules for high-risk AI applications. An AI system is considered high-risk if it significantly threatens people’s health, safety, or fundamental rights.
Critically, the EU’s AI regulation could see major players in the AI game, like OpenAI, leave the EU altogether. OpenAI’s CEO has said, “The current draft of the EU AI Act would be over-regulating.”
US AI Regulation
While not as far along in the AI regulation journey as the EU, the US is taking deliberate steps toward regulation. The White House released a Blueprint for an AI Bill of Rights on October 4, 2022, establishing key principles for the design and use of AI. These guidelines include protections such as shielding individuals from algorithmic discrimination and enabling people to opt out of automated systems. The Blueprint builds on the Biden-Harris Administration’s mission to regulate big tech, protect American civil rights, and make technology work in favor of its people.
The Blueprint lays out five core protections for Americans:
- Safe and Effective Systems: Protection from unsafe or ineffective AI systems.
- Algorithmic Discrimination Protections: No individual should face discrimination from algorithms. Systems should be designed and utilized equitably.
- Data Privacy: Protection from abusive data practices with built-in safeguards. Individuals should have control over how their data is used.
- Notice and Explanation: Individuals should be made aware when an automated system is in use and understand how and why it impacts them.
- Alternative Options: Individuals should be able to opt out of automated systems when appropriate and have access to a person who can address and rectify any issues encountered.
In response to the Blueprint, several federal agencies are drafting new rules. For example, the Federal Trade Commission (FTC) is preparing rules to restrict commercial surveillance, algorithmic discrimination, and negligent data security practices. The Department of Labor is also working to protect workers’ rights by enforcing surveillance reporting requirements.
More recently, on May 4, 2023, President Biden summoned the CEOs of Google, Microsoft, and other leading AI companies to the White House to discuss AI. It’s not yet clear what resulted from the meeting, but presumably, the White House wants to know what these companies are doing to manage the dangers surrounding AI.
China AI Regulation
China’s AI regulations may appear more expansive on paper than those of other nations, but they are also quite vague. This is by design. China’s central government tends to publish broad outlines so that local governments understand what it wants while still having room to experiment. At the same time, the vagueness allows regulators to flexibly rein in technology companies as needed.
But what do the regulations say? The first, covering AI-based recommendation algorithms, addresses their use in disseminating information, setting prices, and deploying workers. It mandates that providers “vigorously disseminate positive energy” and avoid price discrimination or overworking delivery drivers. The second, addressing deep synthesis algorithms (which generate new content such as deepfakes), requires providers to obtain consent from individuals whose images or voices are manipulated.
UK AI Regulation
Following its exit from the EU, the UK is now responsible for managing its own AI regulations and is somewhat behind the other nations on this list. No specific AI regulations are in place yet, but there are moves toward regulation.
For example, the Financial Conduct Authority (FCA) is currently consulting with several legal and academic institutions, including the Alan Turing Institute, to enhance its understanding of AI technology and its implications. And to investigate the impacts of AI, the UK’s competition regulator announced in May that it would initiate a comprehensive examination of the technology’s effects on consumers, businesses, and the overall economy.
Interestingly, the UK has decided not to establish a new, centralized body governing AI. Instead, in a statement made in March, the UK government expressed its plans to divide the responsibility among its existing regulators. The regulators for human rights, health and safety, and competition will each have a role in overseeing AI within their respective spheres. This approach is presumably to leverage the specialized knowledge and experience these regulators already have in their fields while adding the new responsibility of managing AI’s impact.
Final Thoughts
Here’s the bottom line. While fostering innovation in AI is essential, regulation is vital to ensuring robust cybersecurity safeguards. As AI technology continues to evolve, so does the threat landscape, with an escalating number of AI-based cyberattacks causing notable concern. This trend suggests that our systems will become increasingly susceptible to advanced AI-driven threats. The future of a secure digital world will largely depend on our ability to govern AI effectively and responsibly today. Let’s rise to the challenge and ensure we build a safe and secure cyber ecosystem for all.
Try Portnox Cloud for Free Today
Gain access to all of Portnox's powerful zero trust access control capabilities free for 30 days!