Generative AI: Safety Concerns and Global Regulations

When ChatGPT launched in November 2022, generative AI became one of the hottest global topics. But ChatGPT is just the tip of the iceberg: generative AI has seemingly limitless capabilities across all sectors. OpenAI CEO Sam Altman is enthusiastic about the potential of this type of AI and is forging ahead at full speed. But several current and former staff members have criticized the company's "reckless" culture and its dismissive attitude toward AI safety.

Two senior developers recently quit OpenAI for this reason.

So let's dig in. What is generative AI? What security risks does it pose, and what are governments around the world doing to regulate it?

Understanding Generative AI and Its Risks

Generative AI is a branch of Limited Memory AI: technology that can "remember" past events and results, learn from these "memories," and use them to achieve a desired outcome.

ChatGPT, Siri, Alexa, autonomous vehicles, and many other systems use generative AI to make memory-based decisions in real time. That is how ChatGPT predicts the next words to write, how Siri and Alexa respond appropriately to prompts and requests, and how self-driving cars choose the correct maneuver on the road.
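
To make the "next words" idea concrete, here is a minimal sketch of next-token prediction, the core loop behind a chatbot's output. Production systems use large neural networks trained on vast corpora; this toy stands in with a tiny hand-built bigram table, and every word and count in it is an assumption for illustration only.

```python
# A minimal sketch of next-token prediction. Real chatbots use neural
# networks over huge vocabularies; this toy uses a hand-built bigram
# table (all counts are assumed, for illustration).
import random

# How often each follow-up word appeared after a given word.
bigram_counts = {
    "the":    {"car": 4, "road": 3, "model": 2},
    "car":    {"drives": 5, "stops": 2},
    "drives": {"on": 6},
    "on":     {"the": 7},
}

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = bigram_counts.get(word)
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation, one token at a time.
text = ["the"]
for _ in range(5):
    nxt = next_word(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))   # e.g. "the car drives on the road"
```

The principle scales up: a real model scores every word in its vocabulary at each step, but the generate-one-token-then-repeat loop is the same.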

While generative AI has incredible capabilities, it is not without risks. As with all technology, it can be used for good or nefarious purposes. 

Cybercriminals have been taking advantage of generative AI capabilities and weaknesses to perpetrate cyberattacks and data breaches, such as:

  • Phishing and malware: ChatGPT enables cybercriminals to generate convincing malware code and phishing emails designed to steal people's financial information.
  • Adversarial attacks: Cybercriminals can target driverless cars with adversarial attacks, subtly manipulated inputs that produce faulty outputs and can cause accidents (see the sketch after this list). Attackers who compromise an autonomous vehicle's software can also gain access to the broad network of IoT (Internet of Things) devices it connects to.
  • Data leaks: Prompts entered into a generative AI system leave the user's control; they may be retained, used for training, or exposed to attackers. For example, a company using ChatGPT might paste sensitive information into a prompt in the hopes of getting quick output, but that information is not secure.
  • False/inaccurate information: Cybercriminals can use generative AI to create deepfakes or spread false information for criminal and unethical purposes. 
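
To illustrate the adversarial attacks mentioned above, here is a minimal sketch against a toy linear classifier. The weights, features, and the stop-sign framing are all assumptions for illustration; real attacks target deep vision models, but the core trick is the same: a small, deliberate nudge to the input flips the model's output.

```python
# A minimal sketch of an adversarial perturbation against a toy linear
# classifier (weights and inputs are assumed, for illustration only).
import numpy as np

# Toy "stop sign vs. speed limit sign" classifier: score > 0 => stop sign.
weights = np.array([1.2, -0.8, 0.5, 2.0])
bias = -0.1

def classify(x):
    return "stop sign" if x @ weights + bias > 0 else "speed limit sign"

x = np.array([0.9, 0.2, 0.4, 0.3])   # a "clean" input feature vector
print(classify(x))                    # -> stop sign

# FGSM-style step: move each feature a small amount (epsilon) in the
# direction that lowers the score. For this linear model, the gradient
# of the score with respect to the input is just `weights`.
epsilon = 0.5
x_adv = x - epsilon * np.sign(weights)

print(np.abs(x_adv - x).max())        # perturbation is bounded by epsilon
print(classify(x_adv))                # -> speed limit sign
```

Because each feature moves by at most epsilon, the perturbed input can look nearly unchanged while the model's decision flips, which is exactly what makes these attacks dangerous for driverless cars.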

AI Regulation on an International Scale 

Regulating AI, however, is easier said than done. Governments face several challenges:

  1. The technology behind generative AI is complex and sophisticated, beyond the grasp of most laypeople and, frankly, many politicians. Policymakers must first understand the technology before they can regulate it.
  2. AI is advancing incredibly quickly. Regulations must be both comprehensive and adaptable so that they keep pace; otherwise, rules written for today's technology could be obsolete within a year or two as new advances emerge.
  3. AI development is largely the domain of private companies, which have a historically complex relationship with regulators. (In 2023, Sam Altman publicly asked Congress to regulate AI, yet he has recently been accused of recklessness. The need for regulation is apparent to many AI leaders, but compliance is not so straightforward.)

Despite the challenges, governments around the world understand that AI regulation is economically and ethically crucial. 

Earlier this year, all 27 EU Member States and the European Parliament endorsed the EU AI Act, considered the most comprehensive AI regulation to date. It includes codes of practice, rules for general-purpose AI systems, a ban on applications that pose unacceptable risks, transparency requirements, serious fines for non-compliance, and more.

The United States is considered to be lagging behind Europe. In 2019, the government established the Senate Task Force on AI, which has helped pass about 15 AI-related laws. Many states have also begun rolling out laws concerning AI development, job layoffs, and more, but all of this remains a far cry from a comprehensive federal framework.

As the US presidential election draws near, candidates are preoccupied with other pressing topics: immigration, healthcare, taxes, international wars, and more. AI regulation will likely stay on the back burner until the election is over, when the incoming administration may make further headway.