Google DeepMind Forms New Organization Focused on AI Safety

The launch of Google’s GenAI models, such as Gemini, has sparked public and regulatory concern about their potential misuse. GenAI’s ability to generate realistic text, mimic writing styles, and fabricate plausible-sounding falsehoods poses serious risks: it can be used to spread disinformation, deepfakes, and other misleading content.

This is dangerous because such content can manipulate people and erode trust. These concerns have prompted policymakers to increase their scrutiny of AI. In the United States, Senators Mark Warner and Marco Rubio introduced legislation mandating safety audits for AI models.

In the European Union, the European Commission is drafting regulations covering AI across sectors. These efforts aim to address ethical concerns and ensure responsible AI deployment, with both regions pushing for transparent and accountable AI practices.

Google’s Efforts to Address AI Safety

The formation of AI Safety and Alignment, a new organization within Google DeepMind, is an important step toward addressing these concerns. The organization will focus on building concrete safeguards into current and in-development GenAI models, and on researching long-term solutions to the risks posed by artificial general intelligence (AGI).

Challenges Faced

  • High public skepticism of GenAI: A YouGov survey found that 85% of Americans are worried about the spread of misleading deepfakes.
  • Corporate reservations about GenAI: A Cnvrg.io survey found that 25% of companies have compliance, privacy, and reliability concerns about GenAI.
  • GenAI’s tendency to hallucinate: Models can generate false information that is difficult to distinguish from fact, and this behavior is hard to eliminate (a self-consistency sketch for flagging suspect answers follows this list).
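To make the hallucination problem concrete, here is a minimal self-consistency sketch: sample the same question several times and flag the answer when the samples disagree. This is an illustration under assumptions, not a technique attributed to DeepMind; `ask_model` is a hypothetical helper standing in for any sampling-based LLM call.

```python
# Minimal self-consistency check for flagging possible hallucinations.
# Hypothetical illustration: if repeated samples of the same question
# disagree, treat the answer as suspect rather than reporting it.
from collections import Counter


def most_common_answer(answers: list[str]) -> tuple[str, float]:
    """Return the majority answer and its agreement rate."""
    counts = Counter(a.strip().lower() for a in answers)
    answer, hits = counts.most_common(1)[0]
    return answer, hits / len(answers)


def check_consistency(ask_model, question: str, n_samples: int = 5,
                      min_agreement: float = 0.8) -> str:
    """Sample the model several times; flag low-agreement answers.

    'ask_model' is an assumed helper: any callable that sends the
    question to an LLM with sampling enabled and returns a string.
    """
    samples = [ask_model(question) for _ in range(n_samples)]
    answer, agreement = most_common_answer(samples)
    if agreement < min_agreement:
        return f"Low agreement ({agreement:.0%}) -- possible hallucination."
    return answer
```

A wrapper like this cannot prove an answer is true; it only catches the cases where the model's own samples are unstable, which in practice correlates with fabrication.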

Proposed Solutions

Anca Dragan, who leads AI Safety and Alignment, and her team plan to address these challenges by:

  • Considering human cognitive biases in training data: This prevents GenAI models from reflecting and amplifying existing societal biases.
  • Estimating uncertainty to identify shortcomings: This lets a model recognize when it lacks the information needed to give an accurate answer (see the sketch after this list).
  • Monitoring inference to catch failures: This lets the system detect and correct the model’s mistakes as they happen.
  • Conducting confirmatory dialogue for consequential decisions: This keeps humans in control of the decisions the model makes.
  • Tracking a model’s capacity for harmful behavior: This lets the team identify and address potential risks before the model is widely deployed.
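As a rough illustration of the uncertainty-estimation idea, the sketch below scores a prompt by the average entropy of a language model’s next-token distributions and abstains when the score is high. It uses GPT-2 via Hugging Face transformers purely as a stand-in model; the entropy proxy and the threshold are assumptions for illustration, not DeepMind’s actual method.

```python
# Minimal sketch of uncertainty estimation for a generative model.
# Hypothetical illustration only -- not DeepMind's actual technique.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def mean_token_entropy(prompt: str) -> float:
    """Average predictive entropy (in nats) over the prompt's tokens.

    Higher entropy means the model is less certain about its own
    next-token predictions -- a crude proxy for "not enough information".
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return entropy.mean().item()


# A downstream system could refuse to answer when uncertainty is high.
THRESHOLD = 4.0  # illustrative cutoff; tuned per model in practice
score = mean_token_entropy("The capital of Australia is")
if score > THRESHOLD:
    print(f"Entropy {score:.2f}: abstain -- the model looks unsure.")
else:
    print(f"Entropy {score:.2f}: proceed with generation.")
```

Production systems use far richer signals (ensembles, calibration, learned verifiers), but the design choice is the same: attach a confidence score to every generation and route low-confidence cases to abstention or human review.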

The formation of AI Safety and Alignment demonstrates Google’s commitment to AI safety, but the organization still faces significant challenges in earning public and regulatory trust. Its success will depend on its ability to develop effective safeguards against the risks posed by GenAI and AGI.

Also Read: Gemma Model: A New LLM Model that is Lightweight and “Open”