AI – Charting Rules of the Road | VOANews
Introduction
Artificial intelligence (AI) permeates nearly every facet of our digital lives, yet it operates within a framework that often lacks comprehensive regulatory oversight. Among the most significant recent advances is the rise of generative AI. Tools such as ChatGPT, Gemini, and LLaMA 2 use advanced machine learning models to generate original text, images, and other content with capabilities previously unimagined. However, the unknown consequences of machine learning have amplified calls for "virtual guardrails" to protect consumers.
In a recent executive order, President Joe Biden mandated that AI companies disclose their safety test results. As AI continues to push the limits of human capability and understanding, experts caution against relying solely on companies' self-reported findings. Companies often present an overly optimistic view of their technologies, driven by corporate incentives. If regulators choose to depend on internally generated performance tests and audits, they must establish standards for what information should be made publicly available.
One benchmark proposed is access to detailed usage data from AI platforms, which is critical for evaluating issues such as misinformation risks. Currently, the public and regulators lack insight into how generative models, like GPT-4, are being utilized, including the extent of their use for generating misleading news articles or engaging in cyber activities.
Concerns also arise when companies such as Meta claim to offer open-source accessibility for research while withholding critical information about their training data and algorithms. This lack of visibility hampers comprehensive critique and understanding of these AI technologies.
President Biden's executive order advocates for increased transparency, yet questions regarding enforcement and accountability remain. As the decision-making surrounding AI impacts billions, some experts argue that accountability should extend to corporate executives and shareholders, suggesting potential criminal liability for creating harmful products.
In addition, U.S. government agencies, including the National Institute of Standards and Technology and the Departments of Homeland Security, Energy, and Commerce, are instructed to develop guidelines addressing issues like algorithmic discrimination and distinguishing human-generated content from AI-generated creations.
While legislation has moved slowly in the U.S., there are recent signs of progress through new strategies that leverage the executive branch for accountability. In contrast, European lawmakers reached significant agreements last year to impose limitations on AI usage, and the U.K. is taking a proactive approach: Prime Minister Rishi Sunak's government is committing resources to establish an AI Safety Institute tasked with independently assessing, monitoring, and testing AI models.
As we broaden the scope of AI's influence, it is essential to avoid the assumption that companies will adequately self-regulate. Governments must assume the responsibility of safeguarding their citizens as they do in other contexts. Additionally, the United Nations has created an international AI advisory board that includes experts from various sectors to recommend global governance strategies for AI aligned with sustainable development goals.
Keywords
- AI
- Generative AI
- Cyber activities
- Misinformation
- Transparency
- Executive order
- Regulation
- Accountability
- Algorithmic discrimination
- AI safety
FAQ
1. What is generative AI?
Generative AI refers to advanced algorithms that can create original content, including text, images, and more, with unprecedented capabilities.
2. Why are regulations needed for AI?
Regulations are necessary to ensure consumer protection, address potential misuse of AI technologies, such as misinformation, and provide accountability for companies.
3. What did President Biden's executive order entail?
President Biden's executive order directed AI companies to disclose results from safety tests of their technologies to enhance transparency and consumer protection.
4. How are European lawmakers addressing AI concerns?
European lawmakers have reached significant agreements to impose limits on how AI can be used, ensuring greater accountability and safer implementation of AI technologies.
5. What role does the U.N. play in AI governance?
The United Nations has established an international AI advisory board composed of experts to recommend strategies for AI governance that align with sustainable development goals.