If the continuous stream of articles on artificial intelligence is to be believed, the world faces an existential threat.
Alarm levels were raised when Geoffrey Hinton, a 75-year-old British-Canadian scientist and winner of the Turing Award (dubbed the Nobel Prize of computing), resigned from Google earlier this year after more than a decade of leading AI research at the Silicon Valley-based company.
“I decided to shout fire. I don’t know what to do about it or which way to run,” Hinton, widely referred to as the Godfather of AI, told students at King’s College, Cambridge. And in a post-resignation interview with The New York Times, he expressed some regret about his life’s work.
Amid the frenetic pace of technological innovation and the proliferation of intentional disinformation, governments around the world realize the need for clear rules and a coordinated global response to the unstoppable rise of AI.
The biggest issues surrounding AI:
- Ethical and responsible use
- Education and workforce development
- Collaborative research and innovation
- Regulatory frameworks
- International cooperation
- Truth and fairness
Because of its serious and wide-ranging implications for our everyday lives, several nations have set out guidelines and principles to ensure AI is deployed in a manner that respects human rights, privacy, and fairness.
In 2016, the European Union passed the General Data Protection Regulation (GDPR), which enforces strict rules on data protection and transparency, thereby addressing concerns related to privacy. Meanwhile, the United States created the National AI Research Resource Task Force under the National AI Initiative Act of 2020 to promote responsible AI research and development.
To address the growing demand for AI expertise, countries have begun investing in education and workforce development initiatives. Governments are forming partnerships with schools and the private sector to offer AI-related courses, certifications, and training programs. By equipping their citizens with the necessary skills, countries aim to build a workforce ready for an AI-driven future.
In Canada, the Vector Institute works closely with universities and industry partners to foster AI research and develop talent, while China has implemented nationwide AI education programs to nurture skilled professionals.
Recognizing that the challenges posed by AI exist on a global scale, countries are engaging in collaborative research and innovation efforts.
A three-year-old initiative of Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron, the Global Partnership on Artificial Intelligence (GPAI) addresses shared concerns, promotes responsible AI development, and facilitates information sharing. By pooling their resources and expertise, members can collectively tackle complex issues like bias and accountability.
As of 2023, GPAI has 29 members: Argentina, Australia, Belgium, Brazil, Canada, Czech Republic, Denmark, France, Germany, India, Ireland, Israel, Italy, Japan, Mexico, the Netherlands, New Zealand, Poland, the Republic of Korea, Senegal, Serbia, Singapore, Slovenia, Spain, Sweden, Türkiye, the United Kingdom, the United States and the European Union.
On the domestic front, individual countries are starting to establish regulatory frameworks to govern AI development and deployment. While the strategies may vary, every country knows it must strike a balance between innovation and safety.
The United Kingdom is adopting sector-specific regulations, such as those for autonomous vehicles, while others, like Singapore, are implementing comprehensive frameworks that cover various AI applications.
Given the global nature of these challenges, countries recognize the importance of international cooperation. For instance, the Organisation for Economic Co-operation and Development (OECD) has developed the OECD Principles on AI to guide countries in shaping national strategies. And during an official visit to Washington, DC earlier this month, British Prime Minister Rishi Sunak called for a global summit on AI safety later this year.
Among the more immediate concerns of technology experts is engineered bias via AI, which can now generate hard-to-detect imitations of text, images and voice that could fuel destabilizing disinformation during elections.
Over the past year, the chatbot phenomenon has also sparked serious worries, with ChatGPT reaching 100 million users within just two months of its launch.
In this new incarnation of man vs. machine, the question is: Who will be in control?