Artificial intelligence (AI) is stirring a mix of hope and apprehension among workers globally. Recent research by the International Monetary Fund indicates that AI could affect the jobs of four out of every ten employees worldwide, a figure that rises to six out of ten in advanced economies, touching sectors from telemarketing to law. Meanwhile, a new World Economic Forum report finds that 50% of surveyed economists now expect AI to become a major commercial disruptor in 2024, up from 42% the previous year.
At the 2024 World Economic Forum Annual Meeting in Davos, business leaders put a spotlight on AI, and in particular on how to regulate it so that it serves as a positive force in business and society. Arati Prabhakar, director of the US White House Office of Science and Technology Policy, emphasized AI’s transformative power at a Davos panel on January 17 while advocating careful management of its risks.
The current discourse on AI governance is at a critical juncture, balancing AI’s benefits against its potential negative impacts, including unintended and harmful uses. Notable examples include software that discriminated against job applicants based on age and AI chatbots generating offensive content.
The debate on AI regulation has been ongoing in regions like the EU and the US, but answers remain elusive. A key dilemma is how to regulate AI effectively without stifling innovation. At Davos, one proposed solution is regulating AI from its inception, focusing on algorithm evaluation and auditing to prevent misuse of data and illegal outcomes. For instance, in the US, federal agencies have suggested quality-control checks for algorithms used in mortgage property evaluations.
However, this approach raises concerns within the industry. Andrew Ng, founder of DeepLearning.AI and a Stanford University professor, warns that overly burdensome regulation of AI development could hinder innovation, create anti-competitive dynamics that favor large tech companies, and slow the delivery of AI’s benefits.
Other experts, such as Khalfan Belhoul, CEO of the Dubai Future Foundation, argue that directly governing the technology itself may be impractical and suggest instead regulating the effects of AI after deployment. This viewpoint reflects the status quo: existing laws on privacy, cybersecurity, and consumer protection, though not written with AI in mind, already apply to it. Brad Smith, vice-chair and president of Microsoft, highlighted this at Davos, noting the overlap between existing laws and new AI-specific regulations.
Wendell Wallach, a senior scholar at the Carnegie Council for Ethics in International Affairs, points out that some industries, such as healthcare, already have robust regulations that indirectly govern AI applications. The debate continues over where to strike the right balance in AI governance so that the technology’s immense potential is harnessed responsibly and ethically.