The undeniable challenges posed by chatbots and AI will not suddenly disappear, as the specialised security website Oodaloop put it. And while many well-informed and well-intentioned people have signed an open letter calling for a six-month pause in advanced AI research, doing so is unrealistic and unwise. The problems posed are difficult to solve because they are so convoluted and multifaceted: they involve so many stakeholders, intersecting domains and competing interests that addressing them all is genuinely hard. A pause in technological research will not help solve these human conundrums.
What will help is systematic, methodical and massive public engagement that informs pilot projects addressing the business and civilian implications of artificial intelligence at the national and local levels. Everyone will be affected by the promises and potential dangers of the shift in thinking that advances in AI present. Everyone should therefore have a voice, and everyone should work to ensure that society is informed and prepared to thrive in a rapidly changing world that will soon look very different.
At first glance, halting AI development may seem compelling given the challenges posed by large language models (LLMs), but this approach is flawed for several reasons. To begin with, global competition must be taken into account: even if all U.S. companies agreed to a pause, other countries would continue their AI research, undermining the effectiveness of any national or international agreement.
Secondly, AI diffusion is already underway. The Alpaca experiment at Stanford University demonstrated that an openly available model could be fine-tuned to approximate the capabilities of ChatGPT for less than $600. Advances like this accelerate the spread of AI by making it accessible to a wide range of actors, including those with malicious intent.
Thirdly, history teaches us that a pause could simply drive AI development underground. Publicly halting AI research could prompt nations to pursue advanced work in secret, with dire consequences for open societies. The scenario parallels the Hague Convention of 1899, in which the major powers publicly banned poison-filled shells, only to continue their research in secret and eventually deploy noxious gases during World War I.
Going forward, effectively addressing the challenges arising from AI calls for a proactive, results-oriented and cooperative approach to public engagement. Think tanks and universities can engage the public in dialogue about how to work, live, govern and coexist with technology that affects society as a whole. By including diverse voices in the decision-making process, we can better address complex AI challenges at the regional and national levels.
In addition, industry and political leaders should be encouraged to join the search for non-partisan, multi-sectoral solutions that keep civil society stable. By working together, we can bridge the gap between technological advances and their social implications.
Finally, it is essential to pilot AI schemes in sectors such as labour, education, health, law and civil society. We should learn how to create civic environments in which AI can be responsibly developed and deployed. These initiatives will help us better understand and integrate artificial intelligence into our lives, reducing risk while ensuring that its potential is realised for the greater good.