ChatGPT, the artificial intelligence conversation model from U.S.-based OpenAI, has sparked global concern as the company's key investor, Microsoft, begins to assess the future human impact of the latest model behind it, GPT-4. GPT-4 is a large multimodal model that generates human-like language and has achieved human-level performance on a variety of standard professional and academic tests; for example, it passed a simulated bar exam with a score in the top 10% of test takers.
However, GPT-4 carries potential risks that could negatively affect society. In an open letter, Tesla CEO Elon Musk and more than a thousand AI experts and industry executives called for a six-month pause on developing systems more powerful than GPT-4 until a shared set of safety protocols is developed, implemented, and audited by independent experts. They also cited the economic and political disruption that advanced AI systems could cause and called on developers to work with policymakers to establish regulatory authorities.
The letter, released by the nonprofit Future of Life Institute, was signed by more than 1,000 people, including Musk, Apple co-founder Steve Wozniak and Stability AI CEO Emad Mostaque.
The letter says, “Powerful AI systems should only be developed if we are confident that their effects are positive and their risks are manageable.”
Also on Monday, Europol, the European Union's law enforcement agency, raised ethical and legal concerns about advanced artificial intelligence such as ChatGPT, warning that such systems could be misused for phishing, disinformation and cybercrime, among other things. Since its release last year, Microsoft-backed OpenAI's ChatGPT has prompted competitors to launch similar products, and many companies have integrated it or comparable technologies into their applications and products.
OpenAI previously said it spent six months iterating on and fine-tuning GPT-4 to achieve its best results to date (though still imperfect) in factuality, steerability and refusing to act outside its safety guardrails.