The UK government is pushing companies such as OpenAI, Anthropic and Google to explain the inner workings of their respective Large Language Models (LLMs). While some models are publicly available, others, such as GPT-3.5 and GPT-4, are not, and OpenAI has been reluctant to share many of their details.
The UK is preparing to host a new global AI summit that will bring together governments, companies and researchers to examine the risks posed by AI and discuss how to mitigate them.
One of the reasons companies are reluctant to share their models' inner workings is that doing so could reveal proprietary information about their products. If malicious actors learn more about a model's internals, that knowledge could also make AI models more vulnerable to cyberattacks.
One of the things the government wants to examine is model weights, the numeric parameters that define the strength of connections between neurons in different layers of the model, according to the FT. Currently, AI companies are not required to share these details, but there have been calls for more transparency in this area.
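As a rough illustration (a toy sketch, not any vendor's actual architecture), the weights in question are simply matrices of numbers that determine how strongly each neuron in one layer influences the neurons in the next:

```python
import numpy as np

# Toy example: a single layer of a neural network.
# Real LLMs have billions of such weights spread across many layers.
rng = np.random.default_rng(0)

inputs = rng.normal(size=3)        # activations of a 3-neuron input layer
weights = rng.normal(size=(4, 3))  # 4x3 matrix: connection strength from each
                                   # input neuron to each of 4 output neurons
biases = np.zeros(4)

# Next layer's activations: weighted sum of inputs plus bias,
# passed through a ReLU nonlinearity.
outputs = np.maximum(0, weights @ inputs + biases)

print(outputs.shape)  # one activation per output neuron
```

Sharing these matrices would effectively hand over the trained model itself, which is why companies treat them as closely guarded intellectual property.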
The UK will hold the first summit at Bletchley Park in November. Bletchley Park holds an important place in the history of computing: it was there that Allied codebreakers decrypted Nazi communications during the Second World War. The Turing Test, a foundational concept in artificial intelligence, is named after Alan Turing, who worked as a codebreaker there.
The Financial Times notes that Google's DeepMind, OpenAI and Anthropic all agreed in June to open up their models to the UK government for research and safety purposes. However, at the time, the parties did not agree on the scope or technical details of that access. Now, the government is asking for considerably more openness.
At the end of the day, for the summit to be a success, attendees will need a solid understanding of how the models work so that they can better assess their dangers. Whether they will be granted sufficient access to them is another question.