The Microsoft-backed technology advocacy group Business Software Alliance (BSA) issued a public release calling for rules governing the use of artificial intelligence based on national privacy legislation.
The BSA represents business software companies such as Adobe, IBM and Oracle. Microsoft is one of the leaders in AI, having recently invested in AI research firm OpenAI, creator of the generative AI chatbot ChatGPT. But Google, another key player in advanced AI in the US, is not a member of the BSA.
The BSA calls for four key protective measures:
— the US Congress should explicitly require when companies must assess the design or impact of AI;
— those requirements should take effect when AI is used to make “significant decisions”, which Congress should also define;
— Congress should designate an existing federal agency to review whether companies are complying with the rules;
— Congress should require companies to develop risk management plans for high-risk AI.
“We’re an industry group that wants Congress to pass legislation like this,” said Craig Albright, BSA vice president for U.S. government relations. “So we’re trying to bring more attention to this opportunity. We don’t feel that AI regulation has received the attention it deserves.”
Albright added: “This isn’t going to answer all the questions about AI, but it’s an important answer that Congress can give to an important question about AI.”
The arrival of accessible advanced AI tools like ChatGPT has accelerated the push to build safety guardrails around the technology. The US already has a voluntary risk management framework in place, but many advocates are pushing for stronger, binding protections. Meanwhile, Europe is working to finalise its Artificial Intelligence Act, which would build safeguards around high-risk AI.
“As Europe and China move forward with frameworks to regulate and nurture new technologies, US policymakers need to ask: is digital transformation an important part of the economic agenda?” Albright said. “If so, we should have a national agenda for driving digital transformation that includes rules around artificial intelligence, national privacy standards and strong cybersecurity policies.”
In its message to Congress on the issue, the BSA said that the American Data Privacy and Protection Act, a bipartisan privacy bill approved by the House Energy and Commerce Committee last session, is an appropriate vehicle for developing new AI rules. While the bill still has a long way to go before it becomes law, the BSA says it provides the right framework for the kind of national AI guardrails the government should establish.
As many expected, the BSA hopes that when the American Data Privacy and Protection Act is reintroduced, it will contain new rules governing AI. Albright says the organisation has already engaged with the House Energy and Commerce Committee on its proposal, and that the committee is “open to many different voices”.
While the American Data Privacy and Protection Act still faces hurdles to becoming law, Albright acknowledged that passing any piece of legislation takes a huge effort. “What we’re trying to say is that it’s doable, it’s something that can be agreed upon in a bipartisan way. So we’re hoping that however they legislate it, that’s going to be part of it.”