Computer says: Yes! – AI Industry wants to be regulated … But not too much

Will there be AI regulation, and if so, in what form?

On May 16th, 2023, the first Senate hearing on Artificial Intelligence (AI) oversight was held, during which there seemed to be a clear consensus among industry participants that some form of AI regulation would be desirable. A few days later, the leaders of the G7 included AI governance and interoperability in their final communiqué, showing that AI is firmly on the agenda across the globe. All this talk of regulating a relatively new industry – one that seems disruptive but whose true effects are as yet unknown (and difficult to anticipate and regulate) – is all the more remarkable. As a few Senators noted during the hearing, such a consensus within the industry itself that regulation is needed is unique. This raises the question: why is there such a strong urge to be regulated?

At the hearing, the AI industry was represented by the CEO of OpenAI (Sam Altman), the Chief Privacy & Trust Officer of IBM (Christina Montgomery) and noted AI expert Prof. Gary Marcus. In their testimony and the ensuing discussions there was a specific focus on concerns about:

  1. AI capabilities (e.g. election manipulation, misinformation, etc.);
  2. privacy and copyright (i.e. how the AI models are trained and what data is used); and
  3. how AI can be regulated as so little is known and understood about its effects.

In addition, there was wide agreement on the side of the legislators that the (perceived) mistakes made in the regulation of social media platforms (i.e. Section 230 of the Communications Decency Act) must be avoided. This sentiment was echoed by all industry participants.

Manipulation and Election Interference

There was great concern about the capability of AI to manipulate users and potentially interfere with elections through its ability to easily produce swaths of misinformation (or as OpenAI put it, “photoshop on steroids”). All participants agreed that regulation on this issue would be needed in combination with better education of the public.

With regard to the issue of manipulation, Prof. Marcus noted that so much is unclear about what these models do and what data they use (their opaqueness means there is a risk of bias in the data sets) that, combined with the ability of these systems to shape beliefs and perceptions, it would be too dangerous to leave control over such capabilities to a few for-profit companies. The temptation to revert to some form of commercial exploitation would simply be too great.

Privacy and Copyright

The protection of personal data and copyright was discussed in relation to the data sets that need to be used by the AI models. There was some consensus that a new privacy law might be needed to better regulate the data that is being used by the models.

All industry participants also seemed to agree that people have a right to their own data – if truly personal – and should be able to exclude themselves from being included in any data sets to the extent that personal data is used. Similarly, all industry participants agreed that all creators should be rewarded for their work, though how such a system could be put in place remained unclear, as did the boundary with fair and transformative use cases.

Regulations, Licensing Schemes and an AI Regulator

The majority of the hearing was spent on future directions of potential regulation. IBM noted that regulations will be key to creating public trust in the technology. It would like to see regulations that focus on the points where the technology meets the end-user and that emphasize issues such as transparency (i.e. disclosure of the model, the governance of the model and the data that was used) and accountability. Such regulations should, in IBM's view, also distinguish between use cases, so that “high risk” activities are regulated more strictly than “low risk” activities. OpenAI agreed that providing transparency (such as the values of the model) is important, but noted that regulations should not become unduly burdensome on smaller companies. It proposed basing regulatory burdens on a model's capabilities and computing power.

This is a significant divergence, as the true meaning of transparency, consent, and control is not yet settled, and visions of the balance between interoperability and end-user disclosure differ, as do views on the relationship between transparency and consent.

All industry participants agreed that “nutrition labeling” (i.e. providing the consumer with basic information on the model, the data and other relevant issues) could be a good way to gain the end-user's trust. This moved the discussion on to how oversight could best take place, and whether the current regulators are capable of providing such oversight.

Prof. Marcus noted that while the FCC could perform some of the work that is required, it would still fall short of what is needed to induce the required level of trust. As such, he suggested international cooperation on AI standards, if only so that companies would not need to adjust their models for each country separately. OpenAI agreed and suggested that a global body modeled after the IAEA should be created. Only IBM disagreed, feeling that the current regulators are quite capable of effective oversight; it cited the EU AI Act as a regulatory system that governs for different risks without unduly burdening the industry or creating a new regulator.

On what powers any such AI agency should have, OpenAI and Prof. Marcus agreed that it should have a clear remit to provide oversight on: (i) critical capabilities (e.g. persuasion, manipulation, influence, etc.); and (ii) the administration of an industry licensing system (with the power to revoke licenses). It would be up to the agency to determine what capabilities and scale are required for oversight. This is relevant as developments at the most sophisticated end are limited to only a few companies of scale (due to resources and cost). This, as OpenAI noted, has some benefit to a regulator as it limits oversight to only a few established companies.

Subsequent Developments

Subsequent to the hearing, developments regarding the regulation of AI have moved quickly. At the G-7 in Hiroshima, it was noted how rules governing technologies such as AI had not kept pace with reality. Issues of misinformation through the use of AI were cited as a possible concern (albeit without quite being defined) and the creation of greater trust in AI was said to be needed. In order to have some alignment among members on norms and standards the “Hiroshima AI process” was established (for which officials of the G-7 will meet for the first time at the end of May).

Not long after that, the world got its first taste of how quickly AI-generated misinformation can spawn, spread, and cause damage. On 23 May, an AI-generated fake image of an explosion at the Pentagon surfaced on a verified Twitter account. In no time, $500 billion had disappeared from the US stock market, giving us a foretaste of the disruption AI-generated misinformation can cause.

A few days after that the President of Microsoft (Brad Smith) gave Microsoft’s view on AI regulation. He felt AI regulation would be needed and proposed measures such as: (i) a licensing requirement for highly capable AI; (ii) a licensing requirement for AI data centers; (iii) labeling for AI created content; and (iv) safety brakes for AI in critical infrastructure. All these proposals should help to induce trust in the technology.

Meanwhile, OpenAI released a blog post reiterating its view that a global regulator is needed. The EU Commission in the meantime met with the CEO of Alphabet (Sundar Pichai) to discuss their intention of concluding a voluntary AI pact. This voluntary pact would serve as a stop-gap until further regulations come in and multilateral guardrails are in place.

A few days after that, IBM set out its view on why, as it had previously stated, a new regulator would be superfluous. It stated that, instead, every agency should be made AI-ready and be supported by appropriate legislation and budgets.


Although there is a lot of talk of regulation of AI, much remains uncertain. For starters, the mere definition of AI is still unclear in most jurisdictions. The same is true of core concepts including personal data, transparency, and consent. As the short overview above demonstrates, the developments of a mere week could practically fill a book. For the moment the EU AI Act is the only concrete proposal for legislation, but whether convergence with or divergence from the EU model is desirable – and if so, desired – remains to be seen.

But the main issue remains that the technology is so new and changing so quickly that it is questionable whether specific laws at this stage are able to keep up (the EU AI Act has already needed some adjustments to accommodate some of the newest developments). As Prof. Marcus noted during the hearing, the tools that are required for oversight of this technology have not even been invented yet, so what chance of real oversight is there?

There is a notable emphasis on the next step of development being richer and better data sets with which to train the models. Data is the fuel of this industry. The EU and US know full well that this means they have a key role to play in how AI can be managed. Stricter privacy and copyright laws can have onerous effects on the growth of AI companies, but may at the same time be necessary to facilitate the trust that those companies require in order to obtain such data sets. There is work to be done on exactly what these laws mean in the context of AI, and how to define them suitably for the new technologies. Certainly, issues with the last generation of technology, such as the difficulty in deciphering the precise meaning of the GDPR, ought to be avoided.

For the moment the most established models are in the hands of the traditional big tech companies, i.e. OpenAI / Microsoft, Google and Meta. Although there are some challengers on the market (most notably Anthropic – though it is backed by Google), it remains to be seen how effectively these models can compete. Time will tell. Whereas a few months ago there seemed to be a settled belief that LLMs would be too expensive and cumbersome for smaller companies to run, developments around LoRA (a low-cost fine-tuning technique) seem to disprove this: a team of researchers fine-tuned a model with ChatGPT-like capabilities for a mere $600 – yes, you read that correctly, $600, not $600m! This ability to deploy competing solutions shows the great promise of the technology, if applied under a suitable risk-based and evidence-based framework: Web 2.0 replaced by a decentralized Web 3.0 model based on neutral risk-governance principles and no gatekeepers. As such, a cynic would note that this push for regulation might well be a play by the more established tech companies at regulatory capture, creating barriers to entry so high that it becomes impossible for upstarts to compete.

When one assesses the sincerity of this desire to be regulated, context is important. OpenAI is, at the moment, a market leader with the backing of Microsoft. Its CEO, whilst on his tour through Europe, was quick to mention that if the regulatory environment in the EU got too onerous, OpenAI would consider withdrawing its operations entirely, only to walk back this statement a few days later. Similarly, IBM keeps insisting on regulation at the point where the technology meets the end-user when it, being an enterprise services provider, has no contact with end-users at all. So a healthy dose of skepticism is perhaps appropriate.

This means that regulators must keep an equally open mind. Whereas OpenAI noted during the hearing that there may be a benefit for a regulator in only having to oversee a handful of operational models, this reality could change very quickly. With so much uncertainty remaining, perhaps the best advice comes from Prof. Emily Bender, who suggested that instead of coming up with new laws and agencies, enforcers should consider clarifying the vagueness in existing laws, rather than reinventing the wheel.

Jeff Senduk is Special Counsel to Dnes & Felver, where he focuses on the legal responses to emergent technologies.

