No More Kings? Due Process and Regulation Without Representation Under the UK Competition Bill

Reposted from invited submission to Truth on the Market.

What should a competition law for the 21st century look like? The question is debated across many jurisdictions. The Digital Markets, Competition and Consumers Bill (DMCC) would change UK competition law’s approach to large platforms. The bill’s core purpose is to place the UK Competition and Markets Authority’s (CMA) Digital Markets Unit (DMU) on a statutory footing, with relaxed evidentiary standards, so that it can regulate so-called “Big Tech” firms more easily. This piece considers some areas to watch as debate over the bill unfolds.

Evidence Standards

Since Magna Carta, the question of what evidence justifies government action has been at the heart of regulation. In that case, of course, a jury of peers decided what the government could do. What is the equivalent rule under the DMCC?

The bill contains a judicial-review standard for challenges to DMU evidence. This amounts to a hands-off approach, and it sits philosophically quite some distance from the field at Runnymede where King John sealed Magna Carta. It reflects, instead, the social-democratic view that an elite of regulators ought to be empowered for the perceived greater good, subject only to checks that they have stayed within the scope of their powers and that there is a rational connection between those powers and the decision made. There is, in other words, no jury of peers. There is, instead, a panel of experts. And on this view, the experts decide what policy to pursue and weigh the evidence for regulation.

This would be wonderful in a world where everyone could always be trusted. But there are risks in this generosity, as it would also allow even quite weak evidence to prevail. For every Queen Elizabeth II, there is a King John. What happens if a future King John takes over a DMU case? Could those affected by weak evidence standards, or capricious interpretations, push back?

That will not be easy. The risk derives from the classic Wednesbury case, the starting point for judicial review of agency action in the UK. Wednesbury has similarities to Chevron review in the United States, but without the subsequent developments, such as scrutiny of whether major policy questions are properly decided by agencies at all, following West Virginia v EPA.

Wednesbury requires a determination to be proven irrational before a court can overturn it. This is a very high bar, amounting to little more than a sanity test. Black cannot be white, but every shade of grey must be accepted by the court, even if the evidence points strongly against the interpretation. For example, consider the question: is there daylight? There is a great difference between an overcast day and a sunny day, and among early dawn, midday, and late dusk. Yet on a Wednesbury approach, even the dimmest hour of the darkest day must be accepted as “daylight” because, yes, there is some light. It is essentially a tick-box approach. It trusts the regulator completely on policy: in this case, what counts as bright enough to be called daylight.

At some level, this posture barely trusts the courts at all. It thus foregoes major checks and balances that can helpfully come from the courts. It is myopic, in that sometimes a fresh and neutral pair of eyes is important to ensure sensible, reasonable, and effective approaches. All of us have sometimes focused on a tree and not seen the forest. It can be helpful for a concerned friend to tell us that, provided that the friend is fair, reasonable, and makes the comment based on evidence—and gives us a chance to defend our decision to look only at particular trees.

There has been no suggestion that this fair play is lacking from UK courts, so the bill’s hostility to the tribunal’s role is puzzling. Surely, the DMCC’s intention is not to say: leave me alone when you think I am going wrong?

This has already been criticised in influential commentary, e.g., Florian Mueller’s FOSS Patents blog post on the CMA’s recent decision to block the Microsoft/Activision merger. It is the core reason for the CMA’s assertive positions in both the Activision case and the earlier Meta/Giphy case, in which, despite a CMA loss on procedural aspects, all of its policy grounds and evidentiary interpretations withstood challenge.

This will have major implications for worldwide deals and worldwide business practices, not least as it could effectively displace decisions by other jurisdictions to assess evidence more closely, or not to regulate certain aspects of conduct.

There is also the important point that courts’ review of evidence has sometimes been very positive in its effects. In a nutshell, if the case for regulation is strong, then there should be no problem with review of the evidence by a neutral third party. This can be seen in the leading case on appeal standards in UK telecoms regulation, BT and CityFibre v Ofcom, which (prior to the move to judicial review for such cases) involved deregulation to help encourage innovation in regional business centers (Leeds, Manchester, Birmingham, etc.).

Overreach by Ofcom, in the form of a predatorily low price cap, was holding back regional business development, because it was not profitable to invest in higher-value but also higher-priced next-generation communications systems. The cap was overturned because the appeal standard allowed errors in the evidence base to be identified; notably, a requirement that there be as many as five rivals in an area before it could be considered competitive, which simply contradicted the relevant customer evidence. It is very unlikely that this helpful result would have obtained had the matter been one for hands-off judicial review.

Evidence Framework

Closely related to the first point on judicial review is the question of affirmative evidence standards. Even under a judicial-review standard, the DMU must still apply the factors set out in the bill. The DMCC frames that evidence in several significant ways.

The designation process

This emphasises scale. A worry here is that scale alone may displace the analysis of affirmative evidence, i.e., “big is bad” analysis. What if, as in the title of the recent provocative book, sometimes Big is Beautiful? That thought seems to be lacking from the bill (see s.6(1)(a)). As there are scenarios in which companies are large but still competitively constrained, it would be helpful to consider consumer impacts at the designation stage. There is no case for regulating a company just because it is large if the outcomes for consumers are good.

The framing of the countervailing benefit exemption

The bill seeks to give voice to consumer impacts in its approach to conduct regulation, but the bar is set high. Under s.29(2)(c), there must be proof that the conduct is indispensable to, and proportionate to, a consumer benefit.

This reverses the burden of proof; companies must prove that they benefit consumers. Normally, this is simply left to the market, unless there is market power. You and I buy products in the marketplace, and this is how consumer benefit is assessed.

In a scenario where this cannot be proven, s.20 would allow conduct orders to require “fair and reasonable terms” (s.20(2)(a)). It does not say to whom or according to whom. This risks allowing the DMU to require reasonable treatment of other businesses, unless the defendant company can prove that consumers benefit. There are strong arguments that this risks harming consumers for the sake of less-efficient competitors.

Consumer evidence

S.44(2) allows, but certainly does not mandate, consideration of consumer benefits before imposing a pro-competition intervention (PCI). Under s.49(1), such a PCI would carry the sweeping market-investigation powers in Schedule 8 of the Enterprise Act 2002, which extend to rewriting contracts (Sch 8, para 2), setting prices (Sch 8, paras 7 and 8), or even breaking up companies (Sch 8, paras 12 and 13). It is therefore essential that the evidence base be specified more precisely. There must be a clear link back to the concern that gave rise to the PCI and an explanation of why the PCI would remedy it. There is reference to the ability to test remedies in s.49(3) and (4), but this is not mandatory. Without stronger evidentiary requirements, PCIs risk becoming discretionary government control over large companies.

Given the breadth of these powers, it would be helpful to require affirmative evidence in relation to asserted entry barriers and competitive foreclosure. Even if there is truly a desire to dilute the current evidence standards, what remains could still be specified. Not specifically requiring evidence of impacts on entry and foreclosure, as in the current proposal, is unwise.

Prohibited Conduct

The contemplated codes of conduct could have far-reaching consequences. Risks include inadvertent prohibitions on the development of new products and requirements to halt product development where there is an impact on rivals. See especially s.20(3)(b) (own-product preference) and (h) (impeding switching to others), which arguably could encompass even pro-competitive product integration. There is an acute need for clarification here: product development and product integration frequently affect rivals, but they are also important for consumers and for other innovative businesses.

It is risky to use overly broad definitions here (e.g., “restricting interoperability”) without saying more about what makes for a stronger or weaker case for interoperation (both exist). Interoperability is important, but the evidence relating to it would benefit from definition. Otherwise:

  • Bill s.20(3)(e) could well capture improvements to product design;
  • Weasel words like “unfair” use of data (s.20(3)(g)) and “users’… own best interests [according to the DMU]” (s.20(2)(e)) are ripe for rent-seeking; and
  • A particular risk with the concept of “using data unfairly” in s.20(3)(g) is that it could be abused to intervene in data-driven markets on an unprincipled basis.

For example, the data provision could easily be used to hobble ad-funded business models that compete with legacy providers. There are tensions here with the stated aim of the legislative consultation, which was to promote, and not to inhibit, innovation.

A simple remediation here would be to apply a balance-of-evidence test keyed to consumer impact, as currently happens with “grey list” consumer-protection matters: the worst risks are “blacklisted” (e.g., excluding liability for death), while more equivocal practices (hidden terms, etc.) are “grey listed.” The latter are illegal, but only where shown, on balance, to be harmful. That simple change would address many of the evidence concerns, as the structure for weighing evidence would be clarified.

Process protections

The multi-phase due-process protections of the mergers and market-investigations regimes are notably lacking from the conduct and PCI frameworks. For example, a merger matter uses different teams and different timeframes for the initial and final determinations of whether a merger can proceed.

This absence is no surprise, as a major reform elsewhere in the DMCC is to overturn the Competition Appeal Tribunal’s decision in Apple v CMA, in which the CMA was found to have failed to comply with the statutory timing requirements for market investigations. The time limits there are designed to prevent multiple bites of the cherry and to prevent strategic use of protracted threats of investigation.

The bill would allow the CMA more flexibility than the existing market-investigation regime does. Is the CMA really asking to change the law, having failed to abide by the due-process requirements of an existing one? That would be a bit like asking for a new chair, having refused to sit on a vacant chair right in front of you. Unless this is clarified, the proposal could be misread as a due-process exemption, sought precisely because the DMU does not want to give due process.

The DMCC’s proponents will argue that the designation process provides timeframes and a first-phase element for firms with “strategic market status” (SMS), with conduct and PCI regulation to follow only if a designation occurs. This, however, overlooks a crucial point: the designation process is effectively a bill of attainder, aimed at particular companies. Where, then, are the due-process rights for those affected? Logically, the protections should exceed those in the Enterprise Act market-investigation setting, as those investigations are marketwide, whereas DMU action is aimed at particular firms.

A very sensible check and balance here would be for the DMU to have to make a recommendation for another CMA team to review, as is common in merger-clearance matters.

Benchmarking and Reviews

The proposal contains requirements for review (e.g., s.35 on review of conduct enforcement). The requirements are, however, relatively weak. They amount to an in-house review with no clear framework. There is a very strong argument for a different body to review the work and to prevent mission creep. This may even be welcome to the DMU, as it outsources review work.

The standard for review (e.g., benefits to end users) ought to be clearly specified. The vague reference to “effectiveness” is not this, and has more in common with EU law (e.g., Toshiba), where the “effectiveness” of regulation is determined chiefly by the state, and not by the law. (The holding in Toshiba being that, of several interpretations, the state is entitled to the most “effective” one, according to… the state.) To the extent that one hopes the common-law regulatory tradition differs, it is puzzling to see the persistence of this statist approach following UK independence from the EU. Entick v Carrington, the DMCC is not.

Other important benchmarking includes reviews of the work of other jurisdictions. For example, the DMU ought not to be given powers that exceed those of EU regulators. Yet arguably, the current proposal does exactly this by omitting some of the structured evidence points in the EU’s Digital Markets Act. There is also a need to ensure international-comity considerations are given due weight, given the broad jurisdictional tests (s.4: UK users, business, or effect). Others—including, notably, jurisdictions from which the largest companies originate—may make different decisions to regulate or not to regulate.

In the case of the UK-U.S. relationship, there have been some historic disagreements to this effect. For example, is the DMU really to be the George III of the 21st century, telling U.S. business what to do from across the sea? It is doubtful that this is intended, yet some of the commitments packages already have worldwide effect. Some in America might just say: “No more kings!”

Those with a long memory will remember how strenuously the UK government pushed back on perceived U.S. overreach in the other direction, notably in the Laker Airways v British Airways antitrust litigation of the 1980s, and in the 1990s in the amicus brief submitted by the UK government in Hartford Fire Insurance v California, at the U.S. Supreme Court, no less. It surely cannot be intended that the UK, having objected to de facto U.S. and Californian regulation of Lloyd’s of London, should now regulate U.S. tech giants on a de facto worldwide basis under UK law?

Public opinion will not take kindly to that type of inconsistency. To the extent that Parliament does not intend worldwide regulation—a sort of British Empire of Big Tech regulation—the extent of the powers ought to be clarified. Indeed, attempting worldwide regulation would very predictably fail (e.g., arms races in regulation between the DMU and EU Commission). An EU-UK regulation race would help nobody, and it can still be avoided by attention to constructive comity considerations.

As the DMCC makes its way through parliamentary committees, those with views on these points will have an excellent opportunity to make themselves known, just as the CMA has done in recent global deals.

Computer says: Yes! – AI Industry wants to be regulated … But not too much

Will there be AI regulation, and if so, in what form?

On May 16th, 2023, the first Senate hearing on artificial intelligence (AI) oversight was held, during which there seemed to be a clear consensus among the industry participants that some form of AI regulation would be desirable. A few days later, the leaders of the G7 included AI governance and interoperability in their final communiqué, showing that AI is firmly on the agenda across the globe. All this talk of regulating a relatively new industry, one that seems disruptive but whose true effects are as yet unknown (and difficult to anticipate and regulate), is all the more remarkable. As a few Senators noted during the hearing, such a consensus from industry itself that regulation is needed is unique. This raises the question: why is there such a strong urge to regulate?

At the hearing, the AI industry was represented by the CEO of OpenAI (Sam Altman), the Chief Privacy & Trust Officer of IBM (Christina Montgomery), and noted AI expert Prof. Gary Marcus. In their testimony and the ensuing discussions, there was a specific focus on concerns about:

  1. AI capabilities (e.g. election manipulation, misinformation, etc.);
  2. privacy and copyright (i.e. how the AI models are trained and what data is used); and
  3. how AI can be regulated as so little is known and understood about its effects.

In addition, there was wide agreement on the side of the legislators that the (perceived) mistakes made in regulating social media platforms (i.e. Section 230 of the Communications Decency Act) must be avoided. This sentiment was echoed by all industry participants.

Manipulation and Election Interference

There was great concern about the capability of AI to manipulate users and potentially interfere with elections through its ability to easily produce swaths of misinformation (or as OpenAI put it, “photoshop on steroids”). All participants agreed that regulation on this issue would be needed in combination with better education of the public.

With regard to the issue of manipulation, Prof. Marcus noted that so much is unclear about what these models do and what data they use (the opaqueness means, for example, that there is a risk of bias in the data sets) that, combined with these systems’ ability to shape beliefs and perceptions, it would be too dangerous to leave control over such capabilities to a few for-profit companies. The temptation to revert to some form of commercial exploitation would simply be too great.

Privacy and Copyright

The protection of personal data and copyright was discussed in relation to the data sets that need to be used by the AI models. There was some consensus that a new privacy law might be needed to better regulate the data that is being used by the models.

All industry participants also seemed to agree that people have a right to their own data – if truly personal – and should be able to exclude themselves from being included in any data sets to the extent that personal data is used. Similarly, all industry participants agreed that all creators should be rewarded for their work, though how such a system could be put in place remained unclear, as did the boundary with fair and transformative use cases.

Regulations, Licensing schemes and an AI Regulator

The majority of the hearing was spent on the future direction of potential regulation. IBM noted that regulation will be key to creating public trust in the technology. It would like to see regulations focused on the points where the technology meets the end user, emphasizing issues such as transparency (i.e. disclosure of the model, its governance, and the data that was used) and accountability. Such regulations should also distinguish between use cases, in IBM’s view, so that “high risk” activities are regulated more strictly than “low risk” activities. OpenAI agreed that providing transparency (such as the values of the model) is important, but noted that regulations should not become unduly burdensome on smaller companies. It proposed distinguishing regulatory burdens based on a model’s capability and computing power.

This is a significant divergence, as the true meaning of transparency, consent, and control is not yet settled, and visions differ on the balance between interoperability and end-user disclosure, as well as on the relationship between transparency and consent.

All industry participants agreed that “nutrition labeling” (i.e. providing basic information to the consumer about the model, the data, and other relevant issues) could be a good way to gain the end user’s trust. This moved the discussion on to how oversight could best take place, and whether the current regulators are capable of providing such oversight.

Prof. Marcus noted that while the FCC could perform some of the work required, it would still fall short of what is needed to induce the required level of trust. As such, he suggested international cooperation on AI standards, if only so that companies would not need to adjust their models for each country separately. OpenAI agreed and suggested that a global body modeled after the IAEA be created. Only IBM disagreed, feeling that the current regulators are quite capable of effective oversight and citing the EU AI Act as a regulatory system that governs for different risks without unduly burdening the industry or creating a new regulator.

On the question of what powers any such AI agency should have, OpenAI and Prof. Marcus agreed that it should have a clear remit to provide oversight of: (i) critical capabilities (e.g. persuasion, manipulation, influence, etc.); and (ii) the administration of an industry licensing system (with the power to revoke licenses). It would be up to the agency to determine what capabilities and scale trigger oversight. This is relevant because development at the most sophisticated end is limited to only a few companies of scale (due to resources and cost). This, as OpenAI noted, has some benefit for a regulator, as it limits oversight to only a few established companies.

Subsequent Developments

Subsequent to the hearing, developments regarding the regulation of AI have moved quickly. At the G7 summit in Hiroshima, it was noted that rules governing technologies such as AI had not kept pace with reality. Misinformation generated through the use of AI was cited as a possible concern (albeit without quite being defined), and the creation of greater trust in AI was said to be needed. To achieve some alignment among members on norms and standards, the “Hiroshima AI process” was established (under which G7 officials will meet for the first time at the end of May).

Not long after that, the world got its first taste of how quickly AI-generated misinformation can spawn, spread, and cause damage. On 23 May, an AI-generated fake image of an explosion at the Pentagon surfaced on a verified Twitter account. Within no time, $500 billion had disappeared from the US stock market, giving a foretaste of the disruption that AI-generated misinformation can cause.

A few days after that, the President of Microsoft (Brad Smith) gave Microsoft’s view on AI regulation. He felt AI regulation would be needed and proposed measures such as: (i) a licensing requirement for highly capable AI; (ii) a licensing requirement for AI data centers; (iii) labeling of AI-created content; and (iv) safety brakes for AI in critical infrastructure. All these proposals, in his view, should help to build trust in the technology.

Meanwhile, OpenAI released a blog post reiterating its view that a global regulator is needed. The EU Commission, in the meantime, met with the CEO of Alphabet (Sundar Pichai) to discuss the intention of concluding a voluntary AI pact. This voluntary pact would serve as a stop-gap until further regulations and multilateral guardrails are in place.

A few days after that, IBM set out its view on why, as it had previously stated, a new regulator would be superfluous. Instead, it argued, every agency should be made AI-ready and supported by appropriate legislation and budgets.

Conclusion

Although there is a lot of talk of regulating AI, much remains uncertain. For starters, the very definition of AI is still unclear in most jurisdictions. The same is true of core concepts including personal data, transparency, and consent. As the short overview above demonstrates, the developments of a mere week could practically fill a book. For the moment, the EU AI Act is the only concrete legislative proposal, but whether convergence with or divergence from the EU model is desirable – and if so, desired – remains to be seen.

But the main issue remains that the technology is so new and changing so quickly that it is questionable whether specific laws at this stage are able to keep up (the EU AI Act has already needed some adjustments to accommodate some of the newest developments). As Prof. Marcus noted during the hearing, the tools that are required for oversight of this technology have not even been invented yet, so what chance of real oversight is there?

There is a notable emphasis on the next step of development: richer and better data sets with which to train the models further. Data is the fuel of this industry. The EU and US know full well that this gives them a key role in how AI can be managed. Stricter privacy and copyright laws can have onerous effects on the growth of AI companies, but may at the same time be necessary to create the trust that AI companies require in order to obtain such data sets. There is work to be done on exactly what these laws mean in the context of AI, and how to define them suitably for the new technologies. Certainly, the issues seen with the last generation of technology, such as the difficulty of deciphering the precise meaning of the GDPR, ought to be avoided.

For the moment, the most established models are in the hands of the traditional big tech companies, i.e. OpenAI/Microsoft, Google, and Meta. Although there are some challengers on the market (most notably Anthropic, though it is backed by Google), it remains to be seen how effectively their models can compete. Time will tell. Whereas a few months ago there seemed to be a settled belief that LLMs would be too expensive and cumbersome for smaller companies to run, the developments around LoRA seem to disprove this: a team of researchers built a model similar to ChatGPT for a mere $600 – yes, you read that correctly, $600, not $600m! This ability to deploy competing solutions shows the great promise of the technology, if applied under a suitable risk-based and evidence-based framework: Web 2.0 replaced by a decentralized Web 3.0 model based on neutral risk-governance principles and no gatekeepers. As such, a cynic would note that this push for regulation might well be a play by the more established tech companies at regulatory capture, creating barriers to entry so high that it becomes impossible for upstarts to compete.

When assessing the sincerity of this desire to be regulated, context is important. OpenAI is, at the moment, a market leader with the backing of Microsoft. Its CEO, whilst on his tour through Europe, was quick to mention that if the regulatory environment in the EU became too onerous he would consider withdrawing OpenAI’s operations entirely, only to walk back this statement a few days later. Similarly, IBM keeps insisting on regulation at the point where the technology meets the end user, when it, being an enterprise services provider, has no contact with end users at all. So a healthy dose of skepticism is perhaps appropriate.

This means that regulators must keep an equally open mind. Whereas OpenAI noted during the hearing that there may be a benefit for a regulator in overseeing just a handful of operational models, this reality could change very quickly. With so much uncertainty remaining, perhaps the best advice comes from Prof. Emily Bender, who said that instead of coming up with new laws and agencies, enforcers should consider resolving the vagueness in existing laws, rather than reinventing the wheel.

Jeff Senduk is Special Counsel to Dnes & Felver, where he focuses on the legal responses to emergent technologies.

* * * * *