The App Store Litigation: An economic perspective

There is much discussion of the recent App Store litigation between Epic, Apple and Google. The apparently divergent results highlight issues with evidence of market power and the need for consistent quantitative analysis, rather than a sole focus on contractual restrictions. This note provides an economic commentary on these prominent cases and offers recommendations for future cases in which contractual restrictions and network effects interact.

The Epic Litigation

In January, the US Supreme Court declined to hear appeals by both Apple and Epic Games (developers of Fortnite) from antitrust decisions of the District Court for the Northern District of California (Epic Games v. Apple Inc., 559 F. Supp. 3d 898 (N.D. Cal. 2021)) and the 9th Circuit Court of Appeals (Epic Games, Inc. v. Apple, Inc., 73 F.4th 785 (9th Cir. 2023)). The denial leaves Apple held to have broken California’s Unfair Competition Law by blocking, without excuse, a producer’s ability to steer business to its own product. The District Court proceeding was a bench trial.

Apple had delisted Fortnite from its iOS App Store and denied Epic’s paying affiliates access to developer tools after Epic included a link directing its gamers to a payment mechanism outside of Apple’s iOS App Store. The link evaded a maximum 30% commission to Apple for sales of Fortnite and subsequent in-app purchases.  Epic sought to lower charges to 12%.  In the District Court, Apple claimed that its marketing and technical support justified the difference in fees and pointed to a wide definition of the market to bolster its defense that it was a competitive large firm, not a monopolist. As part of wider accusations of monopolizing behavior, Epic focused on iOS as a specialist market within smartphone operating systems and claimed that Apple’s iOS fees were unnecessarily high.

The District Court was unpersuaded by Epic’s general case in antitrust (dominance, tying and other claims) and declined to reinstate Fortnite onto the App Store outside Apple’s contract terms, taking the view that Epic had breached its contract and made its own trouble. Judge Rogers did temporarily restrain Apple from blocking Epic’s affiliates’ access to developer tools. More significantly, Judge Rogers ruled for Epic on one antitrust claim: that Apple placed harmful anti-steering provisions in its contracts. The District Court permanently enjoined Apple from blocking all iOS developers’ links to payment mechanisms.

Economic analysis

From an economic perspective, the District Court identified many uncertainties concerning market definitions and the impact of market segments like iOS on other operating systems and monetized platforms. Apple successfully claimed it was a competitor in a wide market including many alternative apps available from competitors such as Google and operating systems including Android. Advancing a narrow view of the market as a specialist area that Apple had monopolized, Epic then ran into difficulties because its games function across platforms:

“[N]ot all games” feature cross-platform functionality, and some platforms have taken steps to limit it. Epic Games, Inc. v. Apple Inc., 559 F. Supp. 3d 898 (N.D. Cal. 2021). But when it comes to the games that do offer such cross-platform functionality, app-transaction platforms (like the App Store and Epic Games Store) “are truly competing against one another.” Id. (Epic Games, Inc. v. Apple, Inc., 73 F.4th 785, 787 (9th Cir. 2023)) 

Uncertainties over market definition are particularly intrusive in deconstructing modern, network-creating operating systems and highlight the need for much more investigative economic analysis to quantify the nature of these new markets. The Epic cases did not produce significant quantitative analysis of firm-to-firm marginal impacts, which usually require modeling, estimates of demand elasticities, mark-ups on costs and other variables key to traditional regulatory analysis. Detailed contractual arrangements in the new electronic data and gaming industries, as practiced, are also critical in assessing competition. These are significant gaps in analysis, and future cases would benefit from more detail on these critical economic effects.
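To illustrate the kind of quantitative exercise the record lacked, a standard starting point is the Lerner index, which links a firm’s price-cost markup to the demand elasticity it faces. The sketch below uses purely hypothetical figures for a platform commission and an assumed marginal cost; it is illustrative only, not an estimate of Apple’s actual costs or elasticities.

```python
# Illustrative only: a toy Lerner-index calculation of the kind the text
# suggests was missing from the record. All figures are hypothetical.

def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner index L = (P - MC) / P, a standard markup measure of market power."""
    return (price - marginal_cost) / price

def implied_elasticity(lerner: float) -> float:
    """Under static monopoly pricing, L = 1/|e|, so the implied |e| = 1/L."""
    return 1.0 / lerner

# Hypothetical example: a $0.99 in-app purchase carrying a 30% commission,
# treating the commission as the platform's per-transaction price and
# assuming a marginal cost of $0.05 per transaction.
price = 0.99 * 0.30
marginal_cost = 0.05
L = lerner_index(price, marginal_cost)
print(f"Lerner index: {L:.2f}")
print(f"Implied demand elasticity: {implied_elasticity(L):.2f}")
```

On these assumed numbers the index is around 0.83, implying relatively inelastic demand of about 1.2 – but the exercise only becomes probative with real cost and elasticity estimates, which is precisely the evidence argued above to be absent.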

The 9th Circuit subsequently affirmed the District Court’s enjoining of Apple’s anti-steering provision concerning all iOS developers. The Court of Appeals found no abuse of discretion in granting the permanent injunction and regarded the required protection of all developers as necessary to correct the harm from anti-steering. Thus, the courts left Apple’s standard fees intact, with the proviso that Epic and any other iOS developer may add payment links to storefronts. The 9th Circuit stayed the lower court’s mandate pending Apple’s appeal to the US Supreme Court, and so the mandate becomes immediately effective now that the Supreme Court has denied certiorari. Apple’s pricing policy embodied a form of anti-steering considered unacceptable if shown to harm providers and consumers, following the earlier reasoning in Ohio v. American Express Co. (138 S. Ct. 2274 (2018)).

Consumer welfare impacts

The economics of information and restrictive agreements can be usefully applied to Apple’s delisting of Fortnite and the retaliatory measures targeted at Epic’s affiliates. More information is generally better than less for gamers, particularly when the result is lower prices. However, unanswered questions remain:

  • Will Epic’s own payment link increase traffic for Fortnite, or just reduce revenue for Apple while increasing benefits for Epic? That is, was there an effect on total output, or just a movement of output?
  • Moreover, was the removal part of a wider restriction of competition, as Epic claimed, or simply a consequence of the contract’s express terms, as accepted in the District Court?
  • Apple claimed that apparently unfair contractual requirements reaching into in-app payments can serve efficiency purposes, such as incentivizing Apple to do the marketing and keep its apps and platforms working efficiently. But could it show this?

It seems that it was more Epic’s failure to persuade than contrary proof from Apple that led to the District Court’s decision and the 9th Circuit’s affirmation. Future cases could helpfully examine these and other arguments using detailed economic modeling.

Ancillary restraints

Consistent with an ancillary restraints doctrine, the District Court and Court of Appeals applied a rule-of-reason standard to review Apple and Epic’s disputed agreement, which could be seen as subordinated to a separate transaction (marketing) and as reasonably necessary to achieving that transaction’s pro-competitive purpose (driving consumer benefits). A rule-of-reason approach amounts to a benefit-cost analysis.  Epic Games v. Apple Inc., 493 F. Supp. 3d 817, 836 (N.D. Cal. 2020) summarizes the required analysis:

First, plaintiff must show “diminished consumer choices and increased prices” as “the result of a less competitive market due to either artificial restraints or predatory or exclusionary conduct” by the defendant. Then, “if a plaintiff successfully establishes a prima facie case … by demonstrating anticompetitive effect, then the monopolist may offer a ‘procompetitive justification’ for its conduct.” For example, the monopolist may show “that its conduct is … a form of competition on the merits because it involves, for example, greater efficiency or enhanced consumer appeal.” Finally, if defendant offers a non-pretextual procompetitive justification, the burden shifts back to the plaintiff to rebut defendant’s claim or “demonstrate that the anticompetitive harm of the conduct outweighs the procompetitive benefit.” (Quoting U.S. v. Microsoft Corp., 253 F.3d 34 (D.C. Cir. 2001)).

Epic failed to carry its burden of proof on general claims that Apple has a monopoly on mobile gaming and acted as an illegal monopolist by requiring consumers to get apps through its App Store. Its claims of a narrowly defined iOS market segment did not help, tending to direct the courts into a contractual analysis of Apple’s fees, given difficulties in resolving market definition. This was a highly strategic decision: Epic gained the prospect of a narrower market in which market power is easier to prove, but at the price that wider evidence of market power became harder to use, as it arises chiefly in the wider market whose analysis was thereby truncated. It may prove fruitful for future litigants to shift the focus back towards market power analysis.

Nor did Epic convince the courts of the existence of substantially less restrictive alternatives to Apple’s system. Epic prevailed over the anti-steering express terms in its standard contract with Apple because there it could show practices causing plausible financial losses. California’s courts will associate such losses with loss of consumer welfare – although it might be noted that there is no necessary or direct link between rival financial loss and harm to consumer welfare. Epic could, as it claimed, lower prices for gamers by working around Apple’s systems. This shows another key strategic aspect of app store litigation: the argument that competition has been lost helps the plaintiff, but it can also be read as showing that switching to what remains of that competition is possible. The key to this puzzle is to ensure that the quantitative evidence is strong enough that even a partial impairment raises concerns – or, alternatively, to show that such effects are absent, however restrictive the clauses may seem.

As in the Epic case, applying rule-of-reason legal analysis often leads to a qualitative benefit-cost analysis – and not a quantitative one. This has significant implications for litigation, not least that benchmarking can be expanded, ideally on a quantitative basis. Epic’s case, other than on the issue of steering, seemed not to weigh costs and benefits by comparing the status quo with feasible alternatives, which may help explain its failure to persuade the courts. The courts traditionally resort to broad assessments, although they are clearly aware of the economic arguments at stake in antitrust cases. These assessments could be significantly expanded to allow a richer analysis of the wider costs and benefits of competitive restrictions, with a sharper focus on consumer welfare impacts. Again, this observation highlights the need for much more economic analysis to quantify the impacts and welfare effects of these new markets.
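As a sketch of what a quantitative benefit-cost comparison might look like, the toy model below computes consumer surplus under the status quo and under a feasible alternative, using a hypothetical linear demand curve and a crude assumption of full pass-through of commission changes. None of the parameters reflect actual market data.

```python
# Toy benefit-cost comparison: consumer surplus under the status quo versus
# a feasible alternative. Linear demand Q = a - b*P with made-up parameters.

def quantity(price: float, a: float = 100.0, b: float = 10.0) -> float:
    """Linear demand Q = a - b*P, floored at zero."""
    return max(a - b * price, 0.0)

def consumer_surplus(price: float, a: float = 100.0, b: float = 10.0) -> float:
    """Area of the triangle under the demand curve and above the price."""
    q = quantity(price, a, b)
    choke_price = a / b  # price at which demand falls to zero
    return 0.5 * q * (choke_price - price)

# Hypothetical retail prices, assuming commission changes are fully passed
# through to consumers (a strong assumption, made only for illustration).
status_quo_price = 0.99                       # price under a 30% commission
alternative_price = 0.99 * (1 - 0.30 + 0.12)  # crude re-pricing at 12%

gain = consumer_surplus(alternative_price) - consumer_surplus(status_quo_price)
print(f"Consumer surplus gain from the alternative: {gain:.2f}")
```

Even this crude exercise forces the analyst to state pass-through, demand and cost assumptions explicitly – which is precisely the discipline that a purely qualitative benefit-cost assessment tends to avoid.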

What’s the difference between a Google and an Apple?

For those inclined to look for logic, consistency and comparability in the law, Epic’s litigation foray has been a salutary experience. In 2023, Epic prevailed in a very similar antitrust case against Google (In re Google Play Store Antitrust Litig., 21-md-02981-JD (N.D. Cal. Mar. 28, 2023)). This jury trial covered tying, pricing and exclusionary practice issues similar to those in the bench trial with Apple. Against Apple, Epic prevailed on just one issue concerning steering. Against Google, Epic prevailed on all its allegations of anticompetitive behavior based on market dominance and restrictive practices.

It is hard to find significant differences between the two cases and, while interesting, attempts to do so seem more like rationalization than statements of antitrust principle. That is, for all the attention on Google’s particular actions, it is not clear what difference in market power would justify the differential treatment.

Certainly, Google’s apps are used across many operating systems, which might make it more susceptible to antitrust enforcement than Apple’s more sealed iOS. It is also significant that Google’s was a jury trial while Apple’s was a bench trial. Google appears to have run sweetheart deals with some developers and to have deleted documents needed at trial. But at the end of the day, both Apple and Google have been found to have restricted competition to some extent. More than anything, the cases illustrate the difficulties in unraveling contractual links in the new information industries and the need for much more research, especially on the relationship between contractual restrictions and market power. There is a particular premium on explaining these effects in a jury-friendly way, where relevant.

What’s next for app store analysis?

Finally, this type of case concerning two-sided markets (here, gamers and game developers) is increasingly important as cases spring up in many antitrust tribunals including those in the EU, UK and Australia. In the case of Apple, it will be particularly interesting to see the position taken on the contractual restrictions following the UK CMA’s victory at the Court of Appeal, such that the Mobile Ecosystems case will return. The same issues will also arise as the EU Digital Markets Act takes root.

In all these, and other cases, the relationship between market power and contractual restrictions will be paramount. Litigants will benefit from ensuring that case strategy incorporates economic evidence from the very beginning.


What next for the UK DMCC? Expert report published with the Legatum Institute

As Parliament returns from its festive break, many competition law eyes will be on the Digital Markets, Competition and Consumers (DMCC) Bill – at the moment “Just a Bill” but probably not so for much longer…

The DMCC: “Just a Bill” but for how much longer?

There are many rich questions as the Bill heads over to the Lords for critical scrutiny. This is the major audience for due process concerns, and the place where technocracy often meets accountability. Critical questions are up for debate including how to frame the relevant evidence rules, the extent to which existing rules should change given developments in online business, and how best to ensure high quality regulation over time. Essentially, the forward-looking question is all about the evidence requirements for future interventions.

As the UK Competition and Markets Authority has become increasingly active in global business in recent years, the law will be relevant well beyond the UK. For example, recent Commitments with Google apply on a worldwide basis. There is also clear global impact from recent merger reviews such as Facebook/Giphy, Microsoft/Activision and Adobe/Figma – not to mention the new investigations into OpenAI, and cloud computing.

Dnes & Felver provided an expert report on the relevant issues to the Legatum Institute, a leading London-based think tank seeking to promote prosperity in the UK.

Stephen Dnes’ co-author, Fred de Fossard, recently commented on Politics Home:

“The last decade has seen the world’s leading antitrust regulators, the CMA, the European Commission, and the Federal Trade Commission in the USA, take a much more interventionist approach to digital markets … even if businesses with large market shares continue to innovate and provide their users and customers with new and improved services, today’s regulators may decide to prosecute them for occupying too great a position in the market… 

This has caused great discord in the digital economy, where entrepreneurs often build businesses with the intention of selling them to a large acquirer, who can take the company and its products to a bigger audience. After all, not all founders are born managers of global companies: their skills often lie in establishing new businesses and new ideas.”

The core point is essential for growth: if large and small businesses sometimes complement each other, then the law must have a mechanism for answering a very difficult question:

When is big bad, and when is big beautiful?

The same theme was noted by Diginomica journalist Chris Middleton, who commented:

“To see ‘digital markets’ as something separate and distinct in 2023 seems almost quaint – a Web 1.0 perspective, three decades too late. What about AI, decentralized services, complex supply chains, cloud, and mobility? Will some Bill address those in 2053?

“While well intentioned, I would argue that the Bill is both 25 years too late and fundamentally misconceived. To see a handful of Big Tech titans as being of ‘strategic market importance’ (SMS), based largely on their size, ignores an obvious problem. Namely, that it is often smaller players, such as OpenAI and Spotify, which are really shaping what the future looks like.”

The recommendations in the expert report correspond closely to several of the amendments introduced before Parliament. This complements earlier work with the Institute, which is now reflected in the strategic steer to the UK CMA.

Getting involved

How exactly the law sifts worthy from unworthy cases for intervention may well be the critical competition policy question of the year. For the UK, it will be a once-in-a-generation reform. Moreover, how the DMCC approaches this will have ramifications well beyond the UK – so this is not so much one to watch as one to get involved with.

The Report is available online.

Location, Location, Location: Would your data live on a cloud?

Does it matter where data is processed? Should it? There are some interesting developments taking this question well beyond the familiar questions about data flows between jurisdictions.

What about data use on devices, on servers, and between them? There is a lesser-spotted trend for vertically integrated firms to (1) encourage greater use of on-device processing and (2) limit the scope for interoperation between data on devices and on servers. These are significant competitive restrictions: they limit competition with no corresponding consumer benefit. They also harm rivals who use the server deployments set to be limited – rivals who may be highly innovative.

Two significant developments threaten the ability to use a range of competing servers:

Server restrictions in the Google Privacy Sandbox

Google’s Privacy Sandbox initiative currently proposes that only Google Cloud or Amazon Web Services (AWS) will be allowed to provide remote processing for the proposed Attribution Reporting API. This amounts to a ban on on-premises server use, that is, using your own server.

This is astonishing. It is like saying that you can lease any car, provided that it is a Ford or a Toyota. What if you would like to own a competing model – say, a VW?

There is simply no ability to do so while using the API as proposed, because it can only be used on a leased basis on the cloud.

This also bakes in the current generation of technology from the largest providers. So much for that innovative electric car you were thinking of trying out… A competing hosting provider simply isn’t allowed to interoperate with the API.

The proposal is all the more remarkable because, according to market research from KBV Research, approximately two thirds of existing deployments are on-premises.

So, the proposal is essentially to force a technological tie between data hosting and advertising systems.

This is all the more concerning because on-premise deployment is considered safer, on average, than cloud. KBV goes on to note:

“many benefits … come with on-premise deployment, including a high level of data protection and safety. Because on-premise deployment models have higher data security and fewer data breaches than cloud-based deployment models, industries prefer them, which fuels industry demand for on-premise deployment models.”

So there is no good reason to exclude the competing alternatives. This is especially so at a time when cloud computing restrictions are under review based on concerns about difficulties in switching.

If you currently use on-premise servers – or, indeed, anything other than Google Cloud or AWS – now would be a very good time to register a concern with the UK Competition and Markets Authority, which is reviewing Google’s proposals.

There is a quarterly reporting cycle with ample scope for concerns to be heard – the sooner the better, so as to influence the current reporting cycle.

Draft EDPB Guidance on Technical Scope

The same theme emerges from some important draft Guidance from the European Data Protection Board (EDPB). This revisits the much-maligned cookie consent box, which derives from Art. 5(3) of the ePrivacy Directive.

The draft Guidelines 2/2023 on Technical Scope of Art.5(3) of the ePrivacy Directive do not trip off the tongue, but their content is highly significant for competing data handlers. The draft extends the cookies analysis to other technologies including pixels and tracking links.

Significantly, there is a partial carve out for on-device storage. This risks a tilt towards those controlling devices, unless the rules are technologically neutral. The proposal is to capture movement into and out of local storage:

“The use of such information by an application would not be subject to Article 5(3) ePD as long as the information does not leave the device, but when this information or any derivation of this information is accessed through the communication network, Article 5(3) ePD may apply.”

That is very helpful to those able to execute local processing – but a tremendous hurdle for those who rely on server-side processing.

As server and on-device processing are indistinguishable from the consumer perspective, the technologically and competitively neutral rule would be to intervene on the basis of a reasonable evidence-based level of consumer protection – with the same rule, whether on-device or on the cloud, or moving between them. That would suggest that consent is not generally required to move data from the device to servers, as consumers are not harmed by this action.

IP addresses are highlighted as potentially requiring consent, without any carve out for innocuous use, such as audience definition. For example, an IP address with coarse location might contain no personal data at all, as where a business address is indicated. But the Guidance seems not to cater to that scenario.

There is also specific comment on the use of identifiers. The draft takes a highly precautionary stance: identifiers are often seen as linking to identity – but is this so? Trillions of identifiers are used for innocuous audience-matching purposes without any such link. If so, the guidance is over-broad and imposes a consent requirement beyond what is needed for a reasonable level of evidence-based consumer protection.

So, those with interests in the use of data for everyday, harmless but helpful audience optimization may wish to speak up. Comments can be submitted until January 18th.

New article – If the Competition and Markets Authority were an emoji: merger clearance lessons from Meta/Giphy

An expert article co-authored by Partner Stephen Dnes has appeared in the Competition Law Journal: “If the Competition and Markets Authority were an emoji: merger clearance lessons from Meta/Giphy.”

The article reviews the decision by the UK CMA to block Facebook/Giphy, the decision by Facebook (by then renamed Meta) to challenge this in the Competition Appeal Tribunal, and the implications of the CAT judgment in the context of developing merger clearance doctrines.

The article is relevant to those looking at the thorny questions surrounding international merger clearance work in technology markets, especially following Microsoft/Activision, Adobe/Figma and the merger-based intervention into OpenAI.

It is a particular pleasure that the article was co-authored with a graduate of Stephen’s competition law class, Joseph Day.

The article is available via Edward Elgar journals.

When less is more: Targeted advertising regulation, New York-style

Is targeted advertising creepier than Sleepy Hollow on Halloween? It depends on what data is used to deliver the advert. Few will mind an advert that uses anonymous data from internet activity, especially if it relates to something innocuous – say, a holiday or a sweater. It is quite different when the data used for such advertising relates to a specific individual’s sensitive information such as their health conditions. While it might be possible to cross-correlate on likely health conditions – and there might even be instances where this is useful – there is a strong argument that using sensitive data should only be undertaken subject to consent.

Addressing specific concerns

As with so much targeted advertising regulation, the issue then becomes: how to avoid banning everything, just to prevent a specific abuse. On this, the New York legislature has adopted an interesting new law by passing fiscal bill A.3007C/S.4007.

The new law bans a specific use: adverts cannot be delivered using an individual’s health-care related geolocation data. Significantly, that means that decisions to deliver ads not based on an individual’s health-related geolocation are not affected. For example, New Yorkers should not expect to interact with an ad-free internet as soon as they step into a pharmacy. Moreover, a responsible advertising system can still strip out the sensitive use without losing other, innocuous but valuable insights. It simply becomes illegal to use the data of concern for ad delivery or to build profiles of individual consumers from it. Most significantly of all, the law provides clarity around a specific boundary of acceptable use. There are none of the fuzzy boundaries seen elsewhere, most notably in the EU GDPR and its vague and cross-cutting definitions.

In this, the law is actually a microcosm of a wider pattern of different approaches to regulation. Historically, common law jurisdictions – such as most jurisdictions within the US and UK – take the position that all commercial activity is permitted unless banned, as in the ban on specific uses of an individual’s geolocation in the new New York law. By contrast, the EU GDPR reflects a continental European tradition in which regulators are empowered to promote the greater good – as they see it – subject only to light-touch legal review for clear cases of error. The New York law seems much preferable because it provides clear boundaries, rather than empowering a technocratic elite on a discretionary basis.

What does this mean for any future federal privacy law?

If there is ever a US federal privacy law, it will be important to see whether it tracks to this common law tradition that offers more practical clearcut guidance on businesses’ acceptable and unacceptable uses of data. FTC privacy enforcement to date has developed from particular cases, which helps to provide a degree of clarity as to the boundaries of use. Some proposals for reform, notably the Klobuchar Bill, have included requirements to define and focus on high-risk use cases. Keeping the approach targeted on particular use cases, and on specifically defined harms, helps to avoid vagueness.

Responsible safeguards are also assisted by prior definition of the issues they are required to address. A specific law allows vendors to develop specific safeguards, and then to use the other, responsibly held data. Here the important dismissal of the FTC’s attempt to go after geolocation data on a scattergun basis in Kochava looms large. There, an Idaho data vendor had used reasonable safeguards to address concerns about geolocation data (e.g., filtering all known high-risk locations from its data set) – and a federal judge could see grave issues in the FTC pursuing the business despite the reasonable safeguards used.

Back to the Future with Roberson v Rochester

Lawyers with a long memory may remember the classic 1902 case, still often taught in law schools, Roberson v Rochester Folding Box Co. There, the New York Court of Appeals found no right to control the use of Roberson’s image in an advertising campaign — prompting swift legal reform providing exactly this right, but on a specific and targeted basis. 121 years on, New York finds itself once again setting out a stall for a common law approach, in which specific and targeted action is favored over broad bureaucratic empowerments, and other, non-harmful business practices are left undisturbed.

Early personally identifiable information: 17-year-old Abigail Roberson on a Franklin Mills Flour advertisement

The Roberson case will also be of interest to those in Europe who question whether the GDPR approach is the right one; not least, in the UK which is currently considering clarifying some of the ambiguity within the GDPR in the Data Protection Bill (No.2). There, it might be time to define the boundaries of acceptable use, providing clarity over relevant harms, so as to address them while leaving other uses — and the value they generate — available. Looking to the New York distinctions for pragmatic guidance, the law prohibits the association of one type of sensitive data (health care) to specific individuals to build profiles or use such data when delivering advertising.

Lessons for the AI era

So, 121 years on from Roberson, the same fundamental question arises: what is reasonable in context? What is the list of reasonable concerns, such that they can be addressed? This will be the key architectural question as data laws are updated for the AI era. Seen in this way, the 121 years of experience in New York is historic in the best possible sense.

Good cookies and bad cookies: What’s next for online match keys?

What will marketers eat if the third party cookie (“TPC”) crumbles? If using non-cookie based technology, will those match keys also come under threat? There are insights in a recent update report by the UK Competition and Markets Authority (“CMA”). The CMA is reviewing the retirement of the third party cookie from the Google Chrome browser and supervising proposed alternatives which Google claims will provide equivalence (e.g., the Topics, Protected Audience, and Attribution APIs).

This was always going to prove challenging, as there are many helpful uses of identifiers allowing for a personalized web experience and benefits to all involved from tailored systems (e.g., fewer but more relevant adverts driving higher value for content). Yet it was also possible for bad actors to abuse the system through intrusive or distasteful practices (e.g., advertising based on protected categories). The difficult task here is to preserve responsible and valuable data use, and not to throw out the data baby with the bathwater in addressing the bad actors.

This note provides highlights on the state of play and areas for urgent engagement if concerned about the developments.

Testing, Testing, Testing

Unusually, the CMA is involved in reviewing the new Google technologies before they are deployed. The CMA has been at pains to ask for rival third party testing data. Assembling a market-wide picture is difficult. An attempt was made by Caltech researchers in early 2022, by statistically modeling the loss of TPCs across a number of AdTech vendors. Their report pre-dated details of the current Privacy Sandbox proposals and therefore did not model their impact. Creating a similar study based on the current proposals is critical, and work needs to start very soon to meet Google’s desired Q2 2024 deadline for input.

The key to such a study will be to identify whether the replacements for the TPC allow others to compete with Google. There are serious concerns that this will not be so. For example, a range of niche data uses will be undermined by the loss of rich data sets. As these do not currently cause any apparent harm, but allow much better tailoring of data-driven services and advertising, a significant concern arises that the true reason for data restriction is to undermine competition from Google’s rivals.

Some readers will be familiar with the very interesting 2021 Alibaba study highlighting the extremely high value of personalization.


In the study, when rich data was removed, the top 1,000 ranked items received 90% of all exposures. Personalization delivered a much more diverse internet experience, with product views widely distributed according to tastes.
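The concentration measure behind that finding is simply the share of exposures captured by the top-ranked items, which is straightforward to compute. The catalogue below is simulated with a Zipf-like distribution rather than Alibaba’s data; a pure Zipf law yields a lower top-1,000 share (roughly 62%) than the 90% reported, so the parameters are illustrative only.

```python
# Sketch of a top-k concentration measure over simulated exposure data.
# The Zipf-like catalogue is hypothetical, not Alibaba's data set.

def top_k_share(exposures, k):
    """Fraction of total exposures going to the k most-exposed items."""
    ranked = sorted(exposures, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Simulate a 100,000-item catalogue with exposure weights proportional to 1/rank.
catalogue = [1.0 / rank for rank in range(1, 100_001)]
share = top_k_share(catalogue, 1000)
print(f"Top-1,000 share of exposures: {share:.0%}")
```

A study of the Privacy Sandbox could report exactly this statistic before and after the proposed data restrictions, turning the diversity claim into a testable number.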

Google’s essential position is that rivals should be denied the broad, rich and deep data needed to compete for the blue portion of the traffic, while Google itself retains significant insight, since it will still be able to see the relevant signals for personalization via other routes.

Such a study should also identify areas where the proposals cannot deliver a rich personalization experience, and establish customer demand for those services (e.g., among advertisers). This would avoid a scenario where the CMA reviews the Privacy Sandbox and, in the absence of competing evidence, simply gives it the thumbs up.

With the right study in hand, Google could be required to open up its proposals before they are approved. Either way, the study must be done. Without it, Google will pick up the ball and run largely unopposed. There are concerning indications in the current report that this is happening:

  1. The CMA’s update table says that, despite a growing list of industry concerns, the CMA has “no concerns” as of today regarding the future review of equivalence. This refers to the crucial CMA review of whether to allow Google to retire the TPC. Unless opposed, this will undermine a key lever in the review package: that Google must show data on equivalence, for others to comment on, before the review takes place. Seen in this way, the CMA’s statement of a need for testing data by a deadline (Q2 2024) is a statement of urgent need for alternative evaluations of Google’s proprietary APIs.
  2. There is an odd statement that the purpose of earlier testing was not to show equivalence (in that case, of Topics) or even the effectiveness of Google’s new proposals (para. 18, CMA update). Yet equivalence of the replacements is precisely what the Commitments are supposed to assess. This suggests that there is not sufficient information to assess equivalence, and thus that the CMA does not yet have evidence on the potential impact on rival stakeholders as required under Google’s Commitments (Commitments, para. 17(c)(5)).
  3. There is a warning shot from the CMA to Google on the need for transparent and fair methodology (para. 19, CMA update). A warning shot is fine and well, but the CMA needs rivals and – especially – affected customers to come in with the cavalry or it will not mean anything.

Are there discriminatory impacts from removing support for open standards from web-enabled software?

The core point in the entire Commitments package is to avoid so-called competitive discrimination. This exists where rivals cannot compete as well because of changes such as the removal of TPCs.

Google has long played a sophisticated game here: by saying that it is losing data as well, it can argue that, nominally, there would be no discrimination. Everyone lost the TPC data, including Google. The glaring issue is that, factually, Google will still have access to significant data sources, including those that it is restricting from rivals (e.g., restrictions on retargeting within Fledge/Protected Audiences and restrictions on cross-site matching of data within its Attribution APIs).

This is why looking at the impacts on Google is a “sleeveless errand”. Such an analysis will likely show only that, just as Google had good data sources before the loss of TPCs, it also has good data sources afterwards. Unless the CMA can get under the hood (bonnet!) of the whole Googleplex – unrealistic – information on what Google can do will say little about what competing vendors can do. There are just too many unknowns.

Moreover, a myopic analysis that looks only across rival publisher properties – ignoring the competing advertising within Google’s Search, YouTube and the 13 other properties that Google says attract more than 500m unique users – would miss the distortion to competition as this traffic becomes more valuable because of impaired competition in the open web. The likely result is a shift in spend to search, which would be a significant vertical foreclosure concern: competition from open web advertising (OpenRTB) is impaired in order to drive traffic to increasingly valuable, and scarce, search advertising. This significant competitive relationship was recently highlighted by the BVDW (IAB Germany) in its submission to the German competition authority (Bundeskartellamt).

It is essential that the analysis focuses not on Google’s capabilities, but on what others can do. This is the only way to see whether Google is constrained by competition with them, so that users of the technology benefit from a range of rich data-driven products with competitive pricing.

However, experience to date with the CMA bringing issues to Google’s attention has not inspired confidence. One of the most arbitrary data-handling limits in the Privacy Sandbox is First-Party Sets (“FPS”), which limits the scope for data handling across a specified set of domains. This makes little sense where low-risk data handling is at play: adverts for sweaters can appear across 100, or 1,000, domains without any harm.

FPS is, essentially, an answer to a question no one was asking. Rather, Google has asked the question: “Please may I restrict data handling by others?”

Shrewdly, the CMA has pushed back. There is no clear consumer benefit from FPS and every reason to suspect competitive foul play. For several reporting cycles, Google has said it is “evaluating” revisions here. Yet despite the CMA report asking for this to be evaluated yet again (CMA report, para. 32), Google says in its accompanying report only that it is “evaluating the numeric limit” – that is, the number of domains (p.25, Google Report) – not whether to have any such limit at all.

No justifiable rationale for this has ever been provided. Instead, Google insists that there must be consumer control over the “plumbing” of the internet, well beyond any reasonable specification of consumer interests or risks of harm. FPS is the core example: what does it matter if an innocuous sporting goods advert is shown across the FPS domain boundary, or not? If there is a concern about some adverts, e.g., those based on sensitive categories, then this does not depend on the domain boundary and is a global property of adverts wherever they appear – including on pure first-party systems. This must raise a suspicion: is the FPS domain boundary not simply a mightily convenient excuse to restrict rivals’ effectiveness by a vendor largely unaffected by it, given Google’s large range of first-party websites and data handling systems? It is unclear why the FPS limit is needed at all.

This highlights the urgent need for engagement: if a rival were to come in with data on the value of the data handling proposed to be restricted, similar to the Alibaba or CalTech studies but updated for the Privacy Sandbox, it would provide ammunition in the fight against arbitrary data handling restrictions. Without it, the ball just keeps rolling downhill despite the growing list of concerns Google is publishing in each of its quarterly reports.

You should have come to the first party

The FPS experience is part of a wider debate about so-called “first-party” and “third-party” data use: the argument that a direct (first-party) relationship with the consumer is required for consent-based data handling to be valid.

This will be a familiar concept to fans of the silver screen: who could forget Groucho and Chico Marx haggling in A Night at the Opera over whether “the party of the first part” should be known in the contract as “the party of the first part”? Truly there is nothing new under the sun.

In the movie, Groucho memorably says: “You should have come to the first party.” So it is with proposals relating to data handling. There is an argument that only first-party data should be used, combined only across parties holding a direct customer consent. Such a world, however, would lose significant insights from data combinations even where there is no harm from using the data, and would lose the considerable benefit of improved access to business-facing solution providers that help smaller businesses compete with vertically integrated rivals.

From the consumer point of view, this restriction seems as arbitrary as Groucho and Chico’s negotiation. What does it matter if tiny small print enables five newspapers to combine data processing, or if, absent consent, de-identified data is used to create insights across more vendors? The issues for the consumer are whether there is harm and whether the indirect benefit of free content is preserved. It is well documented that rich third-party data sets add more value: no fewer than five studies from 2011-2020 – including one from the CMA itself – found 50-70% marginal value from access to interoperable match keys. So, if the use is harmless (the sweater advert), the consumer will lose out, indirectly, through worse advertising. Lower-value advertising in turn means more adverts per piece of content and fewer resources for publishers.

However, some publishers benefit from a data poverty scenario. This is because they have relatively strong brands and compete with the automated, interoperable data-rich systems. Some have woken up to the possibility that large investments in first-party data systems will now have to compete with Google’s proposal for synthesised third-party data handling known as Topics.

Topics is, as the name implies, a means by which websites are coded by topic. There are some significant developments here:

  1. Topics is seen not to be adequate for some uses. Google is reported to have abandoned a Topics API classifier that would have used web-address information. This leaves the proposals between a rock and a hard place: website data is needed to encode topics properly, but that data is being restricted from rivals.
  2. Publishers wishing to move to first-party data complained loudly that Topics amounts to unfair competition, as it provides richer data than first-party systems. In a sense, it makes third-party data a monopoly of the Topics API. Those wishing for data poverty would say that is unfair, as first-party systems become less attractive in consequence. In pushing back, Google appears to have made a major concession: that the ability to combine data across websites is “highly valuable”, and that if this devalues first-party systems, that is simply par for the course (p. 7, Google Report). The obvious, unanswered, question is why Google’s Topics API should then be the only one to do it. Essentially, Google has admitted that third-party data handling is “highly valuable” and that, in competition terms, it sits in its own relevant market. This is a major, and quite possibly inadvertent, concession arising from an unrelated bun fight with publishers.
  3. Those wishing to use third-party data should note this opening and provide examples of the marginal value of their own third-party data use so as to avoid a Google monopoly on – in Google’s words – a “highly valuable” asset. This is especially so as Google has now conceded that it is possible to combine first-party data with Topics (p.8, Google Report): why not allow third-party combinations, as well? No reason is given.
  4. Publishers voiced concerns about the loss of control over how sites are coded (p.9, Google Report). Google’s report cannot be faulted for a lack of gumption: it says that (a) a misclassified website can always sell contextual adverts and (b) that if there is misclassification it will average out across different websites (!). There are obvious concerns, especially for high quality publishers, from the loss of input into how advertising on their sites works.

Where is the party?

Where processing takes place is a major practical question. Early Privacy Sandbox proposals put this processing on-device, which could limit competition by preventing the use of competing systems on remote servers — not to mention harming consumers by draining battery life. No reason was ever given why consumers needed advertising processing to take place on their phones.

The Protected Audience API has opened this to a degree of off-device processing, but only via two approved vendors: Amazon Web Services and… Google! It is time for your best Claude Rains impression – I am shocked, shocked to find that processing is allowed at these two big tech providers! Is this a not-so-subtle message to Microsoft to align with Google here, to allow Azure to be blessed as well?

The reason given for this continuing restriction is very weak: Google argues that it would otherwise be necessary to visit every server farm to verify on-premises security (para. 28, CMA Report; Google Report, p.10). But that difficulty is a general property of web servers; it is not a principled reason to prevent the use of competing servers.

Essentially, the proposal is to tie the Privacy Sandbox to certain cloud providers. It is doubtful that this is legal under competition law principles on technological tying. The concern about premises visits also contrasts with Google’s thematic position on the Privacy Sandbox, which is that it is only providing APIs and then letting others do as they wish with them.

Beyond the concern about server location, there is also a significant concern about interoperation of data sets. It was pointed out to Google — although, perhaps, not flagged to the CMA, as the concern is not in the CMA report — that there is no clear opt-in signal in the Protected Audience API. This contrasts with industry initiatives to include preference signals, e.g., IAB’s GPP and MSPA proposals.

This leaves the Protected Audience API as an island of data that cannot be used with other systems. Competing data users should pick up this thread with the CMA, in response to its invitation for feedback on the Protected Audience API, so that this important aspect of interoperability is addressed in the next Report.

Other unexplained restrictions

Several other restrictions call for engagement by those affected:

  • Cross-device data use – This is said to be a privacy concern (Google Report, p.14), yet there are many examples of useful cross-device handling. Any synced login service provides it, and it is sometimes even a selling point (Apple iPhone, iPad and MacBook integration springs to mind). As with single-device use, the question is always whether there is harm. Google’s lack of engagement on important cross-device use cases is concerning given the significant benefits (e.g., a desktop search for a restaurant followed by a phone advert for the same cuisine when out and about).
  • Bounce tracking — This is a very significant restriction, as it prevents rivals from using URLs to identify data points. It is comparable to many of the other restrictions and ought to be analysed on the same basis. Yet there is little analysis to date.
  • Aggregation Service — Google is launching a “safe” attribution reporting service, in competition with other vendors. This is welcome, but not on the basis of arbitrary restriction of data to those other vendors. The reported time delay has been reduced to 0-10 minutes, but real-time data is still restricted from rivals’ ad solutions (Google Report, p.20) – and with that restriction, interoperation becomes very difficult. Competing providers of attribution, of whom there are many, should step forward to point out that these data restrictions unduly prevent competition with Google over the Aggregation Service.

Who else is at the party?

An important question for any engagement with regulators is who else is speaking up. The CMA Report provides interesting insights:

  • Advertisers are now speaking up. They are voicing concerns about serving niche content, about Attribution Reporting not aligning with other measurement, and about the loss of measurement of reach across platforms and devices (para. 47, CMA Report). These are all crucial advertiser technologies and a major focus of commercial activity, e.g., audience analysis. Indeed, that is the original attribution service: a Nielsen panel! Google simply says it is “exploring features” (p.22, Google Report). The CMA should not permit another “dog ate my homework” response – but to get there, advertisers will need to speak up more and provide concrete evidence of harm and proposals for mitigation (e.g., abolition of the cross-site and cross-device restrictions).
  • SSPs have spoken up about major concerns: the loss of frequency caps, the time and cost of API implementation, self-preferencing of Google Ad Manager, and — most significantly of all — the loss of interoperation with OpenRTB (para. 48, CMA Report). As with advertisers, evidence and concrete proposals for change will be key.
  • Non-Google cookie successor providers: Some data restrictions would harm innovation in responsible, high-quality data handling systems, including alternatives to the Privacy Sandbox. A clear solution would be to allow any system meeting objective criteria to work in Chrome. Now is the time to speak up, before the horse has bolted.


There is a concern that Google does not always make the significance of the Commitments clear, despite obligations to do so in the Commitments package. It would be helpful to have more pushback here from the CMA, not least because otherwise the Reporting could be seen to bless Google’s approach in any review.

Significantly, there is an innovation in the latest Google report: about half the Report now provides significant detail about what Google has done. This may well be designed to provide points to defend TPC withdrawal in the event that the CMA and Google disagree and the matter goes to court. Some very significant points are hiding in plain sight, e.g., providers of alternatives to the Privacy Sandbox argued that they are foreclosed. Google says it “welcomes efforts to develop alternatives” but that it “will always keep in mind the privacy, safety, and security of its users” (p.38, Google Report).

Unless this is challenged, it could be taken to have put the CMA on notice that Google regards alternatives as — for some unspecified reason — not private, unsafe, or insecure. This is untrue even today – a sweater advert on OpenRTB is hardly “insecure” in any meaningful sense — so why the prejudicial statement? There is a need for comment against these sneaky leading statements, especially where they lack an evidence base.


  • It is a critical time to engage with the process surrounding Third Party Cookie withdrawal.
  • This needs to come in the form of quantitative tests, to ensure that the CMA has the full picture of the Privacy Sandbox and its impacts. If there is concern about the costs to individual firms of testing, it can be done on a cross-industry basis using a shared expert report.
  • This can be provided with full whistleblower protections, preventing reprisals.
  • The most critical element is expert reporting on the commercial impact of the loss of rich data sets and the impact this will have on serving customers.

However, if the work is not done, then the Privacy Sandbox will become a reality through simple inertia – and a precedent will be set for withdrawing other identifier-based technologies, notably the Android MAID.

No More Kings? Due Process and Regulation Without Representation Under the UK Competition Bill

Reposted from invited submission to Truth on the Market.

What should a competition law for the 21st century look like? This point is debated across many jurisdictions. The Digital Markets, Competition, and Consumers Bill (DMCC) would change UK competition law’s approach to large platforms. The bill’s core point is to place the UK Competition and Markets Authority’s (CMA) Digital Markets Unit (DMU) on a statutory footing with relaxed evidentiary standards to regulate so-called “Big Tech” firms more easily. This piece considers some areas to watch as debate regarding the bill unfolds.

Evidence Standards

Since Magna Carta, the question of evidence for government action has been at the heart of regulation. In that case, of course, a jury of peers decided what the government could do. What is the equivalent rule under the DMCC?

The bill contains a judicial-review standard for challenges to DMU evidence. This amounts to a hands-off approach, philosophically quite some distance from the field at Runnymede where King John signed Magna Carta. It is, instead, the social-democratic view that an elite of regulators ought to be empowered for the perceived greater good, subject only to checks that they act within the scope of their powers and that there is a rational connection between those powers and the decision made. There is, in other words, no jury of peers; there is a panel of experts. And on this view, the experts decide what policy to pursue and weigh the evidence for regulation.

This would be wonderful in a world where everyone could always be trusted. But there are risks in this generosity, as it would also allow even quite weak evidence to prevail. For every Queen Elizabeth II, there is a King John. What happens if a future King John takes over a DMU case? Could those affected by weak evidence standards, or capricious interpretations, push back?

That will not be easy. The risk derives from the classic Wednesbury case, which is the starting point for judicial review of agency action in the UK. The case has similarities to Chevron review in the United States, but without subsequent developments such as the analysis of whether policy is properly promulgated by agencies, following West Virginia v. EPA.

Wednesbury requires a determination to be proven irrational before a court can overturn it. This is a very high bar and amounts to only a sanity test. Black cannot be white, but all shades of grey must be accepted by the court, even if the evidence points strongly against the interpretation. For example, consider the question: is there daylight? There is a great difference between an overcast day and a sunny day, and among early dawn, midday, and late dusk. Yet on a Wednesbury approach, even the last daylight hour of the darkest day must be called “daylight”, as, yes, there is daylight. It is essentially a tick-box approach. It trusts the regulator completely on policy: in this case, what counts as bright enough to be called daylight.

At some level, this posture barely trusts the courts at all. It thus foregoes major checks and balances that can helpfully come from the courts. It is myopic, in that sometimes a fresh and neutral pair of eyes is important to ensure sensible, reasonable, and effective approaches. All of us have sometimes focused on a tree and not seen the forest. It can be helpful for a concerned friend to tell us that, provided that the friend is fair, reasonable, and makes the comment based on evidence—and gives us a chance to defend our decision to look only at particular trees.

There has been no suggestion that this fair play is lacking from UK courts, so the bill’s hostility to the tribunal’s role is puzzling. Surely, the DMCC’s intention is not to say: leave me alone when you think I am going wrong?

This has already been criticised in influential commentary, e.g., Florian Mueller’s FOSS Patents blog post on the CMA’s recent decision to block the merger of Microsoft and Activision. It is the core reason for the CMA’s assertive positions in both the Activision case and the earlier Meta/Giphy case, in which, despite a CMA loss on procedural aspects, all policy grounds and evidentiary interpretations withstood challenge.

This will have major implications for worldwide deals and worldwide business practices, not least as it could effectively displace decisions by other jurisdictions to assess evidence more closely, or not to regulate certain aspects of conduct.

There is also the important point that courts’ ability to review evidence has sometimes been very positive. In a nutshell, if the case for regulation is strong, then there should be no problem in the review of evidence by a neutral third party. This can be seen in the leading case on appeal standards in UK telecoms regulation, BT and CityFibre v Ofcom, which—prior to the move to judicial review for such cases—involved deregulation to help encourage innovation in regional business centers (Leeds, Manchester, Birmingham, etc.).

Overreach by Ofcom—in the form of a price cap set predatorily low—was holding back regional business development, because it was not profitable to invest in higher-value but also higher-price next-generation communications systems. This was overturned on an appeal standard that allowed errors in the evidence base to be pointed out; notably, a requirement that there be as many as five rivals in an area before it was considered competitive, which simply contradicted the relevant customer evidence. It is very unlikely that this helpful result would have obtained had the matter been one for hands-off judicial review.

Evidence Framework

Closely related to the first point on judicial review is the question of affirmative evidence standards. Even under a judicial-review standard, the DMU must still apply the factors in the bill. There are significant framings of evidence in the DMCC.

The designation process

This emphasises scale. A worry here is that scale alone displaces the analysis of affirmative evidence—i.e., “big is bad” analysis. What if, as in the title of the recent provocative book, sometimes Big is Beautiful? That thought seems to be lacking from the bill (see s.6(1)(a)). Since a company can be large yet still competitively constrained, it would be helpful to consider consumer impacts at the designation stage. There is no sense in regulating a company just because it is large if the outcomes are good.

The framing of the countervailing benefit exemption

The bill seeks to give voice to consumer impacts in its approach to conduct regulation, but the bar is set high: there must be proof that the conduct is indispensable to, and proportionate to, a consumer benefit (s.29(2)(c)).

This reverses the burden of proof; companies must prove that they benefit consumers. Normally, this is simply left to the market, unless there is market power. You and I buy products in the marketplace, and this is how consumer benefit is assessed.

In a scenario where this cannot be proven, s.20 would allow conduct orders to require “fair and reasonable terms” (s.20(2)(a)). It does not say to whom or according to whom. This risks allowing the DMU to require reasonable treatment of other businesses, unless the defendant company can prove that consumers benefit. There are strong arguments that this risks harming consumers for the sake of less-efficient competitors.

Consumer evidence

S.44(2) allows, but certainly does not mandate, considering consumer benefits before imposing a pro-competition intervention (PCI). Under s.49(1), such a PCI would carry the sweeping market-investigation powers in Schedule 8 of the Enterprise Act 2002, which extend to rewriting contracts (Sch 8, rule 2), setting prices (Sch 8, rules 7 and 8) or even breaking up companies (Sch 8, rules 12 and 13). It is therefore essential that the evidence base be specified more precisely. There must be a clear link back to the concern that gave rise to the PCI and to why the PCI would address it. There is reference to the ability to test remedies in s.49(3) and (4), but this is not mandatory. Without stronger evidentiary requirements, PCIs risk becoming discretionary government control over large companies.

Given the breadth of these powers, it would be helpful to require affirmative evidence in relation to asserted entry barriers and competitive foreclosure. If there is truly a desire to dilute the current evidence standards, then what remains could still be specified. Not specifically requiring evidence of impacts on entry and foreclosure, as in the current proposal, is unwise.

Prohibited Conduct

The contemplated codes of conduct could have far-reaching consequences. Risks include inadvertent prohibitions on the development of new products and requirements to stop product development where there is an impact on rivals. See especially s.20(3)(b) (own-product preference), and (h) (impeding switching to others), which arguably could encompass even pro-competitive product integration. There is an acute need for clarification here, as product development and product integration frequently affect rivals, but it is also important for consumers and other innovative businesses.

It is risky to use overly broad definitions here (e.g., “restricting interoperability”) without saying more about what makes for stronger or weaker cases for interoperation (both scenarios exist). Interoperability is important, but evidence relating to it would benefit from definition. Otherwise:

  • Bill s.20(3)(e) could well capture improvements to product design;
  • Weasel words like “unfair” use of data (s.20(3)(g)) and “users’… own best interests [according to the DMU]” (s.20(2)(e)) are ripe for rent-seeking; and
  • A particular risk with the concept of “using data unfairly” in s.20(3)(g) is that it could be abused to intervene in data-driven markets on an unprincipled basis.

For example, the data provision could easily be used to hobble ad-funded business models that compete with legacy providers. There are tensions here with the stated aim of the legislative consultation, which was to promote, and not to inhibit, innovation.

A simple remediation here would be to apply a balance-of-evidence test keyed to consumer impact, as currently happens with “grey list” consumer-protection matters: the worst risks are “blacklisted” (e.g., excluding liability for death), while more equivocal practices (hidden terms, etc.) are “grey-listed” – illegal, but only where shown, on balance, to be harmful. That simple change would address many of the evidence concerns, as the structure for weighing evidence would be clarified.

Process protections

The multi-phase due-process protections of the mergers and market-investigations regimes are notably lacking from the conduct and PCI frameworks. For example, a merger matter uses different teams and different timeframes for the initial and final determinations of whether a merger can proceed.

This absence is no surprise, as a major reform elsewhere in the DMCC is to overturn the Competition Appeal Tribunal decision in Apple v CMA, in which the CMA was found to have failed statutory market-investigation timing requirements as interpreted by the Tribunal. Those time limits are designed to prevent multiple bites of the cherry and the strategic use of protracted threats of investigation.

The bill would allow the CMA more flexibility than under the existing market-investigation regime. Is the CMA really asking to change the law, having failed to abide by due-process requirements in an existing one? That would be a bit like asking for a new chair, having refused to sit on a vacant chair right in front of you. Unless this is clarified, the proposal could be misread as a due-process exemption, precisely because the DMU does not want to give due process.

The DMCC’s proponents will argue that the designation process provides timeframes and a first phase element in the cases of “strategic market status” (SMS) firms, with conduct and PCI regulation to follow only if a designation occurs. This, however, overlooks a crucial element: the designation process is effectively a bill of attainder, aimed at particular companies. Where, then, are the due-process rights for those affected? Logically, protections should therefore exceed those in the Enterprise Act market-investigation setting, as those are marketwide, whereas DMU action is aimed at particular firms.

A very sensible check and balance here would be for the DMU to have to make a recommendation for another CMA team to review, as is common in merger-clearance matters.

Benchmarking and Reviews

The proposal contains requirements for review (e.g., s.35 on review of conduct enforcement). The requirements are, however, relatively weak. They amount to an in-house review with no clear framework. There is a very strong argument for a different body to review the work and to prevent mission creep. This may even be welcome to the DMU, as it outsources review work.

The standard for review (e.g., benefits to end users) ought to be clearly specified. The vague reference to “effectiveness” is not this, and has more in common with EU law (e.g., Toshiba), where the “effectiveness” of regulation is determined chiefly by the state rather than by the law. (The holding in Toshiba being that, of several interpretations, the state is entitled to the most “effective” one, according to… the state.) To the extent that one hopes the common-law regulatory tradition differs, it is puzzling to see this statist approach persist following UK independence from the EU. Entick v Carrington, the DMCC is not.

Other important benchmarking includes reviews of the work of other jurisdictions. For example, the DMU ought not to be given powers that exceed those of EU regulators. Yet arguably, the current proposal does exactly this by omitting some of the structured evidence points in the EU’s Digital Markets Act. There is also a need to ensure international-comity considerations are given due weight, given the broad jurisdictional tests (s.4: UK users, business, or effect). Others—including, notably, jurisdictions from which the largest companies originate—may make different decisions to regulate or not to regulate.

In the case of the UK-U.S. relationship, there have been some historic disagreements to this effect. For example, is the DMU really to be the George III of the 21st century, telling U.S. business what to do from across the sea? It is doubtful that this is intended, yet some of the commitments packages already have worldwide effect. Some in America might just say: “No more kings!”

Those with a long memory will remember how strenuously the UK government pushed back on perceived U.S. overreach the other way, notably in the Laker Airways v British Airways antitrust litigation of the 1980s, and in the 1990s in the amicus brief the UK government submitted in Hartford Fire Insurance v California, at the U.S. Supreme Court no less. Surely it is not intended that the UK, having objected to de facto U.S. and Californian regulation of Lloyd’s of London, should now regulate U.S. tech giants on a de facto worldwide basis under UK law?

Public opinion will not take kindly to that type of inconsistency. To the extent that Parliament does not intend worldwide regulation—a sort of British Empire of Big Tech regulation—the extent of the powers ought to be clarified. Indeed, attempting worldwide regulation would very predictably fail (e.g., arms races in regulation between the DMU and EU Commission). An EU-UK regulation race would help nobody, and it can still be avoided by attention to constructive comity considerations.

As the DMCC makes its way through parliamentary committees, those with views on these points will have an excellent opportunity to make themselves known, just as the CMA has done in recent global deals.

Computer says: Yes! – AI Industry wants to be regulated … But not too much

Will there be AI regulation, and if so, in what form?

On May 16, 2023, the first Senate hearing on Artificial Intelligence (AI) oversight was held, and there appeared to be a clear consensus among industry participants that some form of AI regulation would be desirable. A few days later, the leaders of the G7 included AI governance and interoperability in their final communiqué, showing that AI is firmly on the agenda across the globe. All this talk of regulating a relatively new industry, one that seems disruptive but whose true effects are not yet known (and are difficult to anticipate and regulate), is all the more remarkable. As several Senators noted during the hearing, it is unique for an industry itself to call for regulation. This raises the question: why such a strong urge to regulate?

At the hearing, the AI industry was represented by the CEO of OpenAI (Sam Altman), the Chief Privacy & Trust Officer of IBM (Christina Montgomery) and noted AI expert Prof. Gary Marcus. Their testimony and the ensuing discussion focused in particular on concerns about:

  1. AI capabilities (e.g. election manipulation, misinformation, etc.);
  2. privacy and copyright (e.g. how the AI models are trained and what data is used); and
  3. how AI can be regulated when so little is known and understood about its effects.

In addition, there was wide agreement among the legislators that the (perceived) mistakes made in regulating social media platforms (i.e. Section 230 of the Communications Decency Act) must be avoided. This sentiment was echoed by all industry participants.

Manipulation and Election Interference

There was great concern about the capability of AI to manipulate users and potentially interfere with elections through its ability to easily produce swaths of misinformation (or as OpenAI put it, “photoshop on steroids”). All participants agreed that regulation on this issue would be needed in combination with better education of the public.

With regard to manipulation, Prof. Marcus noted that so much is unclear about what these models do and what data they use (the opaqueness means, for example, a risk of bias in the data sets) that, combined with the ability of these systems to shape beliefs and perceptions, it would be too dangerous to leave control over such capabilities to a few for-profit companies. The temptation to revert to some form of commercial exploitation would simply be too great.

Privacy and Copyright

The protection of personal data and copyright was discussed in relation to the data sets that need to be used by the AI models. There was some consensus that a new privacy law might be needed to better regulate the data that is being used by the models.

All industry participants also seemed to agree that people have a right to their own data – if truly personal – and should be able to exclude themselves from being included in any data sets to the extent that personal data is used. Similarly, all industry participants agreed that all creators should be rewarded for their work, though how such a system could be put in place remained unclear, as did the boundary with fair and transformative use cases.

Regulations, Licensing schemes and an AI Regulator

The majority of the hearing was spent on directions for potential regulation. IBM noted that regulation will be key to creating public trust in the technology. It would like to see regulation focused on the points where the technology meets the end-user, emphasizing issues such as transparency (i.e. disclosure of the model, its governance and the data used) and accountability. Such regulation should also distinguish between use cases, in IBM’s view, so that “high risk” activities are regulated more strictly than “low risk” activities. OpenAI agreed that providing transparency (such as the values of the model) is important, but noted that regulation should not become unduly burdensome for smaller companies. It proposed that regulatory burdens be differentiated by a model’s capability and computing power.

This is a significant divergence, as the true meanings of transparency, consent, and control are not yet settled, and visions differ on the balance between interoperability and end-user disclosure, as well as on the relationship between transparency and consent.

All industry participants agreed that “nutrition labeling” (i.e. providing basic information to the consumer on the model, the data and other relevant issues) could be a good way to gain the trust of the end-user, which moved the discussion on to how oversight could best take place, and whether the current regulators are capable of providing such oversight.

Prof. Marcus noted that while the FCC could perform some of the work required, it would still fall short of what is needed to induce the required level of trust. He therefore suggested international cooperation on AI standards, if only so that companies would not need to adjust their models for each country separately. OpenAI agreed and suggested creating a global body modeled on the IAEA. Only IBM disagreed, feeling that the current regulators are quite capable of effective oversight; it cited the EU AI Act as a regulatory system that governs different risks without unduly burdening the industry or creating a new regulator.

On what powers any such AI agency should have, OpenAI and Prof. Marcus agreed that it should have a clear remit to provide oversight on: (i) critical capabilities (e.g. persuasion, manipulation, influence, etc.); and (ii) the administration of an industry licensing system (with the power to revoke licenses). It would be up to the agency to determine what capabilities and scale are required for oversight. This is relevant as developments at the most sophisticated end are limited to only a few companies of scale (due to resources and cost). This, as OpenAI noted, has some benefit to a regulator as it limits oversight to only a few established companies.

Subsequent Developments

Subsequent to the hearing, developments regarding the regulation of AI have moved quickly. At the G7 summit in Hiroshima, it was noted that rules governing technologies such as AI have not kept pace with reality. Misinformation through the use of AI was cited as a possible concern (albeit without quite being defined), and the creation of greater trust in AI was said to be needed. To achieve some alignment among members on norms and standards, the “Hiroshima AI process” was established (G7 officials will meet under it for the first time at the end of May).

Not long after that, the world got its first taste of how quickly AI-generated misinformation can spawn, spread, and cause damage. On 23 May, an AI-generated fake image of an explosion at the Pentagon surfaced on a verified Twitter account. In no time, $500 billion had disappeared from the US stock market, a foretaste of the disruption AI-generated misinformation can cause.

A few days later, the President of Microsoft (Brad Smith) gave Microsoft’s view on AI regulation. He felt AI regulation would be needed and proposed measures such as: (i) a licensing requirement for highly capable AI; (ii) a licensing requirement for AI data centers; (iii) labeling of AI-created content; and (iv) safety brakes for AI in critical infrastructure. All of these proposals are intended to help induce trust in the technology.

Meanwhile, OpenAI published a blog post reiterating its view that a global regulator is needed. The European Commission, in the meantime, met with the CEO of Alphabet (Sundar Pichai) to discuss their intention to conclude a voluntary AI pact. This voluntary pact would serve as a stop-gap until further regulations and multilateral guardrails are in place.

A few days after that, IBM set out its view on why, as it had previously stated, a new regulator would be superfluous. Instead, it stated, every agency should be made AI-ready and supported by appropriate legislation and budgets.


Although there is a lot of talk about regulating AI, much remains uncertain. For starters, the very definition of AI is still unclear in most jurisdictions. The same is true of core concepts including personal data, transparency, and consent. As the short overview above demonstrates, the developments of a single week could practically fill a book. For the moment, the EU AI Act is the only concrete legislative proposal, but whether convergence with or divergence from the EU model is desirable (and, if so, desired) remains to be seen.

But the main issue remains that the technology is so new and changing so quickly that it is questionable whether specific laws at this stage can keep up (the EU AI Act has already needed adjustments to accommodate some of the newest developments). As Prof. Marcus noted during the hearing, the tools required for oversight of this technology have not even been invented yet, so what chance of real oversight is there?

There is a notable emphasis on the next step of development: richer and better data sets with which to train the models further. Data is the fuel of this industry. The EU and US know full well that this gives them a key role in how AI can be managed. Stricter privacy and copyright laws can have onerous effects on the growth of AI companies, but may at the same time be necessary to build the trust those companies need in order to obtain such data sets. There is work to be done on exactly what these laws mean in the context of AI, and on how to define them suitably for the new technologies. Certainly, the problems of the last generation of technology, such as the difficulty of deciphering the precise meaning of the GDPR, ought to be avoided.

For the moment the most established models are in the hands of the traditional big tech companies, i.e. OpenAI / Microsoft, Google and Meta. Although there are some challengers on the market (most notably Anthropic, though it is backed by Google), it remains to be seen how effectively their models can compete. Time will tell. Whereas a few months ago there seemed to be a settled belief that LLMs would be too expensive and cumbersome for smaller companies to run, developments around LoRA (low-rank adaptation, a cheap fine-tuning technique) seem to disprove this: a team of researchers built a model similar to ChatGPT for a mere $600 – yes, you read that correctly, $600, not $600m! This ability to deploy competing solutions shows the great promise of the technology, if applied under a suitable risk-based and evidence-based framework: Web 2.0 replaced by a decentralized Web 3.0 model based on neutral risk-governance principles and no gatekeepers. As such, a cynic would note that this push for regulation might well be a play by the more established tech companies at regulatory capture, creating barriers to entry so high that it becomes impossible for upstarts to compete.
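For readers curious why LoRA makes fine-tuning so cheap, the arithmetic can be sketched in a few lines. This is an illustrative toy (not the $600 team’s actual code, and the layer dimensions and rank are assumed values): LoRA freezes a pretrained weight matrix and trains only a small low-rank update, so the number of trainable parameters collapses from millions to tens of thousands per layer.

```python
import numpy as np

# Illustrative sketch of the LoRA idea, with assumed dimensions.
# A frozen pretrained weight W (d x k) is adapted by a low-rank
# update B @ A, so training touches r*(d+k) params instead of d*k.
d, k, r = 4096, 4096, 8  # layer size and LoRA rank (hypothetical values)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pretrained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable, small init
B = np.zeros((d, r))                     # trainable, zero init: W unchanged at start

def adapted_forward(x):
    # Effective weight is W + B @ A, computed without materializing it.
    return x @ W.T + (x @ A.T) @ B.T

full_params = d * k           # parameters a full fine-tune would update
lora_params = r * (d + k)     # parameters LoRA updates
print(f"full fine-tune: {full_params:,} params")   # 16,777,216
print(f"LoRA (r={r}):   {lora_params:,} params "   # 65,536
      f"({100 * lora_params / full_params:.2f}% of full)")
```

At rank 8 the trainable parameters are under half a percent of a full fine-tune for this layer, which is the kind of saving that lets a small team adapt a large base model on a modest budget.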

When one assesses the sincerity of this desire to be regulated, context is important. OpenAI is, at the moment, a market leader with the backing of Microsoft. Its CEO, while on his tour through Europe, was quick to mention that if the regulatory environment in the EU became too onerous he would consider withdrawing its operations entirely, only to walk back this statement a few days later. Similarly, IBM keeps insisting on regulation at the point where the technology meets the end-user when it, as an enterprise services provider, has no contact with end-users at all. So a healthy dose of skepticism is perhaps appropriate.

This means that regulators must keep an equally open mind. Whereas OpenAI noted during the hearing that there may be a benefit for a regulator in regulating just a handful of operational models, this reality could change very quickly. With so much uncertainty remaining, perhaps the best advice comes from Prof. Emily Bender, who said that instead of coming up with new laws and agencies, enforcers should consider resolving the vagueness in existing laws, rather than reinventing the wheel.

Jeff Senduk is Special Counsel to Dnes & Felver, where he focuses on the legal responses to emergent technologies.

* * * * *