When less is more: Targeted advertising regulation, New York-style

Is targeted advertising creepier than Sleepy Hollow on Halloween? It depends on the data used to deliver the advert. Few will mind an advert built on anonymous data from internet activity, especially if it relates to something innocuous – say, a holiday or a sweater. It is quite different when the data relates to a specific individual’s sensitive information, such as their health conditions. While it may be possible to infer likely health conditions by cross-correlation – and there may even be instances where this is useful – there is a strong argument that sensitive data should only be used subject to consent.

Addressing specific concerns

As with so much targeted advertising regulation, the issue then becomes how to prevent a specific abuse without banning everything. On this, the New York legislature has taken an interesting approach in passing fiscal bill A.3007C/S.4007.

The new law bans a specific use: adverts cannot be delivered using an individual’s health-care-related geolocation data. Significantly, decisions to deliver ads that are not based on an individual’s health-related geolocation are unaffected. New Yorkers should not, for example, expect an ad-free internet the moment they step into a pharmacy. Moreover, a responsible advertising system can still strip out the sensitive use without losing other innocuous but valuable insights: it simply becomes illegal to use the problematic data to deliver ads or to build profiles of individual consumers from it. Most significantly of all, the law provides clarity around a specific boundary of acceptable use. There are none of the fuzzy boundaries seen elsewhere, most notably in the EU GDPR and its vague, cross-cutting definitions.

In this, the law is a microcosm of a wider pattern in approaches to regulation. Historically, common law jurisdictions – such as most jurisdictions within the US, and the UK – take the position that all commercial activity is permitted unless banned, as with the ban on specific uses of an individual’s geolocation in the new New York law. By contrast, the EU GDPR reflects a continental European tradition in which regulators are empowered to promote the greater good – as they see it – subject only to light-touch legal review for clear cases of error. The New York law seems much preferable because it provides clear boundaries, rather than empowering a technocratic elite on a discretionary basis.

What does this mean for any future federal privacy law?

If there is ever a US federal privacy law, it will be important to see whether it tracks this common law tradition, which offers businesses more practical, clear-cut guidance on acceptable and unacceptable uses of data. FTC privacy enforcement to date has developed from particular cases, which helps provide a degree of clarity as to the boundaries of use. Some proposals for reform, notably the Klobuchar Bill, have included requirements to define and focus on high-risk use cases. Keeping the approach targeted on particular use cases, and on specifically defined harms, helps to avoid vagueness.

Responsible safeguards are also assisted by prior definition of the issues they are required to address. A specific law allows vendors to develop specific safeguards, and then to make use of the other, responsibly held data. Here the dismissal of the FTC’s scattergun pursuit of geolocation data in Kochava looms large. There, an Idaho data vendor had applied reasonable safeguards to address concerns about geolocation data (such as filtering all known high-risk locations from its data set) – and a federal judge saw grave issues in the FTC pursuing the business despite the reasonable safeguards used.

Back to the Future with Roberson v Rochester

Lawyers with long memories may recall the classic 1902 case, still often taught in law schools, Roberson v Rochester Folding Box Co. There, the New York Court of Appeals found no right to control the use of Roberson’s image in an advertising campaign – prompting swift legal reform that provided exactly this right, but on a specific and targeted basis. 121 years on, New York finds itself once again setting out its stall for a common law approach, in which specific and targeted action is favored over broad bureaucratic empowerments, and other, non-harmful business practices are left undisturbed.

Early Personally Identifiable Information: 17-year-old Abigail Roberson on a Franklin Mills Flour advertisement

The Roberson case will also be of interest to those in Europe who question whether the GDPR approach is the right one – not least in the UK, which is currently considering clarifying some of the ambiguity within the GDPR through the Data Protection Bill (No.2). There, it might be time to define the boundaries of acceptable use, providing clarity over the relevant harms so as to address them while leaving other uses – and the value they generate – available. The New York law offers pragmatic guidance here: it prohibits associating one type of sensitive data (health care) with specific individuals, whether to build profiles or to deliver advertising.

Lessons for the AI era

So, 121 years on from Roberson, the same fundamental question arises: what is reasonable in context? What is the list of reasonable concerns, such that they can be addressed? This will be the key architectural question as data laws are updated for the AI era. Seen in this way, New York’s 121 years of experience are historic in the best possible sense.
