Debate: Is Facial Recognition For DAM Worth The Legal Risks Or Not?

Facial Recognition

Earlier this month, we published an article by Martin Wilson about how data protection legislation had made Facial Recognition (FR) too legally fraught to implement for DAM. David Tenenbaum, founder and CEO of vendor MerlinOne, has contributed a rebuttal to Martin’s assertions about FR for Digital Asset Management:

“Radioactive materials are dangerous, but does that mean we should ban them from any use, even the tightly controlled (and essentially zero risk) systems that use radiation to combat cancer and save lives? We think the same applies to FR in DAM, especially in this time of scarce metadata and high workloads on people. We need to responsibly employ every tool we can get to help people manage and put to use their exploding collections of content!” [Read More]

David’s article is here:

The original by Martin is here:

The thesis of David’s argument is that much of the legislation around unsuitable uses of FR relates to some high-profile privacy cases where FR has been abused.  David also presents some examples from supermarkets in the UK where it was employed to help predict whether prospective purchasers of alcohol were underage.

To be clear, neither David nor Martin is opposed to the use of FR in DAM, and both acknowledge that it offers considerable opportunity as a productivity-enhancing tool (exactly the kind at which DAM excels).  I agree with both of them on this point.  Unlike many other implementations of AI in DAM, FR offers a far higher degree of accuracy because the problem domain is far easier to define, so the odds of successful recognition shorten (i.e. improve).

The introduction of some relatively strict privacy and data protection legislation has made the use of this technology somewhat more complex, however.  To what extent it has been rendered unusable is the crux of the debate; the issues are more about liability and risk.  David’s argument is that the risk of litigation is low and the technology should therefore be utilised, since it clearly offers benefits which are almost impossible to replace with anything other than expensive manual human effort.  Martin’s point is that there may be risks to both DAM users and vendors because no insurer will offer Professional Indemnity cover if a case is brought against one or the other involving the use of FR in DAM solutions.

I suspect that David is correct that the risk of a case being brought is quite low.  On the other hand, were that to happen (with a DAM user and/or vendor as defendants), the consequences of losing could be devastating: any damages awarded to the plaintiff (not to mention legal costs) would put most DAM vendors out of business.

Ideally, there would be a legal precedent in the form of a test case which would help improve the clarity over the issue of liability.  With that said, I can see how a lot of DAM vendors would be uneasy over being the legal guinea pigs in that scenario.  On the other hand, those who see this risk as negligible (or perhaps even non-existent) could gain commercial advantage over their more cautious peers – and potentially never be challenged over it.

As AI tools become more specialised (and possibly improve as well), these kinds of conundrums are ones which DAM vendors and users will be forced to confront. As such, this is a subject worthy of further discussion and debate.


One comment

  • I found this piece particularly interesting, as there is always concern and heavy regulation when it comes to using AI as it relates to privacy. In my opinion, facial recognition AI is a benefit if one thinks about it in terms of the example of using it in a supermarket to predict whether purchasers of alcohol are underage. However, one must consider the privacy concerns that arise without appropriate consent from consumers. On the other hand, most people today have a social media presence, so not much is private in terms of having your face out there. That is of course not an excuse to take privacy less seriously; however, I do agree with the assessment that it is relatively low risk. A vendor is ultimately responsible for ethically using facial recognition AI and safeguarding the data. It is definitely a grey and tricky area to navigate. If misused or susceptible to breaches, facial recognition can be used adversely and infringe upon the right to anonymity and privacy.
