The Rise and Demise of Facial Recognition in DAM

This feature article was written by Martin Wilson, Founder of Dash and Director of Bright Interactive.

 

If you ask a DAM vendor to highlight the ROI of their solution, you’ll hear a lot about how much time it saves.

They might say something like this: with a properly organised DAM solution, you can find the perfect visual content for your new marketing campaign in a fraction of the time it would take using simple cloud storage like Google Drive or Dropbox. This leaves you feeling happy and inspired, with plenty of time before your deadline to focus on the creative aspects of your campaign.

They may or may not admit that there’s one stage of the DAM lifecycle that definitely isn’t faster with a DAM solution – when new assets are first added.

Let’s be honest, simply dumping everything into Google Drive doesn’t take long. To get the most from a DAM solution, on the other hand, you need to tag and organise your assets to make them searchable.

Most organisations investing in DAM realise this is time well spent, paying for itself many times over downstream when people need to find and use the assets. While the UX design of the better DAM solutions makes this process as easy as possible, it still takes time.

The promise of auto-tagging

So it’s no surprise that the DAM industry jumped on the potential of machine learning when technologies such as auto-tagging became available a few years ago.

I wrote an article about the mixed results of auto-tagging for DAM News back in 2016. The technology has improved since then, but it is not yet a replacement for humans.

When considering auto-tagging, people tend to think of object tagging, where machine learning systems trained on large volumes of images provide tags relevant to the subjects in the photo (“beach”, “sea”, “ice cream” etc.).

Another auto-tagging technology is facial recognition, which uses a combination of machine learning and biometric data to automatically tag an asset with a person’s name.

Facial recognition is accurate

Facial recognition is impressively accurate (up to 99.97% according to research conducted in 2020), and therefore can be a huge time saver in DAM.

For organisations that need to find photos containing a particular person, the functionality can save a lot of time. Ironically, given the privacy concerns related to it, these are often organisations that need to manage subject consent. For example, a school or university can typically only use photos of current students, meaning that when someone leaves they have to find every photo that person appears in and stop using it. Imagine having to do this manually. Using facial recognition, it takes seconds.
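
To make that concrete, here is a minimal sketch of what such a lookup might look like against a face index built with a back-end service such as Amazon Rekognition (discussed further below). The collection name, the idea of storing the DAM asset ID as the ExternalImageId, and the unpublishing step are illustrative assumptions, not a description of any particular DAM product.

    import boto3

    # Sketch: assumes each asset's faces were previously indexed into a Rekognition
    # collection, with ExternalImageId set to the DAM asset ID.
    rekognition = boto3.client("rekognition")

    COLLECTION_ID = "dam-faces"  # assumed collection name

    def assets_containing(face_id: str) -> set[str]:
        """Return the asset IDs of every photo containing this person's face."""
        response = rekognition.search_faces(
            CollectionId=COLLECTION_ID,
            FaceId=face_id,          # the face identifier stored against the person
            FaceMatchThreshold=95,   # only accept confident matches
            MaxFaces=4096,
        )
        return {match["Face"]["ExternalImageId"] for match in response["FaceMatches"]}

    # e.g. when a student leaves (hypothetical DAM call):
    # for asset_id in assets_containing(student_face_id):
    #     dam.unpublish(asset_id)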

Whenever I demoed our platform to customers and prospects, its facial recognition features were almost guaranteed to provide a “wow” moment.

I now have to look for wows elsewhere, as we recently removed facial recognition capabilities from all our DAM solutions. Let me explain why.

Privacy legislation

The heavy lifting for facial recognition functionality in most software applications is provided by back-end services such as Amazon Rekognition. These work by scanning a photo and generating a unique face identifier for each of the faces they detect. The application making use of this then links each face identifier to a person’s details (for example, their name), which is usually entered by a human.

Each face identifier looks like a string of random characters and, in theory (at least in the case of Amazon Rekognition), is meaningless outside the context of the application for which it was generated. This suggests that, as long as the application itself is using the data responsibly, the risk of an individual’s personal data being misused is very low, even in the event of a data breach.
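
As a rough illustration of that flow, the sketch below indexes one photo with Amazon Rekognition’s IndexFaces API via boto3 and collects the opaque face identifiers it returns. The collection name and the final person-linking step are assumptions made for the example, not part of any specific product.

    import boto3

    rekognition = boto3.client("rekognition")

    COLLECTION_ID = "dam-faces"  # assumed collection name

    def index_asset(asset_id: str, image_bytes: bytes) -> list[str]:
        """Scan one photo and return the opaque face identifiers Rekognition generates."""
        response = rekognition.index_faces(
            CollectionId=COLLECTION_ID,
            Image={"Bytes": image_bytes},
            ExternalImageId=asset_id,  # lets matches be traced back to the DAM asset
            DetectionAttributes=[],
        )
        # Each FaceId is an opaque string, meaningless outside this collection.
        return [record["Face"]["FaceId"] for record in response["FaceRecords"]]

    # A human then links each FaceId to a person's details in the DAM's own
    # database, e.g. (hypothetical): dam.link_face(face_id, person_name="Jane Doe")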

However, some geographic regions consider the face identifier to be biometric data, which is usually classed as special category data. In particular, legislation in many US states now requires a person’s consent not just to store biometric data but even to generate it in the first place. This renders facial recognition functionality unusable for most scenarios – if you have to obtain the consent of every person in a picture before you scan it, you must already have identified them all, which defeats the point of scanning in the first place.

The UK is not quite this strict. For example, the ICO (Information Commissioner’s Office) has this to say:

If you process digital photographs of individuals, this is not automatically biometric data even if you use it for identification purposes. Although a digital image may allow for identification using physical characteristics, it only becomes biometric data if you carry out “specific technical processing”. Usually this involves using the image data to create an individual digital template or profile, which in turn you use for automated image matching and identification.

This would seem to allow facial recognition to be used if the conditions for processing special category data are met. So it would be OK to scan all photos, as long as you only store a person’s face identifier once they have given their consent.
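
In practice, that interpretation might look something like the consent-gated sketch below, where a face identifier is only kept once consent is on record and is otherwise deleted from the collection. The consent registry and the DAM-side persistence call are assumptions for illustration.

    import boto3

    rekognition = boto3.client("rekognition")

    COLLECTION_ID = "dam-faces"            # assumed collection name
    consent_registry = {"jane.doe": True}  # assumed lookup: person ID -> consent on record?

    def link_face_to_person(face_id: str, person_id: str) -> bool:
        """Keep the face identifier only if the person has consented; otherwise discard it."""
        if consent_registry.get(person_id, False):
            # Assumed DAM-side persistence, e.g. dam.store_face_link(person_id, face_id)
            return True
        # No consent on record: delete the biometric template rather than storing it.
        rekognition.delete_faces(CollectionId=COLLECTION_ID, FaceIds=[face_id])
        return False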

So could DAM vendors offer facial recognition capabilities in some regions and not others? I guess so. But what happens if a US citizen appears in a photo stored in the DAM solution of a UK-based company? It’s a legal minefield, and privacy legislation is only going in one direction at the moment.

As a consumer, I find that reassuring. As a software developer, I feel that the right balance between privacy and convenience would allow for more nuance than simply “facial recognition is bad”.

Ruining it for the rest of us

A couple of years ago laws around facial recognition were pretty unclear, prompting John Oliver to describe what he saw as “the chilling expansion of facial recognition technology”.

Using facial recognition to save time tagging pictures of members of your organisation in images you own is one thing. Scraping the Internet for photos of people, grouping them together using facial recognition software and selling that data to anyone who wants it is another. (As I write this, Clearview AI has just been fined for breaching UK data protection laws).

The laws in most regions are now much more explicit, and have come down on the side of privacy. Have we thrown the baby out with the bathwater? I suspect so. Surely these laws could differentiate between different use cases, allowing responsible software vendors and their customers to realise the undeniable benefits of facial recognition?

Pulling the plug

Perhaps they will in the future. Until then it’s just too risky for DAM vendors (the data processors) and their customers (the data controllers) to make use of facial recognition technology. If you are in any doubt about the risk, bear in mind it’s now pretty much impossible to get professional indemnity insurance that will cover claims relating to facial recognition.

So that’s why we pulled the plug on it.

Do any other DAM vendors continue to offer it? If so, I would welcome a conversation in the comments (or you can contact me privately via our website or LinkedIn – see below) to hear how you are managing to do this legally, and without risk to your own organisation or your customers. Perhaps we’ve missed something and we’re being overly cautious. I don’t think so.

 


2 Comments

  • The statement from this article that is most critical when considering whether the privacy concerns regarding facial recognition apply to your organisation is as follows …

    “Using facial recognition to save time tagging pictures of members of your organisation in images you own is one thing. Scraping the Internet for photos of people, grouping them together using facial recognition software and selling that data to anyone who wants it is another.”

    In my experience, almost all intended usage of facial recognition falls into the first category. That is to say – facial recognition will be used to better tag images owned by the organisation implementing the DAM, for the purpose of recognising its own people.

    I have seen only a few organisations wanting to identify images based upon a wider internet scan.

    To be clear, AWS Rekognition offers several different facial recognition capabilities; please see https://aws.amazon.com/rekognition/

    At the time of writing there are three types of AI processing of facial images which fall into the first category above, i.e. analysing images that the organisation implementing the DAM owns and that show its own people … and then there is Celebrity recognition.

    Celebrity recognition does maintain a general, internet-wide awareness of facial information for recognition. The above article definitely raises appropriate privacy concerns if you were to use Celebrity recognition.

    However, if you want to use facial recognition to tag images of your own people, using images that you own or have licensed, and, very importantly, have in place appropriate HR processes so that your people are engaged with such processing … have a DAM solution that allows you to configure exactly what sort of facial recognition services you want to use … plus have a very secure DAM solution in which to store such highly sensitive information (see NIST, ISO 27001 and SOC 2) … then using facial recognition can provide you with a very high business return on value, with improved accuracy and findability of content held in your DAM.

  • Ricky, I can’t speak for Martin Wilson, the author of this piece, but I think there are some points to make clear about the quote from the article you refer to:

    Firstly, the privacy decisions around data generated by FR aren’t determined by DAM vendors (nor even the Digital Asset Managers who may use their products) but by the policies and practices of the customers who decide to implement DAM, i.e. the end-users. A vendor may set this up for an organisation with the intent of it being used solely to enhance productivity (e.g. by making it faster and easier to catalogue staff photos), but that doesn’t necessarily mean it will be employed solely for that purpose, either now or at some point in the future. As the vendor, you can’t ultimately exercise a lot of control over what a DAM gets used for (nor should you, imo).

    Secondly, if the organisation that has implemented a vendor’s DAM subsequently gets sued because of its use, the vendor may be implicated in any legal action brought against the end-user of the DAM which used FR. The implication is that it’s difficult (and expensive) to get Professional Indemnity insurance if a vendor implements a DAM that uses FR. If the vendor’s PI insurance doesn’t cover them, then they will become liable for all the legal costs of defending themselves in any test case. The same applies to implementation partners of DAM vendors, btw.

    I note your points about adhering to data privacy, safety and security best practices, but (as a vendor or partner thereof) you can do all that, still get sued, and then not have your insurer cover you under current UK, EU and some US states’ data protection legislation. It is a legal minefield, as Martin points out.

    The article is arguing that the legislation about FR and liability resulting from it is faulty and damaging to innovation because it doesn’t properly differentiate between those who implement FR and those who use it.
    The upshot is that insurers currently protect themselves from claims by refusing to cover implementations where FR is used. It doesn’t matter that most DAM end-users won’t use FR for illegitimate purposes; it’s the mere risk that just one of them *might* which is the problem. The issue is not a technological one, it’s about the drafting of current legislation and its knock-on effect on PI insurance cover.

    One further point I will observe is that if the same legislative precedent applies to Generative AI as it does to FR, then it will also become very difficult for vendors who decide to use that tech to get insurance cover. As such, this is quite an important issue (and not an entirely technological one), and vendors need to engage with it more than they are doing right now. I suspect this is because most of them haven’t really taken the time to understand what it actually means for them.
