The Rise and Demise of Facial Recognition in DAM

This feature article was written by Martin Wilson, Founder of Dash and Director of Bright Interactive.

If you ask a DAM vendor to highlight the ROI of their solution you’ll hear a lot about how much time it saves.

They might say something like this: with a properly organised DAM solution, you can find perfect visual content for your new marketing campaign in a fraction of the time it would take using simple cloud storage like Google Drive or Dropbox. This leaves you feeling happy and inspired and with lots of time before your deadline to focus on the creative aspects of your campaign.

They may or may not admit that there’s one stage of the DAM lifecycle that definitely isn’t faster with a DAM solution – when new assets are first added.

Let’s be honest: simply dumping everything into Google Drive doesn’t take long, whereas to get the most from a DAM solution you need to tag and organise your assets to make them searchable.

Most organisations investing in DAM realise this is time well spent, paying for itself many times over downstream when people need to find and use the assets. While the UX design of the better DAM solutions makes this process as easy as possible, it still takes time.

The promise of auto-tagging

So it’s no surprise that the DAM industry jumped on the potential of machine learning when technologies such as auto-tagging became available a few years ago.

I wrote an article about the mixed results of auto-tagging for DAM News back in 2016. The technology has improved since then, but it is not yet a replacement for humans.

When considering auto-tagging people tend to think of object tagging, where machine learning systems trained on large volumes of images provide tags relevant to the subjects in the photo (“beach”, “sea”, “ice cream” etc).

Another auto-tagging technology is facial recognition, which uses a combination of machine learning and biometric data to automatically tag an asset with a person’s name.

Facial recognition is accurate

Facial recognition is impressively accurate (up to 99.97% according to research conducted in 2020), and therefore can be a huge time saver in DAM.

For organisations that need to find photos containing a particular person, the functionality can save a lot of time. Ironically, given the privacy concerns related to it, this is often organisations that need to manage subject consent. For example, a school or university can typically only use photos of current students, meaning that when someone leaves, staff have to find every photo that person appears in and stop using them. Imagine having to do this manually. Using facial recognition, it takes seconds.
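To see why this lookup is so fast, here is a toy sketch (all file names and identifiers are hypothetical) of how a DAM could answer “which photos does this person appear in?” once assets have been tagged with the opaque face identifiers a recognition service returns:

```python
from collections import defaultdict

# Hypothetical asset catalogue: each asset has already been tagged with the
# opaque face identifiers returned by a facial-recognition service.
assets = [
    {"file": "open-day.jpg",   "face_ids": {"f-01", "f-02"}},
    {"file": "graduation.jpg", "face_ids": {"f-02"}},
    {"file": "sports-day.jpg", "face_ids": {"f-03"}},
]

# Build a reverse index once; after that, the "student has left, find all
# their photos" query is a single dictionary lookup.
photos_of = defaultdict(list)
for asset in assets:
    for face_id in asset["face_ids"]:
        photos_of[face_id].append(asset["file"])

print(sorted(photos_of["f-02"]))  # ['graduation.jpg', 'open-day.jpg']
```

The manual alternative is a human scanning every photo in the library; the indexed version is effectively instant regardless of library size.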

Whenever I demoed our platform to customers and prospects, its facial recognition features were almost guaranteed to provide a “wow” moment.

I now have to look for wows elsewhere, as we recently removed facial recognition capabilities from all our DAM solutions. Let me explain why.

Privacy legislation

The heavy lifting for facial recognition functionality in most software applications is provided by back-end services such as Amazon Rekognition. They work by scanning a photo and generating a unique face identifier for each face detected. The applications making use of this then link each face identifier to a person’s details (for example, their name), usually entered by a human.
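As a toy sketch of that division of labour (the `scan_faces` function below is a hypothetical stand-in for a service like Amazon Rekognition, not its real API): the service turns faces into opaque identifiers, and the application keeps its own link from identifier to name, entered once by a human.

```python
import uuid

# Hypothetical stand-in for the back-end scan step: each distinct face
# resolves to one opaque identifier. A real service derives this from facial
# geometry; this toy version keys on labels embedded in the fake photo data.
_face_index = {}

def scan_faces(photo):
    return [_face_index.setdefault(face, uuid.uuid4().hex)
            for face in photo["faces"]]

# The application's side: link opaque identifiers to human-entered names.
people = {}

def tag_photo(photo):
    return [people.get(face_id, "unknown") for face_id in scan_faces(photo)]

photo_1 = {"file": "sports-day.jpg", "faces": ["alice", "bob"]}
alice_id, bob_id = scan_faces(photo_1)
people[alice_id] = "Alice Jones"  # entered by a human, once

photo_2 = {"file": "prize-giving.jpg", "faces": ["alice"]}
print(tag_photo(photo_2))  # ['Alice Jones'] – recognised with no re-entry
```

The point of the split is that the human names a face once; every later photo containing the same face resolves to the same identifier and is tagged automatically.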

Each face identifier looks like a set of random characters and, in theory (at least in the case of Amazon Rekognition), is meaningless out of the context of the application for which it was generated. This suggests that as long as the application itself is using the data responsibly, the risks for individuals of their personal data being misused are very low, even in the cases of data breaches.

However, some jurisdictions consider the face identifier to be biometric data, which is usually classed as special category data. In particular, legislation in many US states now requires a person’s consent not just to store biometric data but before it can even be generated. This renders facial recognition functionality unusable for most scenarios: if you have to prove you have the consent of every person in a picture before you scan it, you must have already identified them all – which is exactly what the scan was for.

The UK is not quite this strict. For example, the ICO (Information Commissioner’s Office) has this to say:

If you process digital photographs of individuals, this is not automatically biometric data even if you use it for identification purposes. Although a digital image may allow for identification using physical characteristics, it only becomes biometric data if you carry out “specific technical processing”. Usually this involves using the image data to create an individual digital template or profile, which in turn you use for automated image matching and identification.

This would seem to allow facial recognition to be used if the conditions for processing special category data are met. So it would be acceptable to scan all photos, as long as you only store the face identifiers of people who have given their consent.
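In code, that consent-gated rule is a small but crucial filter between the scan and the database (identifiers and the consent register here are hypothetical):

```python
# Identifiers of people who have opted in, held in a consent register.
consented = {"face-7f3a"}

def store_identifiers(detected_face_ids, store):
    """Persist only the face identifiers of consenting individuals."""
    for face_id in detected_face_ids:
        if face_id in consented:
            store.add(face_id)
        # Identifiers of non-consenting people are discarded immediately
        # after the scan and never written to storage.

store = set()
store_identifiers(["face-7f3a", "face-22b1"], store)
print(store)  # {'face-7f3a'}
```

The distinction the ICO wording draws is between performing the scan (which happens for every photo) and retaining the resulting biometric template (which happens only with consent).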

So could DAM vendors offer facial recognition capabilities in some regions and not others? I guess so. But what happens if a US citizen appears in a photo stored in the DAM solution of a UK-based company? It’s a legal minefield, and privacy legislation is only going in one direction at the moment.

As a consumer, that’s reassuring. As a software developer, it feels like the right balance between privacy and convenience would allow for more nuance than simply “facial recognition is bad”.

Ruining it for the rest of us

A couple of years ago laws around facial recognition were pretty unclear, prompting John Oliver to describe what he saw as “the chilling expansion of facial recognition technology”.

Using facial recognition to save time tagging pictures of members of your organisation in images you own is one thing. Scraping the Internet for photos of people, grouping them together using facial recognition software and selling that data to anyone who wants it is another. (As I write this, Clearview AI has just been fined for breaching UK data protection laws).

The laws in most regions are now much more explicit, and have come down on the side of privacy. Have we thrown the baby out with the bathwater? I suspect so. Surely these laws could differentiate between different use cases, allowing responsible software vendors and their customers to realise the undeniable benefits of facial recognition?

Pulling the plug

Perhaps they will in the future. Until then it’s just too risky for DAM vendors (the data processors) and their customers (the data controllers) to make use of facial recognition technology. If you are in any doubt about the risk, bear in mind it’s now pretty much impossible to get professional indemnity insurance that will cover claims relating to facial recognition.

So that’s why we pulled the plug on it.

Do any other DAM vendors continue to offer it? If so, I would welcome a conversation in the comments (or you can contact me privately via our website or LinkedIn) to hear how you are managing to do this legally, and without risk to your own organisation or your customers. Perhaps we’ve missed something and we’re being overly cautious. I don’t think so.
