Has Data Protection Legislation Rendered Facial Recognition Technology Practically Unusable for DAM?



Martin Wilson, Founder of the Dash and Asset Bank DAM systems as well as Director of Bright Interactive, has recently contributed a DAM News feature article, The Rise and Demise of Facial Recognition in DAM.  Martin makes the point that facial recognition is one of the more successful applications of AI in DAM (in contrast to many of the generalised object recognition technologies, which are far more hit and miss).

“For organisations that need to be able to find photos containing a particular person, the functionality can save a lot of time. Ironically, given the privacy concerns related to it, this is often organisations needing to manage subject consent. For example, typically a school or university can only use photos of current students, meaning when someone leaves they have to find all the photos they are in and stop using them. Imagine having to do this manually. Using facial recognition, it takes seconds.” [Read More]
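As a purely illustrative sketch of how this kind of lookup tends to work, the snippet below compares a reference face embedding for the departed person against the face vectors recorded for each asset and returns the matching asset IDs.  All of the names here (Asset, find_assets_containing, the similarity threshold) are hypothetical assumptions for illustration and are not taken from Dash, Asset Bank or any other specific product.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Asset:
    asset_id: str
    face_embeddings: list[np.ndarray]  # one vector per face detected in the image


def find_assets_containing(person_embedding: np.ndarray,
                           assets: list[Asset],
                           threshold: float = 0.6) -> list[str]:
    """Return the IDs of assets in which any detected face matches the person."""
    matches = []
    for asset in assets:
        for emb in asset.face_embeddings:
            # Cosine similarity between a stored face vector and the reference
            # embedding for the person whose photos need to be withdrawn.
            sim = float(np.dot(emb, person_embedding) /
                        (np.linalg.norm(emb) * np.linalg.norm(person_embedding)))
            if sim >= threshold:
                matches.append(asset.asset_id)
                break
    return matches
```

The point of the example is simply that, once the embeddings exist, the search itself is a fast vector comparison, which is why the manual alternative looks so unattractive by contrast.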

A stat he quotes (with an attributed source) is that facial recognition can achieve 99.97% accuracy.  Martin goes on to explore how privacy and data protection legislation has made facial recognition very difficult to use from a practical perspective:

“…some geographic regions consider the face identifier to be biometric data, which is usually classed as special category data. In particular, legislation in many states in the US now requires a user’s consent not just to store biometric data but even before it can be generated. This renders facial recognition functionality unusable for most scenarios – if you can prove that you have the consent of every person in a picture before you scan it then you must have already identified them all.” [Read More]
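To make the circularity concrete, here is a minimal, hypothetical consent-gate sketch (all identifiers are invented for illustration, not drawn from any real product or statute): before generating biometric data for an asset, you would need to confirm consent from everyone pictured, yet you only know who is pictured after running the very scan the check is supposed to authorise.

```python
def can_generate_face_data(asset_id: str,
                           consent_records: dict[str, set[str]],
                           people_in_asset: set[str] | None) -> bool:
    """Check whether biometric data may be generated for an asset.

    consent_records maps a person ID to the set of asset IDs they have
    consented to.  The problem Martin describes: people_in_asset has to be
    known up front, which would normally require running facial recognition
    in the first place.
    """
    if people_in_asset is None:
        # We cannot prove consent for people we have not yet identified,
        # so the only safe answer is "no" -- hence the catch-22.
        return False
    return all(asset_id in consent_records.get(person, set())
               for person in people_in_asset)
```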

He further describes how it is nearly impossible to get Professional Indemnity insurance cover for claims that might relate to the use of facial recognition in DAM software.

This is a good article with some excellent, concisely argued points that illustrate the potential minefield which AI tools can present for DAM vendors and users alike.

My understanding is that the original research for facial recognition was paid for by casinos in Las Vegas who wanted to stop blackjack card counters from entering their premises.  Part of the reason it works so well compared with generalised object recognition is that the problem domain is very tightly defined (i.e. recognising faces) and this produces superior results for AI algorithms because a lot of the extraneous ‘noise’ can be ignored.  Where you read about successful applications of AI, they are nearly always for problems with a similarly tightly defined scope.

If the technology is effective, it becomes far more likely that privacy and data protection issues, rather than the unreliability of the software, will come to the fore.  The insurance aspect (and specifically, the liability issue) is a point I made back in 2016.  If a human being carries out an action or makes an assumption, there is implicit and explicit liability, so there is a far clearer legal counterparty.  If an AI computer program does the same, who is responsible?  The software?  The user?  The developer?  Someone else?  The implications (and costs) could be huge.  Insurance companies know this, and the potential liability is not something they wish to be on the hook for in the event of something going awry.

Recently, synthetic content has been discussed quite a lot in some of the more innovative DAM forums, especially the possibility that digital assets could be rendered directly from metadata descriptions or keywords.  In actuality, the ‘synthetic content’ is often sourced from component parts of existing photos, some of which contain biometric data for which explicit permission has not been obtained.  A similar set of privacy implications to those that emerged with facial recognition is already starting to surface, together with new forms of model release licence.

Martin has made a request that other vendors who use facial recognition tools as part of their solutions should contact him to discuss the issues.  I think that is a great idea and, while DAM vendors are generally awful at engaging in meaningful dialogue with each other about these kinds of serious industry topics, on this occasion it is in everyone’s collective interest that some kind of discussion is at least initiated.

The full article is here: https://digitalassetmanagementnews.org/features/the-rise-and-demise-of-facial-recognition-in-dam/ and Martin can be contacted on LinkedIn – https://www.linkedin.com/in/martinrwilson/
