Subconscious and Conscious Data: Where AI & Machine Learning Could Create Genuine Value for Digital Asset Management (DAM)

This post originally appeared on DAM News in June 2019.

 

Recently I have been working with a client who was dissatisfied with the results they were getting from AI image recognition tools they were using for their DAM initiative.  They had read an article I had written about my theory that metadata is contextual data about digital assets which could be derived from digital asset supply chains.

We started to look into Machine Learning (ML) techniques to see whether inferences could be gleaned from the auditing data their DAM solution was collecting.  What became apparent was that they were limited by the lack of behavioural data from both asset users and asset suppliers for searching and cataloguing, respectively.  Data, in general (and metadata, in particular) was the key missing ingredient.
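To make the idea concrete, here is a purely illustrative sketch of the kind of inference that might be gleaned from DAM auditing data: if users repeatedly download an asset shortly after searching for a particular term, that term becomes a candidate keyword for the asset.  The event structure and field names are my own assumptions rather than anything from a real DAM platform, and the logic is deliberately simplistic.

```python
# Illustrative sketch only: mining keyword candidates from a DAM audit log by
# pairing each download with the searcher's most recent search term.
# Event shapes and field names are hypothetical.

from collections import Counter, defaultdict

def suggest_keywords(audit_events, min_count=3):
    """audit_events: chronologically ordered dicts such as
       {"user": "u1", "action": "search", "term": "head office"} or
       {"user": "u1", "action": "download", "asset_id": "A42"}."""
    last_search = {}                   # most recent search term per user
    candidates = defaultdict(Counter)  # asset_id -> Counter of search terms

    for event in audit_events:
        if event["action"] == "search":
            last_search[event["user"]] = event["term"]
        elif event["action"] == "download":
            term = last_search.get(event["user"])
            if term:
                candidates[event["asset_id"]][term] += 1

    # Only keep terms that co-occur with an asset often enough to be plausible.
    return {
        asset_id: [term for term, count in counts.items() if count >= min_count]
        for asset_id, counts in candidates.items()
    }
```

Even something this crude only works if the behavioural data exists in the first place, which is precisely what was missing in this case.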

This realisation helped me to re-frame my approach and prompted some research into more psychological and even philosophical techniques to help get closer to solving the automated digital asset cataloguing problem in DAM.  In this article I am going to explain the limitations of the current approaches to using AI in DAM and propose what I believe is a superior method.

The Issues with Current AI/ML Tools & DAM Solutions

As I write, nearly every commercially available DAM solution has now implemented support for some kind of AI tool.  None has developed one in-house, however, and all use one or more components marketed by third-party tool vendors.  Previously, I have compared these components to conjuring tricks.  They are not really ‘intelligent’ (according to most definitions of the term); instead, they depend on a combination of statistical probability and the industrialisation of the pattern-matching process, which allows far more prospective matches to be checked.

I believe most current DAM vendor attempts to utilise AI either have already resulted (or soon will result) in failure.  My anecdotal evidence is that within six months of deployment, about 70% of clients ask for these features either to be switched off or for the metadata they provide to be ring-fenced and excluded from normal searches.  This is roughly equivalent to getting an inexperienced person like an intern or trainee to catalogue your digital assets and then throwing away 70% of their work because it is unusable.  This is the current reality of AI-based digital asset cataloguing: vendors have succeeded in automating the generation of very low quality metadata (and they have needed to use someone else’s product to even do that).

Whenever I sit through a demo of these components, they all take a roughly similar trajectory.  The vendor will upload a single generic and easily recognisable image (such as a common object or scene) and the suggested keywords will initially seem quite reasonable.  In the minds of the audience, the problem is now ‘solved’ – based on one image.  When these tools get applied to real-world Enterprise DAM, however, either large numbers of digital assets end up with exactly the same keywords, or the keywords are so generic that they are of limited value.

An important, but frequently overlooked, point about AI image recognition components is that they are designed for large-scale mass-market use cases, not the kind of marketing/corporate users who are the typical customers of many DAM vendors.  For evidence of this, I note that ClarifAI target the wedding photographer market.  They will certainly take any revenue on offer from the DAM sector, but I suspect they do not regard it as a very high priority, especially given the complexities of the Digital Asset Management problem domain and its relatively limited revenue opportunity.

An Alternative Approach

The core problem is that most Enterprise DAM users are not searching for generic, non-context-dependent assets.  The fact that you can see an image of a tall building may not mean much; what is important is whether or not it is a photo of your head office.  The physical characteristics of the image are far less important than the relevance of the subject matter to you and your current digital asset needs.  The context is the significant factor determining whether or not an asset’s metadata will help you find it.

Where Does Contextual Relevance Come From?

Trying to get the context purely from the pixels in the image itself is going to be quite difficult unless someone has already done the job for you – and across the entire spectrum of contexts that comprise your current range of interests.  How likely is that?  Think of all the images you have looked at in the last week (everything on your smartphone, computer, TV, printed magazines, books, billboards, sides of buses etc).  How many do you think you have seen in total?

Even if you could train an AI tool to understand the contextual relevance of all this material to you, specifically, doing it over billions or even trillions of images is a fool’s errand.  Yet this is what is currently being proposed by image recognition tool vendors: the implicit promise is that the approach will get better because someone, somewhere will ‘automagically’ solve the problem.  Does that seem realistic?

As mentioned earlier, the fact that some image recognition tool vendors have been required to introduce specialised modules or dedicated plug-ins to identify ever more diverse subjects provides clear evidence of the lack of sustainability of this approach.  The end-game is no end-game; ever more complex algorithms will be needed to try to deal with increasingly sophisticated detection requirements.  Essentially, vendors are hoping to develop a functional replacement for human visual interpretation strategies (and the billions of years of evolution which have helped refine them).  As the saying goes, hope is not a strategy.

The question then becomes: where else can we get the contextual clues that will help pinpoint the subjective relevance of an image?  In the rest of this article I will explain a method I believe could be far more effective and easier to implement than those in use at present.

Subconscious and Conscious Data

“What do I know?”

Michel de Montaigne, 1560

“I know that I am thinking” (I think, therefore I am)

René Descartes, 1640

To help devise a model for generating contextual hints that might yield useful automated digital asset metadata, let’s analyse how human beings deal with processing data and see if there are any insights which can be applied to DAM systems.

Subconsciously, human beings remember everything that has ever happened to them and no memory is ever lost.  In digital terms, this is comparable to raw data.  What determines whether or not you will consciously remember something is your belief system.  As discussed in another article I have written previously: Metadata = Context = Meaning = Value.  In DAM terms, therefore, a belief system is synonymous with a metadata model.

To give a simple example, it is quite common that when you meet someone for the first time and they tell you their name, you forget it.  Even though it will have been subconsciously committed to memory, you may not always consciously recall it because this new person probably has little relevance to you at that time.  If you keep meeting them and they become more significant in your work or social circles, then you are increasingly likely to memorise their name.  Similarly, if they are attractive, can potentially help you to achieve other goals or have some unique attribute which is outside your normal range of experience, the probability of you remembering it also rises.

There is a parallel here with the inability of a DAM user to find a relevant digital asset.  If it lacks contextual clues (metadata) the asset will be meaningless and difficult to find, unless it was so important that someone added it to a dedicated collection beforehand.  Contextual relevance is fundamental to being able to recall anything, for both human beings and DAM systems.  If a memory has nothing which relates it to other memories, you won’t consciously remember it.  Similarly, if your DAM lacks a metadata model that relates an item of data to a digital asset, you won’t find that either.  Metadata is contextual data, or what you could call ‘conscious data’ for your DAM.

The Implications of Subconscious and Conscious Data

By approaching DAM and Machine Learning from a data and metadata perspective (subconscious and conscious data) there is a far more realistic possibility of replicating the human interpretation skills that are essential for generating genuinely useful automated metadata.  If it were possible to digitally capture every interaction that an experienced digital asset manager had with a collection of digital assets, you would have some of the raw data needed to reverse engineer their behaviour and reproduce it more faithfully.
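As a simple illustration of what ‘reverse engineering’ that behaviour might involve, the sketch below assumes the captured interactions have already been reduced to pairs of contextual text about an asset and the keywords the cataloguer actually applied.  The choice of scikit-learn and the toy data are my own assumptions; this is a sketch of the principle, not a production recipe.

```python
# Illustrative sketch: learning to imitate a cataloguer from captured pairs of
# (contextual text about an asset, keywords they applied). Library choice and
# data are assumptions for demonstration only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical training pairs harvested from the audit trail.
contexts = [
    "photo of head office atrium, London, uploaded by facilities team",
    "product shot, spring campaign, studio lighting",
]
chosen_keywords = [
    ["head office", "london", "architecture"],
    ["product", "spring campaign"],
]

vectoriser = TfidfVectorizer()
binariser = MultiLabelBinarizer()

X = vectoriser.fit_transform(contexts)           # text -> feature matrix
Y = binariser.fit_transform(chosen_keywords)     # keyword lists -> label matrix

model = OneVsRestClassifier(LogisticRegression(max_iter=1000))
model.fit(X, Y)

# Suggest keywords for a new asset in the same way the cataloguer might have.
# With this little data the suggestions will be weak; scale is the whole point.
new_context = ["exterior photo of the head office building"]
predicted = model.predict(vectoriser.transform(new_context))
print(binariser.inverse_transform(predicted))
```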

Clearly, not everything can be linked back to an interaction with a computer (let alone a DAM system), e.g. when a digital asset manager is thinking about cataloguing issues and not using any digital device.  The fact that ever more interactions occur in a digital environment, however, increases the range and depth of data which can be collected to help improve this process.  This has some implications for both vendors and end users of DAM systems which I will examine below.

All Data is Potentially Valuable

As should be apparent from the above discussion, the more data you have available, the greater the opportunity to obtain insights from it.  This implies that as much as possible should be collected when DAM systems are being used.  If something generates a signal, it could have a value in a particular context.

Simply put, if any entity (at all) is changed, that fact needs to be captured.  Seemingly unimportant details like the delay between key presses, ‘hot spot’ areas of the screen that the user interacts with, and more obvious examples like the speed with which some categories of asset get catalogued compared with others, all yield potential insights.  When I say ‘everything’, I mean exactly that.
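To illustrate the level of granularity I have in mind, a capture-everything interaction record might look something like the sketch below.  The field names are illustrative assumptions on my part, not a proposed standard.

```python
# Illustrative sketch of a 'capture everything' interaction record.
# Field names are assumptions, not a standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, Optional
import uuid

@dataclass(frozen=True)
class InteractionEvent:
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    occurred_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    user_id: Optional[str] = None
    session_id: Optional[str] = None
    entity_type: str = ""            # e.g. "asset", "keyword", "collection"
    entity_id: Optional[str] = None
    action: str = ""                 # e.g. "keypress", "click", "catalogue"
    # Low-level signals that seem unimportant now but may become valuable later:
    # inter-keystroke delays, screen coordinates, time-to-complete, etc.
    signals: Dict[str, Any] = field(default_factory=dict)

# Example: a cataloguer pausing 1.8 seconds between keystrokes while tagging asset A42.
event = InteractionEvent(
    user_id="u17",
    entity_type="asset",
    entity_id="A42",
    action="keypress",
    signals={"key_delay_ms": 1800, "field": "keywords", "screen_x": 640, "screen_y": 212},
)
```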

At this stage, DAM system developers and engineers will typically complain about excessive data storage and ‘bloated databases’ etc.  Cue lots of groaning about ‘do we really need to store all this stuff’ or ‘surely we can delete x, y or z?’  The bottom line is this: unlike the human subconscious, everything in an IT system is lost forever unless you explicitly specify that it should be retained.  The more data that is not captured, the more lost opportunities there are to glean inferences and predictive analytics which can drive more effective AI or ML solutions.  There is no way you can accurately predict when any of the data you just dispatched into the digital black hole will become valuable.

One of the fundamental economic reasons why data is not a commodity is that its marginal value increases the more of it you have.  Analyse the behaviour of mega-vendors like Google and Facebook: they capture everything and warehouse the data whether it has a currently known value or not.  The depth and the scale of the dataset increases its value exponentially (and the commercial valuation of their businesses, by proxy).  The potential power this affords them is only just being grasped by the wider world.  This is why user volume is considered a key metric when assessing the value of a technology business.  The more users, the greater the opportunities to collect data about them and re-contextualise it (i.e. create value via metadata).

Transactions: The Atomic Unit of Value in DAM Systems

This is another area that DAM vendors are far too blasé about.  I recommend to my clients that they choose vendors who both keep a detailed audit trail of everything that has happened in their DAM and provide full and open access to it.  When I ask vendors to see their audit capabilities, what tends to get shown is frequently a mishmash of reporting tools which often look like they have been haphazardly assembled and then post-rationalised to appear like a credible asset analytics facility.

Any serious DAM system these days should have an audit trail which administrators can review at the transaction level if they want to.  This shouldn’t be a system/database report; it needs to reveal (as far as is humanly possible) every interaction users have with the DAM.  In theory, you should be able to play the audit trail back like a video recording of everything that happened on the DAM system.  I have written an in-depth technical article about this subject for DAM News which I recommend DAM developers peruse if they want to understand what I mean from a lower-level perspective.

Each transaction contains the raw materials to observe and model what users are doing on the DAM system and to make inferences and predictions about their behaviour.  The audit trail is the DAM equivalent of the human subconscious: the more granular and detailed it is, the more precise and accurate the insights which can be gleaned.  Interaction data stored as transactions in a log that never gets deleted, and which goes right back to whenever the system was first switched on, is the base unit of value inside DAM systems.  Even the digital assets themselves are simply a composite of a series of transactions and (potentially) some intrinsic data like a bitmap image.  The action is in the transactions, both literally and metaphorically.
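As a minimal sketch of the ‘replay’ idea, and of an asset being a composite of its transactions, consider folding an append-only log back into current metadata state.  The transaction shapes below are assumptions for illustration only.

```python
# Illustrative sketch: an asset's current metadata is just the result of
# replaying its transactions in order. Transaction shapes are assumptions.

def replay_asset(transactions, asset_id):
    """Rebuild an asset's metadata state from an append-only transaction log."""
    state = {"asset_id": asset_id, "keywords": set(), "fields": {}}
    for tx in transactions:
        if tx["asset_id"] != asset_id:
            continue
        if tx["type"] == "keyword_added":
            state["keywords"].add(tx["keyword"])
        elif tx["type"] == "keyword_removed":
            state["keywords"].discard(tx["keyword"])
        elif tx["type"] == "field_set":
            state["fields"][tx["field"]] = tx["value"]
    return state

log = [
    {"asset_id": "A42", "type": "field_set", "field": "title", "value": "Head office"},
    {"asset_id": "A42", "type": "keyword_added", "keyword": "london"},
    {"asset_id": "A42", "type": "keyword_added", "keyword": "interior"},
    {"asset_id": "A42", "type": "keyword_removed", "keyword": "interior"},
]
print(replay_asset(log, "A42"))
# {'asset_id': 'A42', 'keywords': {'london'}, 'fields': {'title': 'Head office'}}
```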

DAM Vendors Currently Only Have Bit Parts in The AI/ML Story

While Machine Learning is frequently mentioned in vendor marketing materials, vendors rarely implement any form of it themselves.  The reason is that getting useful results from ML necessitates custom development work, and frequently their core architecture was never built with this kind of use case in mind.

Most DAM vendors struggle just to keep their platforms current due to all the functionality multiple clients have asked them to add over the years.  As such, they will typically resist any attempt to take on custom implementation work.  There are practical reasons for this policy; for example, they may fear ending up with multiple editions of their base platform.  Even if they can somehow avoid that outcome, however, there is the support burden of having yet another moving part to maintain.

With a very small number of exceptions, DAM platforms lack true architectural scalability.  The pain of dealing with this is not something private equity and venture capital investors are willing to subsidise, and the unfunded vendors usually lack the cash and/or human resources to do it as comprehensively as they want or need to.  Privately (and very occasionally publicly) many will admit this.

At the recent Henry Stewart DAM conference in New York, I engaged in a public discussion with a vendor about the fact that the advanced forms of AI/ML which generate useful results require custom development work, and that (for the most part) vendors are not keen to get involved as a result.  Due to a lack of time made available for questions, I was not able to continue this discussion, but somehow, somewhere, it needs to happen.  Without a resolution to this problem, DAM vendors will be side-lined by AI/ML and their role will become little more than that of resellers of someone else’s technology.

How DAM Vendors Can Write Themselves Back Into The Plot

There are two approaches which could alleviate the issue I have just described.

The first is for vendors to focus on re-architecting their solutions in a more transaction-oriented fashion with everything routed through their APIs: what you might call ‘Transactional API First’.  The core architecture of most DAM platforms is quite often a secret mess that many vendors currently have ongoing, long-term initiatives to try to resolve.  Contrary to what many would expect, some of the more recent entrants have even worse problems than older vendors because they have taken an excessive number of shortcuts to rapidly gain functional equivalence with their peers.  There is also a slew of legacy DAMs where the architecture has been abandoned because management have taken the decision to maximise profits from existing clients in the form of support and licence revenues rather than innovating, due to the cost and risk involved.  Quite a few mid-market DAM firms are positioned between these two extremes, and the owners of those operations are now dealing with the albatross they have created for themselves over the last 10-15 years.
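For illustration, a ‘Transactional API First’ arrangement might look something like the following sketch, where every write, whether it originates from the vendor’s own UI or from an integration, passes through one API layer that appends a transaction before any state changes.  The function names and flat-file storage are my own simplifications, not any vendor’s actual design.

```python
# Illustrative sketch of 'Transactional API First': every write goes through
# one API layer that records a transaction first. Names and the append-only
# JSONL file are simplifications for demonstration.

import json
from datetime import datetime, timezone

TRANSACTION_LOG = "transactions.jsonl"   # append-only, never pruned

def record_transaction(actor, action, entity_type, entity_id, payload):
    tx = {
        "at": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "entity_type": entity_type,
        "entity_id": entity_id,
        "payload": payload,
    }
    with open(TRANSACTION_LOG, "a") as log:
        log.write(json.dumps(tx) + "\n")
    return tx

def api_add_keyword(actor, asset_id, keyword):
    """Public API entry point: the vendor's own UI is just another client."""
    tx = record_transaction(actor, "keyword_added", "asset", asset_id,
                            {"keyword": keyword})
    # ...apply the change to the asset store here, derived from the transaction...
    return tx
```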

The second approach is for DAM vendors to have far more developed partner networks than most have right now.  As described, DAM vendors’ current relationship with the AI and ML toolsets they use in their platforms is one of a reseller or channel partner.  They add very little in terms of value for the end user.  Instead, they need to be operating at a far lower level in the AI/ML application stack by generating and managing the transaction log upon which their own partners can develop custom solutions (via their API).
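To show what this might mean for a partner, the sketch below polls a hypothetical transaction feed exposed via a vendor API and hands each transaction to the partner’s own processing.  The endpoint, parameters and response format are invented for illustration; no real vendor API is implied.

```python
# Hypothetical partner-side consumer of a vendor's transaction feed.
# The endpoint and response shape are assumptions, not a real API.

import requests

def fetch_transactions(base_url, api_key, since_id=0):
    """Pull transactions newer than since_id from an assumed paginated feed."""
    response = requests.get(
        f"{base_url}/api/transactions",
        params={"since_id": since_id},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()  # assumed to be a list of transaction dicts

def enrich_from_feed(base_url, api_key, since_id=0):
    """The partner's value-add would live here, e.g. feeding cataloguing
    transactions into their own ML model and pushing suggested keywords
    back through the same API."""
    for tx in fetch_transactions(base_url, api_key, since_id):
        since_id = max(since_id, tx.get("id", since_id))
        # process_transaction(tx)  # placeholder for the partner's own logic
    return since_id
```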

The current image recognition tools are like a pair of eyes that is disconnected from a brain.  They will have some utility in helping to rationalise and refine the transactional data being collected by DAM systems, but they should be added afterwards, rather than before.  The fact that they are not hints at over-reductive and linear thinking being applied to this problem by DAM software engineers.  I talk to quite a few (at all different levels of seniority) and I believe many would recognise this flaw in their approach if they were not preoccupied with numerous other more pressing tasks that take away the time they have available for anything more conceptual in nature.

Many of the financial and human resource constraints could be avoided, leaving more time for innovation, by having a proper partner ecosystem.  Vendors need to accept that their role is to provide a transactional infrastructure onto which third parties can add value.  This means they will be required to give up both revenue and some of the opportunity to work with technologies that are considered ‘cool’ or fashionable.  For that trade-off, however, they will also be embedded at a deep and fundamental level into the digital asset supply chains of their customers, with all the opportunities that presents for sustainable long-term innovation and growing their businesses as a consequence.  For those who have an interest in this subject, I recommend reading my recent interview with David Diamond, who offers some very useful advice based on actual experience of dealing with DAM vendor partner networks.

Conclusion

“We can determine any outcome if we know all the inputs and their influences.”

Laplace, 1779

“Uncertainty cannot be predicted by any probabilistic method.”

Keynes and Knight, 1921

I contend that the methods described in this article offer greater potential than the primitive and ham-fisted AI techniques currently in use by DAM systems; however, they still won’t ever fully replace human intelligence.  Human subconscious and conscious memory are analogue in nature, i.e. we continuously record activity without discernible pauses.  Contrast this with packets of digital data, which are (by definition) discrete.  The transaction log I described earlier depends not only on a systems designer deciding to record something, but also on them applying a value judgement to determine where one transaction stops and another starts.

AI and ML tools are developed by human beings who have incomplete models of the real world (or ‘belief systems’ to use my earlier comparison).  Software marketers are keen to anthropomorphise AI technologies as though they are mysterious, chrome-coloured magical entities, but the reality is they are just inanimate computer programs that cannot be tested properly because the range of available inputs can never be fully enumerated.  The more sophisticated the technology gets, the more human-like it will have to become, with the corresponding reduction in its reliability that is likely to result.

I still hold the view that the nature of the automated metadata problem is far more demanding to solve than most technologists either currently understand or are prepared to admit.  The sell-side interests in the DAM industry have a chequered history of underestimating the complexity of Digital Asset Management as an activity, while also overestimating the value of their own contribution to the process.  With that said, there is an opportunity to make some incremental improvements that move towards this objective and for that reason, it is still worth investigating what possibilities are on offer.  My objective with this article is to provoke discussion and consideration of some alternative (and I would argue, superior) methods than those that I see employed in DAM solutions currently.
