Finding Signs Of Life In DAM: Diagnosing What Has Gone Wrong
This feature article has been contributed by Ralph Windsor, editor of DAM News.
Recently, among some who have been involved in the DAM industry for a long time, there has been what can only be termed despair at the lack of progress the market has made when measured against other enterprise software segments. Some are saying the DAM industry is ‘dead’ and will never achieve the potential which has been claimed for it. I believe this proposition is incorrect; however, I can appreciate why people might arrive at that conclusion. In this piece, I will offer a diagnosis that I think explains the problem; in the follow-up, I will outline steps which could rectify it.
When these kinds of topics come up for discussion, they invite commentators to consider wide-ranging causes and ‘big picture’ considerations. Since an entire market is under discussion (composed of many hundreds of suppliers), the expectation is that some deeper issues must be at work. There are elements of truth to that analysis, but the danger of examining the causes in those terms is that they disguise problems which are actually fairly straightforward to explain (even if they are not easy to solve). I intend to do that here, using case studies and experiences acquired from real DAM implementation exercises where my opinion has been sought by clients; as such, my analysis is based on experience, not conjecture.
My considered opinion (based on more than twenty years working in this field) is that DAM has failed to capture the same level of interest as some other enterprise technologies because the biggest DAM-related problem faced by users has never been properly addressed. The principal reason organisations commence DAM initiatives is to allow users to find digital assets more easily (and historically this has meant content-related assets). Many organisations realise they have far more digital media than they ever did before and buy into the necessity to manage it because of all the side-effects that can occur if it is allowed to become a free-for-all. As such, the top priority for all DAM strategies is the need to isolate a specific set of digital assets that meet a given set of criteria; it is not searching for assets that underpins DAM, but having sufficient confidence that you will be able to find them again. A lot of time and effort has been invested in technology to index textual data about digital assets; far less has gone into the ‘finding’ problem (which is the one users really care about). To use a well-known sales analogy, people don’t buy drills, they buy something which makes holes, and with DAM the key requirement is something that will allow them to find their assets. This is not to say that other DAM capabilities like renditions, re-purposing etc. are unimportant, but if you can’t find suitable material to start with, everything else becomes meaningless.
In a recent article for DAM News about DAM innovation, Martin Wilson of Assetbank wrote the following:
“Tagging assets is a boring, labour-intensive manual process, crying out for a better solution. The amount of time it takes to enter metadata in order to make assets findable is a massive pain point for users. Large organisations spend hundreds of person-hours on this every week, and this hasn’t changed since the early days. An innovative solution is much needed.” [Read More]
In the case of corporate repositories of digital assets, cataloguing (or ‘tagging’, call it what you will) is not only ‘boring and labour-intensive’, it is also very hard to do properly, because it is far more intellectually demanding than many people realise. The reason is that what we all glibly summarise as ‘metadata’ is a vast repository of all kinds of domain knowledge, cultural associations, experiences and shared (or not) history, which varies not only across organisations but between people too. Metadata is what makes digital assets meaningful to human beings, in all their many different forms. As a result, identifying metadata which allows multiple prospective asset users to find what they are looking for is far harder to do effectively than it initially appears. As a type of human activity it has similarities with literary or editorial work (and a lot of otherwise intelligent people often find those kinds of tasks quite difficult too). This fact is routinely ignored or discounted by DAM users, vendors, consultants and analysts, i.e. everyone currently connected with DAM.
Based on my experience developing, selling, supporting, advising on and using DAM systems (whether ones I was responsible for, or those provided by someone else), the single biggest issue that restricts adoption, above all others, is DAM users not being able to find things: the ‘zero relevant results problem’. To illustrate this, I will offer two examples of organisations I have worked with in the past:
One of my clients was a subsidiary of a museum with responsibility for cataloguing a collection of media about artifacts, but it was DAM, not collections management, so the emphasis was still on media rather than objects, per se. When I went through their requirements with them, it transpired the client had already been cataloguing their digital assets for some time, but using a very rudimentary and low-tech method. An MS Word document had been created and, on each page, they pasted a copy of each photo with a table containing the metadata alongside it. They had designed their own metadata schema and the cataloguing was both consistently applied and detailed. The ‘records’ (pages) were among some of the best quality source material I have seen. Even using Word’s basic text search, finding asset records was not their main challenge; instead, it was the restrictions imposed by using a desktop application designed for writing documents as a faux-DAM solution, for example, the fact that Word files only allow one person to edit the same document at a time, or the very large size the file had grown to causing Word to crash randomly. These are problems ‘proper’ DAM systems should never have, so the resulting solution fully met its project objectives and delivered them an enviable ROI. What should be obvious from this short case study is that the DAM system utilised could not claim credit for the ROI; it was the end-users themselves who did the ‘heavy lifting’, as tech people like to say.
Taking another, opposite example, I worked with a large commercial client who had spent a six-figure sum on a DAM solution: not just on product licences but on professional services to customise the solution, fault-tolerant external hosting, CDNs etc. so it could be accessed globally with minimal download times. A lot of effort went into the project planning for the implementation to get it all to work as per the requirements spec. The DAM was initially adopted, but user numbers fell off a few months after launch and very little new material was being uploaded. A familiar catch-22 scenario arose where new assets were not ingested, so users could not find anything recent and did not consider it worth their while uploading anything themselves either. After some investigation, it was revealed that a small team of people had previously managed an analogue and digital media library (where requests were manually processed), but they had all been made redundant on the basis that the DAM system would replace them. The introduction of DAM had a negative ROI because there was no one who could find anything and media became even more dispersed than it had been previously. To start to get this DAM back on track, metadata specialists needed to be hired, as well as in-house subject-matter experts to brief them. Since the latter were all full-time members of staff, there was a further cost to the business in the form of lost time which would otherwise have been devoted to their normal day-jobs. In contrast to the previous example, a substantial up-front investment was made in the technology, but it had no real effect. Further, the ‘human software’ which held the keys to realising a more favourable ROI had all been sacked before any assessment had been carried out of whether there was a benefit to retaining them or not.
I later discovered that the staff cost saving was included in the business-case section of the project plan as one of the main ROI factors – so the organisation had shot themselves in the foot before they even started. The DAM vendor and their solution were no more responsible for this state of affairs than the previous example.
High-quality metadata is essential to obtaining ROI from DAM, yet it is rarely forthcoming. Apart from some specialist niches where the end users are strongly motivated to take an interest in metadata because of their background or training, or because the commercial success of the business is predicated on digital assets being found (e.g. e-commerce operations or stock media libraries), most other ‘normal’ DAM users lack the commitment required to carry out cataloguing tasks adequately. All the standard methods have one or more drawbacks:
- If you ask regular employees to tag/catalogue assets, many quickly get bored of the task and skimp on it either by entering the absolute minimum, or by misusing the batch entry tools so everything is applied with the same categories etc, whether relevant or not. Sometimes a minority of end users will understand the need to carry this task out diligently, but because only they do it, their assets get found more often than all the others. As a result, the repository is skewed towards whatever material they have supplied (cue complaints from other users about only being able to find x type of assets, where x is the stuff the people who did the job properly were responsible for).
- If you assign juniors and interns to do this, they lack enough domain expertise to include project, product or business-related metadata of the kind that is essential for assets to get found later when regular staff carry out searches.
- If you outsource it to offshore providers they too lack the knowledge to do the task satisfactorily and there can be further risks introduced such as inadequate briefing or poor quality management by either client or supplier.
- If you hire keyworders or metadata professionals, the cost is generally prohibitive for all but a small selection of key assets, and it is still necessary to find staff to brief them and to quality-control their work.
One of the reasons why this might have become the case was hinted at in my reference to ‘specialist niches’ above. There is a direct line of descent from the photo management applications first introduced 20-25 years ago to modern DAM systems: the metaphors, UI conventions etc. are still very similar. The typical end-users of these products were stock media libraries who needed to organise images in order to sell them (either through an on-line catalogue or based on image requests from picture buyers). The delivery model has been copied, but the circumstances are different because most corporate DAM users have no interest in selling their digital assets; the use-case is entirely about improving productivity. Because there are no ‘customers’ to satisfy, there is far less motivation to catalogue digital assets properly. A highly significant piece of the puzzle is missing from DAM because this model was copied without any thought given to the fact that the target users are no longer the same, and the tools are unfit for their current purpose as a result.
Most DAM vendors don’t have any proper answers to this problem because, while they usually know a lot about software, their understanding of the digital asset management aspect of the problem is unsophisticated and over-simplified. Whenever I engage DAM vendors about the metadata issue, with a very small number of exceptions, they quickly want to come off the subject and move towards something they regard as more interesting (image recognition has become the latest shiny new distraction of choice, as I discuss below). This is one of the main problems DAM (as a discipline) has to deal with: virtually no one producing the tools wants to deal with the biggest problem faced, because they don’t know how to, nor do they even want to acknowledge it exists. This attitude has become ingrained in vendors over time and it’s why you get the ‘toys for the boys’ fetishisation of ‘cool’, gimmicky new features and so much mindless replication of functionality from one DAM software product brand to the next.
One further option alluded to above is to hire dedicated digital asset managers who have executive responsibility for digital asset metadata. As described in my two case studies, investing in human resources for DAM yields a superior ROI to software-only options (and enables organisations to get more value from the technology too). With that said, there are simply too many digital content assets in circulation now and the volumes will increase exponentially over time. At some point, it will be necessary either to delegate metadata entry to people who are not DAM specialists or to seek out automation strategies. In the medium to long term, the most that can be expected from human digital asset managers is more of an executive role; they simply won’t have the time to get hands-on with individual digital assets very much anymore.
At this point, I imagine that any technologists who are still reading will want to mention AI (Artificial Intelligence). In the context of DAM, what they usually mean is image recognition. While these are undoubtedly highly complex systems to code, the mathematical models they rely on are still too primitive to handle the sophistication needed to derive complex metadata of the kind necessary for most DAM use-cases (see my earlier description of metadata to understand why this problem is intellectually demanding). This is why most of the various recognition systems struggle to generate more than one or two genuinely useful keywords (and fewer than that if the subject matter is in any way unconventional or requires domain expertise). With some notable exceptions, most of the widely publicised material I have read about AI has been written by tech marketing people who don’t understand it well enough to offer informed comment. There is a lot of implicit acceptance that either the technology already works as well as the people selling it want you to believe, or some idea that the ‘invisible hand’ of progress will miraculously improve it over time (“they will make it better” – whoever ‘they’ are). The only invisible hand at work here is private equity interests who have placed bets on demand for AI remaining robust so they can withdraw their stakes at a significant profit before the current cycle completes once more. This is the reason AI is getting promoted a lot currently, not because it is necessarily yielding reliable results. I do think AI could help address some of the metadata challenges I have described, but the odds are stacked against it using the commodity image recognition tools currently in vogue with DAM vendors. A re-evaluation is needed so the agenda of DAM users can be prioritised, rather than them being exploited as a cheap source of data equity which the image recognition tool vendors can leverage for their own benefit.
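To make the point concrete, here is a deliberately simplified sketch (all data and names are invented for illustration, not output from any real recognition service) of why commodity tagging falls short: generic labels only become useful metadata where they intersect with an organisation's own controlled vocabulary, and that intersection is usually tiny.

```python
# Hypothetical illustration: generic labels a commodity image
# recognition service might return for a product photo, with
# made-up confidence scores.
generic_tags = {
    "person": 0.98, "indoor": 0.95, "table": 0.91, "electronics": 0.84,
    "screen": 0.80, "sitting": 0.77, "laptop": 0.72,
}

# The organisation's own controlled vocabulary: product names, SKUs,
# campaign terms - domain knowledge no generic model has access to.
domain_vocabulary = {
    "laptop": "Product: UltraBook X3",   # the only overlap
    "q3-campaign": "Campaign: Q3 Launch",
    "sku-88412": "SKU: 88412",
}

def useful_keywords(tags, vocabulary, threshold=0.7):
    """Keep only tags that are both confident and meaningful in the
    organisation's own terms."""
    return sorted(
        vocabulary[tag]
        for tag, confidence in tags.items()
        if confidence >= threshold and tag in vocabulary
    )

print(useful_keywords(generic_tags, domain_vocabulary))
# Seven confident generic tags reduce to a single domain keyword.
```

The exact numbers are invented, but the shape of the result matches the experience described above: the bulk of what recognition engines emit is generic scene description, not the project, product or rights metadata that makes assets findable.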
I have made a fairly extensive critique of all the current options on offer for dealing with the metadata quality challenge, which I believe is the reason DAM’s progress has been less successful than it should be, so what else can be done? The answer lies in two interdependent concepts, neither of which has received anywhere near sufficient attention to date: digital asset supply chains and DAM interoperability. The digital asset supply chain in most DAM operations (i.e. DAM users) is ad-hoc, disorganised and lacking in standards or conventions when compared with those you might encounter in industries where supply chain management is afforded far greater prominence (e.g. manufacturing, logistics etc.). The necessity for DAM to understand the relevance of supply chains and acknowledge the benefits they offer is something we have been discussing on DAM News for the last four years, although I am aware a number of other people have had similar ideas too. At present, when users are asked to provide metadata for digital assets, the default method employed by DAM systems is to show a form with what is essentially a questionnaire about an asset. There are variations on the theme, like ‘wizards’ and sometimes batch entry tools using spreadsheets or templates etc., but if you need to catalogue 20-30 assets, you can expect to see this interface roughly the same number of times (especially if the assets all differ from each other in some way). This is highly inefficient, time-consuming and very boring for most users – which is why they don’t like doing it. What if, however, it was possible to gain some visibility into how those same 20-30 assets got to the stage where they were considered candidates to hold on the DAM? This is where the digital asset supply chain could provide some answers, and it will be driven by standards for interoperability that make the implementation progressively easier over time.
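The supply-chain idea can be sketched in a few lines of code. This is a minimal illustration under assumed names (the stages, fields and `prefill` function are all hypothetical, not any existing DAM product's API): rather than presenting users with an empty questionnaire per asset, the metadata already known at each upstream stage of the asset's journey is folded into a pre-filled record, leaving the user only to confirm or correct it.

```python
from dataclasses import dataclass, field

@dataclass
class SupplyChainEvent:
    stage: str       # e.g. "brief", "shoot", "approval"
    metadata: dict   # whatever that stage already knows about the asset

@dataclass
class AssetRecord:
    asset_id: str
    metadata: dict = field(default_factory=dict)

def prefill(asset_id, events):
    """Fold upstream supply-chain events into a candidate record.
    Later stages refine (overwrite) earlier ones, so cataloguing
    starts from accumulated knowledge rather than a blank form."""
    record = AssetRecord(asset_id)
    for event in events:  # events supplied in chronological order
        record.metadata.update(event.metadata)
    return record

# Each stage contributes metadata as a side-effect of work already done.
events = [
    SupplyChainEvent("brief", {"project": "Spring catalogue", "client": "ACME"}),
    SupplyChainEvent("shoot", {"photographer": "J. Doe", "location": "Studio 2"}),
    SupplyChainEvent("approval", {"usage_rights": "internal only"}),
]

record = prefill("IMG-0042", events)
print(record.metadata)  # arrives at the DAM already catalogued
```

The design point is that nobody in this flow ever performed a ‘cataloguing task’: the project name came from the brief, the photographer from the shoot, the rights from the approval step. Interoperability standards are what would let each of those systems pass its fragment along.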
I do not pretend this is going to be quick or simple to achieve, nor will it remove all the pain-points, but there is clearly a continuous improvement opportunity which deserves to be explored in more depth and is currently not being given the attention it deserves.
In the second part of this article, I will discuss how I think digital asset supply chains could be put to use to solve the metadata/findability challenge and consider some methods for delivering interoperability in a less painful manner than has been proposed so far. I will also evaluate some further options for safely automating the process and consider where AI techniques could also have a role to play, even with the limitations and reliability issues demonstrated by the current technologies.