
Finding Signs Of Life In DAM: Diagnosing What Has Gone Wrong


This feature article has been contributed by Ralph Windsor, editor of DAM News.


Recently, among some who have been involved in the DAM industry for a long period of time, there has been what can only be termed despair at the lack of progress the market has made when measured against other enterprise software segments.  Some are saying the DAM industry is ‘dead’ and will never achieve the potential which has been claimed for it.  I believe this proposition is incorrect; however, I can appreciate why people might arrive at that conclusion.  In this piece, I will offer a diagnosis that I think explains the problem; in the follow-up, I will outline steps which could rectify it.

When these kinds of topics come up for discussion, they invite commentators to consider wide-ranging causes and ‘big picture’ considerations.  Since an entire market is under discussion (composed of many hundreds of suppliers) the expectation is that some deeper issues must be at work.  There are elements of truth to that analysis, but the danger of examining the causes in those terms is that they disguise problems which are actually fairly straightforward to explain (even if they are not easy to solve).  That is what I intend to do here, using case studies and experiences acquired from real DAM implementation exercises where my opinion has been sought by clients; as such, my analysis is based on experience, not conjecture.

My considered opinion (based on more than twenty years working in this field) is that DAM has failed to capture the same level of interest as some other enterprise technologies because the biggest DAM-related problem faced by users has never been properly addressed.  The principal reason why organisations commence DAM initiatives is to allow users to find digital assets more easily (and historically this has meant content-related assets).  Many organisations realise they have far more digital media than they ever did before and buy into the necessity to manage it because of all the side-effects that can occur if it is allowed to become a free-for-all.  As such, the top priority for all DAM strategies is the need to isolate a specific set of digital assets that meet a given set of criteria; it is not searching for assets that underpins DAM, but having sufficient confidence that you will be able to find them again.  A lot of time and effort has been invested in technology to index textual data about digital assets, far less in the ‘finding’ problem (which is the one users really care about).  To use a well-known sales analogy, people don’t buy drills, they buy something which makes holes; with DAM, the key requirement is something that will allow users to find their assets.  This is not to say that many other DAM capabilities, like renditions, re-purposing etc., are unimportant, but if you can’t find suitable material to start with, everything else becomes meaningless.

In a recent article for DAM News about DAM innovation, Martin Wilson of Assetbank wrote the following:

“Tagging assets is a boring, labour-intensive manual process, crying out for a better solution. The amount of time it takes to enter metadata in order to make assets findable is a massive pain point for users. Large organisations spend hundreds of person-hours on this every week, and this hasn’t changed since the early days. An innovative solution is much needed.” [Read More]

In the case of corporate repositories of digital assets, cataloguing (or ‘tagging’, call it what you will) is not only ‘boring and labour-intensive’, it is also very hard to do properly, because it is far more intellectually demanding than many people realise.  The reason is that what we all glibly summarise as ‘metadata’ is a vast repository of all kinds of domain knowledge, cultural associations, experiences and shared (or not) history, which changes not only across different organisations but across people also.  Metadata is what makes digital assets meaningful to human beings, in all their many different forms.  As a result, identifying metadata which allows multiple prospective asset users to find what they are looking for is far harder to do effectively than it initially appears.  As a type of human activity it has similarities with literary or editorial work (and a lot of otherwise intelligent people find those kinds of tasks quite difficult too).  This fact is routinely ignored or discounted by DAM users, vendors, consultants and analysts, i.e. everyone currently connected with DAM.

Based on my experience developing, selling, supporting, advising on and using DAM systems (whether ones I was responsible for, or those provided by someone else), the single biggest issue that restricts adoption, above all others, is DAM users not being able to find things: the ‘zero relevant results’ problem.  To illustrate this, I will offer two examples of organisations I have worked with in the past:

One of my clients was a subsidiary of a museum and had responsibility for cataloguing a collection of media about artifacts, but it was DAM, not collections management, so the emphasis was still on media rather than objects per se.  When I went through their requirements with them, it transpired the client had already been carrying out cataloguing of their digital assets for some time, but they had used a very rudimentary and low-tech method.  An MS Word document had been created and on each page they pasted a copy of each photo, with a table containing the metadata alongside it.  They had designed their own metadata schema and the cataloguing was both consistently applied and detailed.  The ‘records’ (pages) were among some of the best quality source material I have seen.  Even using Word’s basic text search, finding asset records was not their main challenge; instead, it was the restrictions imposed by using a desktop application designed for writing documents as a faux-DAM solution, for example, the fact that Word files only allow one person to edit the same document at a time, or the very large size the file had grown to causing Word to crash randomly.  These problems are ones ‘proper’ DAM systems should never have, so the replacement solution fully met its project objectives and delivered an enviable ROI.  What should be obvious from this short case study is that the DAM system utilised could not claim credit for the ROI; it was the end-users themselves who did the ‘heavy lifting’, as tech people like to say.

Taking another, opposite example, I worked with a large commercial client who had spent a six-figure sum on a DAM solution: not just on product licences but on professional services to customise the solution, fault-tolerant external hosting, CDNs etc., so it could be accessed globally with minimal download times.  A lot of effort went into the project planning for the implementation to get it all to work as per the requirements spec.  The DAM was initially adopted, but user numbers fell off a few months after launch and very little in the way of new material was being uploaded.  A familiar catch-22 scenario arose: new assets were not ingested, so users could not find anything recent and did not consider it worth their while uploading anything themselves either.  After some investigation, it was revealed that a small team of people had previously managed an analogue and digital media library (where requests were manually processed), but they had all been made redundant on the basis that the DAM system would replace them.  The introduction of DAM had a negative ROI because there was no one who could find anything and media became even more dispersed than it had been previously.  To start to get this DAM back on track, metadata specialists needed to be hired, as well as in-house subject-matter experts to brief them.  Since the latter were all full-time members of staff, there was a further cost to the business in the form of lost time which would otherwise have been devoted to their normal day-jobs.  In contrast to the previous example, a substantial up-front investment was made in the technology, but it had no real effect.  Further, the ‘human software’ which held the keys to realising a more favourable ROI had all been sacked before any assessment had been carried out of whether there was a benefit to retaining them or not.
I later discovered that the staff cost saving was included in the business-case section of the project plan as one of the main ROI factors, so the organisation had shot themselves in the foot before they even started.  The DAM vendor and their solution were no more responsible for this state of affairs than in the previous example.

The high-quality metadata which is essential to obtaining ROI from DAM is rarely forthcoming.  Apart from some specialist niches where the end-users are strongly motivated to take an interest in metadata because of background, training or because the commercial success of the business is predicated on digital assets getting found (e.g. e-commerce operations or stock media libraries), most other ‘normal’ DAM users lack the commitment required to carry out cataloguing tasks adequately.  All the standard methods have one or more drawbacks:

  • If you ask regular employees to tag/catalogue assets, many quickly get bored with the task and skimp on it, either by entering the absolute minimum or by misusing the batch entry tools so that the same categories etc. are applied to everything, whether relevant or not.  Sometimes a minority of end-users will understand the need to carry this task out diligently, but because only they do it, their assets get found more often than all the others.  As a result, the repository is skewed towards whatever material they have supplied (cue complaints from other users about only being able to find x type of assets, where x is the material the people who did the job properly were responsible for).
  • If you assign juniors and interns to do this, they lack enough domain expertise to include project, product or business-related metadata of the kind that is essential for assets to get found later when regular staff carry out searches.
  • If you outsource it to offshore providers they too lack the knowledge to do the task satisfactorily and there can be further risks introduced such as inadequate briefing or poor quality management by either client or supplier.
  • If you hire keyworders or metadata professionals, the cost is generally prohibitive for all but a small selection of key assets, and it is still necessary to find staff to brief them and to quality-control their work.

One of the reasons why this might have become the case was hinted at in my reference to ‘specialist niches’ above.  There is a direct line of descent from the photo management applications first introduced 20-25 years ago to modern DAM systems: the metaphors, UI conventions etc. are still very similar.  The typical end-users of those products were stock media libraries who needed to organise images to sell them (either through an on-line catalogue or based on image requests from picture buyers).  The delivery model has been copied, but the circumstances are different because most corporate DAM users have no interest in selling their digital assets; the use-case is entirely about improving productivity.  Because there are no ‘customers’ to satisfy, there is far less motivation to catalogue digital assets properly.  A highly significant piece of the puzzle is missing from DAM because this model was copied without any thought given to the fact that the target users are no longer the same, and the tools are unfit for their current purpose as a result.

Most DAM vendors don’t have any proper answers to this problem because, while they usually know a lot about software, their understanding of the digital asset management aspect of the problem is unsophisticated and over-simplified.  Whenever I engage DAM vendors about the metadata issue, with a very small number of exceptions, they quickly want to come off the subject and move towards something they regard as more interesting (image recognition has become the latest shiny new distraction of choice, as I discuss below).  This is one of the main problems DAM (as a discipline) has to deal with: virtually no one producing the tools wants to tackle the biggest problem faced, because they don’t know how to, nor do they even want to acknowledge that it exists.  This attitude has become ingrained in vendors over time, and it is why you get the ‘toys for the boys’ fetishisation of ‘cool’, gimmicky new features and so much mindless replication of functionality from one DAM software product brand to the next.

One further option alluded to above is to hire dedicated digital asset managers who have an executive responsibility for digital asset metadata.  As described in my two case studies, investing in human resources for DAM delivers a superior ROI to software-only options (and enables organisations to get more value from the technology too).  With that said, there are simply too many content digital assets in circulation now, and the volumes will increase exponentially over time.  At some point, it will be necessary to delegate metadata entry either to non-specialist human beings or to automation strategies.  In the medium or long term, the most that can be expected from human digital asset managers is an executive role; they simply won’t have the time to get hands-on with individual digital assets very much any more.

At this point, I imagine that any technologists who are still reading will want to mention AI (Artificial Intelligence).  In the context of DAM, what they usually mean is image recognition.  While these are undoubtedly highly complex systems to code, the mathematical models they rely on are still too primitive to derive complex metadata of the kind necessary for most DAM use-cases (see my earlier description of metadata to understand why this problem is intellectually demanding).  This is why most recognition systems struggle to generate more than one or two genuinely useful keywords (and fewer than that if the subject matter is in any way unconventional or requires domain expertise).  With some notable exceptions, most of the widely publicised material I have read about AI has been written by tech marketing people who do not understand it well enough to offer informed comment.  There is a lot of implicit acceptance that either the technology already works as well as the people selling it want you to believe, or that the ‘invisible hand’ of progress will miraculously improve it over time (“they will make it better” – whoever ‘they’ are).  The only invisible hand at work here is that of private equity interests who have placed bets on demand for AI remaining robust so they can withdraw their stakes at a significant profit before the current cycle completes once more.  This is the reason AI is getting promoted so heavily currently, not because it is necessarily yielding reliable results.  I do think AI could help address some of the metadata challenges I have described, but the odds are stacked against it while the commodity image recognition tools currently in vogue with DAM vendors are used.
A re-evaluation is needed so that the agenda of DAM users can be prioritised, rather than them being exploited as a cheap source of data equity which the image recognition tool vendors can leverage for their own benefit.

I have made a fairly extensive critique of all the current options on offer for dealing with the metadata quality challenge, which I believe is the reason DAM’s progress has been less successful than it should be, so what else can be done?  The answer lies in two interdependent concepts, neither of which has received anywhere near sufficient attention to date: digital asset supply chains and DAM interoperability.  The digital asset supply chain in most DAM operations (i.e. DAM users) is ad-hoc, disorganised and lacking in standards or conventions when compared with those you might encounter in industries where supply chain management is afforded far greater prominence (e.g. manufacturing, logistics etc.).  The necessity for DAM to understand the relevance of supply chains and acknowledge the benefits they offer is something we have been talking about on DAM News for the last four years, although I am aware a number of other people have had similar ideas too.  At present, when users are asked to provide metadata for digital assets, the default method employed by DAM systems is to show a form with what is essentially a questionnaire about an asset.  There are variations on the theme, like ‘wizards’ and sometimes batch entry tools using spreadsheets or templates etc., but if you need to catalogue 20-30 assets, you can expect to see this interface roughly the same number of times (especially if the assets all differ from each other in some way).  This is highly inefficient, time-consuming and very boring for most users – which is why they don’t like doing it.  What if, however, it was possible to gain some visibility into how those same 20-30 assets got to the stage where they were considered candidates to hold on the DAM?  This is where the digital asset supply chain could provide some answers, and it will be driven by standards for interoperability that make the implementation progressively easier over time.
I do not pretend this is going to be quick or simple to achieve, nor will it remove all the pain-points, but there is clearly a continuous improvement opportunity which deserves to be explored in more depth and is currently not being given the attention it deserves.
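To make the supply chain idea a little more concrete, here is a minimal sketch (in Python, with an entirely hypothetical manifest format and field names) of how batch-level metadata captured upstream could pre-populate catalogue records, so users are only asked for the asset-specific remainder rather than a full questionnaire per asset:

```python
import json
from pathlib import Path

# Hypothetical batch manifest travelling with the assets through the
# supply chain (e.g. exported by the commissioning or production stage).
MANIFEST = json.loads("""
{
  "project": "Spring Campaign 2017",
  "client": "Acme Widgets",
  "shoot_date": "2017-03-14",
  "photographer": "J. Example",
  "usage_rights": "internal only"
}
""")

def prefill_record(asset_path, manifest):
    """Build a draft catalogue record: shared supply-chain fields are
    pre-filled, leaving only asset-specific fields for a human to complete."""
    record = dict(manifest)                     # inherit batch-level metadata
    record["filename"] = Path(asset_path).name  # asset-specific, automatic
    record["description"] = ""                  # asset-specific, needs a human
    record["keywords"] = []                     # asset-specific, needs a human
    return record

batch = ["shoot/IMG_0001.jpg", "shoot/IMG_0002.jpg"]
drafts = [prefill_record(p, MANIFEST) for p in batch]
# Six of each draft's eight fields arrive pre-populated; the cataloguer
# only supplies the description and keywords.
```

The point of the sketch is the ratio: the more provenance that travels with a batch, the less each user is asked per asset, which is exactly where interoperability standards would need to do their work.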

In the second part of this article, I will discuss how I think digital asset supply chains could be put to use to solve the metadata/findability challenge and consider some methods for delivering interoperability in a less painful manner than has been proposed so far.  I will also evaluate some further options for safely automating the process and consider where AI techniques could have a role to play, even with the limitations and reliability issues demonstrated by the current technologies.

Comments

Deb Fanslow April 1, 2017 at 1:49 pm

Excellent points, Ralph. I also think a big piece of the “no one wants to enter metadata” situation is the lack of understanding and accountability from leadership within organizations on the value of information management – including, as you mention, metadata and the digital supply chain as equally critical to business as the physical supply chain.

I agree that DAM professionals will never be able to keep up with the amount of assets that need to be catalogued; it needs to be an organization-wide effort, with buy-in from executives, and accountability across the enterprise. And yes, AI will help alleviate some of the burden as it matures, but it’s just a tool. And every tool requires human oversight, maintenance, and resources – search engines need to be tailored, algorithms need to be trained, and vocabularies will still need to be maintained.

Despite the fact that many organizations do understand the value of digital marketing, they haven’t seemed to connect the dots with how data and metadata originate and flow throughout and beyond their organizations to support critical business decisions. Part of the problem is also the mindset that there’s never enough time, money, or resources – so the dictate is to do the minimum now to get things out the door, then worry about the ramifications later. From my experience, it seems that “there’s never enough time to get it right, but there’s always time to do it over.” Planning, creating policies and procedures, delivering adequate training, governing, and optimizing processes and workflows along the digital supply chain always takes a back seat to the short-sighted goal of meeting immediate deadlines.

DAM is not just the management of digital assets – it’s how effective management of those digital assets enables the larger digital supply chain to deliver information in support of critical business decisions. Leadership wouldn’t dare make decisions without accurate financial data, yet the data that describes their content is deemed of little value. DAM needs to tie in to the metrics that matter to leadership, and, as you outlined, use the supply chain metaphor that they already understand. Perhaps then the value of data and metadata (the “I” in “IT”) will become a critical topic within boardrooms, and cultures will shift to support accountability for creating quality data and metadata to support organizations’ information ecosystems.

Companies who are in the business of creating content have of course had to figure this out early on. At some point, everyone else will need to realize that we’re all content creators and publishers now!

Mark Milsten April 2, 2017 at 6:11 am

Hi Ralph, thank you for this incredibly well written and on-target DAM state of affairs. Like many in this industry, we too find the current disconnect between where DAM lies and where we all imagine it should be disappointing.

Anyone who has invested time, sweat and brain cells expected by now to see DAM universally understood and occupying a place within the software ecosphere closer to the center of gravity, i.e., the C-suite, than the digital equivalent of the asteroid belt.

You wrote, “Whenever I engage DAM vendors about the metadata issue, with a very small number of exceptions, they quickly want to come off the subject and move towards something they regard as more interesting (although image recognition has become the latest shiny new distraction of choice, as I discuss below).”

I too have had the same experience. Over the past years I have tried to engage DAM vendors at every opportunity with the aim of offering up a host of solutions to this 800-pound-gorilla of a pain point, only to be ignored, rebuffed or just plain stared at with incomprehension.

All of us at Microstocksolutions LLC naturally saw ourselves as “that” missing link.

We assumed that it would be obvious to DAM vendors that there was a lot to be gained by hearing about how we have for a decade been the primary provider of ingestion, curation and metadata solutions for the stock media industry.

On any given day, we manage the metadata on more than 10,000 visual assets.

If anyone had the in-house expertise to help DAM clients and DAM vendors understand and make the connections between their assets and ROI, it was us.

The continuously repeated mantra of “Because there are no ‘customers’ to satisfy, there is far less of a motivation to catalogue digital assets properly” simply doesn’t hold water.

Everyone on planet Earth who hasn’t been living under a rock or in a cave for the past two decades now has baseline expectations of digital experiences, many of which have been shaped by Google.

Providing a Google-like metadata experience is what we do, and what quite a few well-known metadata industry pros also do.

Artificial intelligence (AI), as you rightly pointed out, is not even close to solving any of these issues. Think of AI as the office intern. Young. Not very knowledgeable about exactly what you or your needs are, and in need of full-time management and oversight.

And while we have not yet given up, we’ve had to realize that if the DAM vendors weren’t going to accept advice or be willing to learn how to better serve, then we were going to take our message directly to the corporate world.

Spencer Harris April 5, 2017 at 2:21 pm

This is a great article and really on point about the underlying issue for DAM adoption amongst users. I have spent the past 18 months rolling out a solution and trying to get our team to use the system. To this day adoption is still really low, and the number one reason they give for not adopting is: “I can’t find what I need and the old way is faster” (the old way being hopping on the file server and manually searching through folders).

I have noticed from observation that their complaint is valid. Time after time I have watched our users type in a partial file name or project name to search for what they are looking for, knowing ahead of time there are fewer than 20 options out there, yet the search results are in the thousands and in some cases tens of thousands.

Part of what I have noticed is that unless otherwise specified, DAM searches always start broad and will search almost every metadata field with the goal of returning as many results as possible. This ‘more is better’ strategy is not very effective. It overwhelms the users.

In my eyes there are two main users of DAMs: internal staff, and everyone else external to the company. Right now most DAMs are laid out and behave in a way to make searching by external users as successful as possible by, again, being as broad as possible in order to give you as many choices as possible.

The reality is that most DAMs are implemented primarily for internal staff use. Use by any external users is a side ‘benefit’. What this means is that the internal user thinks and searches differently, mostly by using institutional terms and other identifiers such as job or project names and numbers. Yet the search interface and organization of assets are not easily customizable to this approach. DAMs need to have two different interfaces, with two different ways of searching, organizing and accessing the assets.

The internal interface should never default to a broad or general search, but to the most common way internal users think to search. I have worked for both an internal corporate marketing department and an ad agency with dozens of clients. In both cases the internal staff most commonly want to search by a job/project name and/or a job number. If we could automate metadata tagging for even just these two popular metadata terms, it would improve searching and user adoption in volumes I can’t even quantify.
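As an illustration of how little automation those two terms might need, here is a sketch that pulls a job number and project name out of agency-style filenames (the naming convention shown is purely hypothetical and would need adapting to each organization's actual scheme):

```python
import re

# Illustrative convention: "<jobnumber>_<project-name>_<rest>.ext",
# e.g. "10482_acme-rebrand_logo-final.psd". Real naming schemes vary
# per organization and the pattern would need adjusting to match.
JOB_PATTERN = re.compile(r"^(?P<job>\d{4,6})_(?P<project>[a-z0-9-]+)_")

def extract_job_metadata(filename):
    """Return {'job_number', 'project'} if the filename follows the
    convention, else an empty dict (leaving the fields for manual entry)."""
    m = JOB_PATTERN.match(filename.lower())
    if not m:
        return {}
    return {"job_number": m.group("job"),
            "project": m.group("project").replace("-", " ")}
```

Run at ingest time, something this simple could populate the two fields internal users actually search by, with no AI involved at all.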

Likewise for other types of organizations: what are the top five ways users look for assets? Those need to be the standard search options. In the event someone really wants to just ‘wander’ through the DAM like they are casually window shopping at the mall, then there can be an option for that type of search. But in my experience users go to the DAM looking for specific content, not to wander.

The external-facing interface can keep with the Google-esque style of searching. Tagging for that user group is where the bigger effort can be spent, depending how important the company deems it, as in the case of stock agencies.

And to the point of AI for keyword generation: when it comes to generating keywords for internal users' benefit, it will never really know what institutional information needs to be added to an asset. Automatic keywording based on any sort of organization the assets have within the DAM (such as folder structure or collection) is going to be the closest thing to accurate AI.

My biggest point in all of this, if it isn’t clear by now: DAMs need to be designed with internal users, and their success in finding their assets, as the primary focus. Because right now, they are designed more for the external user.

I spent the last year in demos of over two dozen different solutions. I didn’t walk away with any one particular vendor in mind that I felt solved this problem. There were a few where you could spend a lot of time (and money) customizing the interface to be more internal-user friendly, but nothing that was ready out of the box.

I think the first DAM vendor to adopt this mindset and design their interface accordingly will have the biggest competitive advantage over the rest.
