More Fibs That DAM Vendors Sometimes Tell
A few weeks ago I wrote the first article in this occasional series about the fibs that some DAM vendors tell. In it, I discussed the kind of disingenuous statements vendors come out with in relation to roadmaps and their support for features in more general terms. This time, I want to get more hands-on with some implementation-related fallacies you might come across. These are quite in-depth, so for this article I have included only two. They should illustrate why you have to be doggedly persistent when it comes to checking over DAM systems before you agree to buy one.
Our API will do anything. Yes, I’m absolutely certain our DAM is API-First
API stands for Application Programming Interface; in simplified terms, it is a method by which one computer program can control another. If you are not familiar with them, I wrote an introductory article about APIs a couple of years ago. For the majority of DAM users, APIs are absolutely integral to the functioning of their Digital Asset Supply Chains.
Many prospective DAM users think that if an API exists then, ipso facto, anything you can do as a regular human user you can do via the API. Perusing vendor API documentation can sometimes give you the impression that it does indeed do everything that you might need it to, and for basic tasks this is often true. Where things can get a bit more dicey, however, is when it comes to automating some obscure combination of activities. Rights, permissions, complex metadata or newly released features tend to be where APIs most frequently have blind spots that can catch DAM users out. In many cases the vendor’s developers won’t have realised there is an issue themselves, which means they may only implement a missing API call when someone asks for it.
There are two practical ways to deal with this problem. One is to carry out a proof of concept test; the other is to look for an API-First DAM. The former involves using the vendor’s API to carry out some simulated integration tasks and seeing how far you can get with them. The vendor can be asked to put this together, but preferably a neutral third party does it. The test must be representative of what you need the API to do and should use real data. Although this can be a hassle to organise, it does give you some hard facts which can help reveal whether or not the vendor’s API is really as fully featured as you need it to be.
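If you need a starting point for structuring such a proof of concept, the sketch below shows one way of doing it. Every task name and endpoint route here is a hypothetical placeholder, not any real vendor’s API; substitute the operations and routes from the documentation of the system you are actually evaluating.

```python
# A minimal proof-of-concept checklist for exercising a vendor's API.
# All task names and routes are hypothetical placeholders -- replace
# them with the operations your integration genuinely needs.

POC_TASKS = [
    ("upload an asset",              "POST /assets"),
    ("write custom metadata fields", "PATCH /assets/{id}/metadata"),
    ("set rights and permissions",   "PUT /assets/{id}/permissions"),
    ("search on a custom field",     "GET /assets?custom_field=value"),
    ("request a rendition",          "POST /assets/{id}/renditions"),
]

def coverage_gaps(completed_tasks):
    """Return the proof-of-concept tasks the API could not complete,
    in checklist order."""
    completed = set(completed_tasks)
    return [task for task, _route in POC_TASKS if task not in completed]
```

Running each task against the vendor’s sandbox with real data and recording which ones succeed gives you the hard facts mentioned above; anything returned by `coverage_gaps` is a question to put back to the vendor in writing.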
API-First is discussed in the article I referred to earlier. It means that the entire user interface is routed through the API; therefore, whatever you can do with your fingers (i.e. via a mouse, keyboard etc.) you are guaranteed to be able to do via the API as well. Far too many DAM vendor sales reps still don’t know what API-First means (especially those whose products don’t have it). If the vendor claims they are API-First without being asked, they generally are (although you still can’t take them entirely on trust). If they are evasive, unsure or use language like ‘I’m fairly certain we are’, this can suggest that their DAM is not API-First.
One method to verify this is to carry out a series of operations using the interface and then ask for a log file of all the API calls. As with the previously described test, this should be done with some more complex tasks rather than simpler activities like logging on, searching etc. You can then go through the log forensically and see what the DAM is doing.
Theoretically, you should be able to re-run the calls in the API log to automatically carry out the same tasks that were performed manually. If the vendor is unwilling to provide the log, or it isn’t clear how to reproduce the same activity, their DAM might not really be API-First and you need to do the test outlined previously. This is quite a technical point, and if there is a concern that you have found a flaw in their application, expect all kinds of wriggling, evasiveness and lots of assurances that ‘everything will be fine’. Don’t believe them; you need to see it working. Those vendors that have the capabilities built, tested and deployed in working order will be only too pleased to show them to you.
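To illustrate the replay idea, here is a small sketch. It assumes, purely for illustration, that the vendor can export the log as newline-delimited JSON with `method`, `path` and `body` fields; real log formats will vary from vendor to vendor, so treat this as a template rather than something that will work verbatim.

```python
import json

def parse_api_log(log_text):
    """Parse a newline-delimited JSON API log (hypothetical format)
    into (method, path, body) tuples that a replay script could
    send back to the API in order."""
    calls = []
    for line in log_text.strip().splitlines():
        entry = json.loads(line)
        calls.append((entry["method"], entry["path"], entry.get("body")))
    return calls

# Example log as it might be exported after a short manual session.
sample_log = """
{"method": "POST", "path": "/assets", "body": {"filename": "logo.png"}}
{"method": "PATCH", "path": "/assets/42/metadata", "body": {"rights": "internal"}}
{"method": "GET", "path": "/assets/42"}
"""

calls = parse_api_log(sample_log)
```

If the log genuinely captures everything the interface did, replaying `calls` in order against the API should reproduce the manual session; any step that cannot be reproduced points at a gap in the API-First claim.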
As some readers might be thinking by now, this is not exactly trivial stuff and some technical expertise is required. A credible DAM consultant should be able to do all this for you, but if you lack the budget for that, someone with basic software development skills in your IT department is also likely to have the required expertise.
Although this step is a bit painful, it will weed out any disingenuous vendors who are making unsubstantiated claims about their APIs. In addition, the amount of headaches saved compared with finding out that the API isn’t sufficient for your needs later makes this worthwhile. In terms of regrets over choosing DAMs, finding out that the API of the product was not as fully-featured as expected (or required) seems to be one of the ones I encounter most frequently when speaking with clients.
Our sophisticated AI tools mean manual metadata entry is a thing of the past
As most DAM buyers quickly grasp, the key problem with DAM systems is that, despite their not insubstantial cost, human beings still have to do most of the work of cataloguing assets with descriptive metadata – which is where the true value in digital assets lies. Vendors know this is a big factor which militates against end users buying or replacing a DAM system.
It is fully understandable why AI and Machine Learning have been heavily promoted, because if they actually solved the metadata problem, they would dramatically transform the vendor’s value proposition. The snag is that they just don’t work consistently or reliably enough to be usable. Despite this, many vendors have pinned their hopes on a successful outcome eventually arriving. This fib, therefore, is as much one told by vendors to themselves as it is to their customers.
It is over four years since we first discussed AI image recognition tools on DAM News and, despite a lot of hype about how they would ‘inevitably’ improve, the generic image recognition components that DAM vendors employ still have not got significantly better. The same set of issues occurs over and over again: the AI cannot understand the contextual relevance of a digital asset. I have seen a few attempts at solving this (and even proposed some solutions of my own) but, to keep this brief, let’s just say that there is currently very little in the way of tangible product that you can actually use.
Privately, most vendors will tacitly admit that the AI components they use are ineffective for the majority of their users. In the last year I happened to be reviewing a vendor’s system for a client, and the vendor was ebulliently promoting the advantages of their AI image recognition features during a sales demo. The following week, another consultant asked me to join them on a call with the same vendor. The audience for the second demonstration were prospective channel partners and resellers (my consultant associate was considering becoming one and wanted me to give him a second opinion about their offer). The contrast was quite marked: in the latter call, the same vendor who had been touting the benefits of their AI cataloguing tools to customers the week before was thoroughly dismissive of them, and was at pains to point out how easy it was to disable these features for a ‘real world’ implementation.
I get why vendors want AI to deliver on its promise for DAM, and I can even understand why some of them come out with the type of fibs referred to in this section’s title. As an end user of DAM, however, you need to deal with the current practical reality rather than the promise, for the same reasons I discussed previously in relation to product roadmaps and beta features.
One interesting point with this particular DAM fallacy is that there will often be quite a few individuals within the purchasing organisation who want to believe it too. As such, to debunk the AI myth and get a more realistic assessment of what a new DAM will do for your organisation, you may be forced to prove the shortcomings to colleagues as much as to the vendor. This is especially the case if the proposal is to buy a new DAM system so you can sack (or not appoint) a dedicated human Digital Asset Manager.
A simple test to discover whether the DAM’s AI component will yield usable metadata is to find a decent cross-section of the image digital assets you use right now; around 40 or 50 is probably a good number. These need to be specific to your organisation and include subjects like key personnel, products, projects/initiatives, events, locations etc. Once they are uploaded, the candidate DAM system should generate a series of keywords. In theory, you won’t need to add anything else yourself; the system should do everything. Then find some colleagues who have not seen the images nor been involved in briefing the vendor and ask them to search for the digital assets. Ideally you want them to search using terms that are highly specific to your organisation: the kind of searches they might need to do when using the DAM for a typical project they are involved with now, not some abstract or generic test exercise.
As well as the search, a further piece of analysis is to look at what keywords the DAM’s AI capability has suggested. If some of these are irrelevant or plain wrong, remember that they will still get found in searches. False positives have always been a problem in DAM search technology; with AI, however, they become one on a potentially industrial scale. This issue might not reveal itself until thousands of assets have been incorrectly tagged by the AI tool. When/if it does (and it’s probably a ‘when’) you are likely to get reports from users that the DAM is ‘broken’ because it gives them search results which don’t match what they were expecting to find.
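A crude but useful way to quantify this during the test described above is to compare the AI-suggested keywords for each image against the keywords a human cataloguer would have chosen. The helper below is a generic sketch (not tied to any particular DAM product): precision tells you what share of the AI’s tags a human would also have applied, and the false positives are exactly the terms that will surface the asset in irrelevant searches.

```python
def tag_quality(ai_tags, human_tags):
    """Compare AI-suggested keywords with human-chosen ones.

    Returns (precision, false_positives): precision is the fraction
    of AI tags a human cataloguer would also have applied;
    false_positives are the AI tags that will cause the asset to
    appear in searches where it is not relevant."""
    ai, human = set(ai_tags), set(human_tags)
    false_positives = sorted(ai - human)
    precision = len(ai & human) / len(ai) if ai else 0.0
    return precision, false_positives
```

For example, if the AI suggests generic terms like ‘person’ and ‘building’ for a photo your cataloguer would tag with the CEO’s name and the product launch it depicts, precision will be low, and every false positive is a future mis-hit in someone’s search results.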
Unless you are prepared to put in place findability training for upload users, and to develop a carefully designed metadata model and/or taxonomy as well as governance processes to manage them, you will probably not get decent results out of AI. After you have carried out the aforementioned tasks, you might subsequently discover that you didn’t need the AI anyway, since the processes you put in place are more effective and robust.
One conclusion reached by some of the more enlightened vendors and DAM developers I have encountered is that effective AI for DAM requires a level of customisation for each customer before it can become useful. The off-the-peg tools that DAM vendors plug into are too blunt and imprecise for the vast majority of users’ needs. If you are sold on automated AI-based cataloguing, it needs to be a separate project outside the scope of your purchase of a DAM system because the requirements, risks and overall nature of the project are quite a different undertaking.
Conclusion
The two fibs I have discussed in this article can be harder to pin down than the previous examples because they require DAM users to get into some fairly in-depth testing and analysis. With that said, if you take vendor assurances on trust, it can prove to be quite an expensive error. One point not always fully appreciated by DAM purchasing authorities is that once a decision has been finalised, they themselves become closely affiliated with their selected vendor in the eyes of everyone else in their organisation. As such, it is essential to choose who you plan to partner up with quite carefully and to discover early on whether there are any unexpected skeletons in their DAM closet.
If you found this article useful, there are many similar tips, strategies, tactics and advice in a report I wrote recently: How To Buy Enterprise DAM Systems which is available with a $100 discount for DAM end-users. If you are a vendor, consultant, investor or service provider you can get a free copy of this with a DAM News Premium subscription which is available at a discount until the end of July 2020.