Reviewing Existing DAM Solutions: Part 2
This is the follow-up to the first part of this article, published a couple of weeks ago. In that piece, I discussed two issues:
- Current sources of assets
- Lessons that could be learned from previous approaches
One of the points covered in some depth was the likely existence of a legacy database or other software solution holding existing assets. If your organisation has more than a few hundred employees, or if it deals extensively with rich media such as images, video or audio (digital or otherwise), there is likely to be at least one such application still in active use, and any plan you might have to introduce a wide-ranging DAM solution needs to include a review of it.
DAM Archaeology – Uncovering The Reasons Behind The Decline Of Your Lost Media Asset Civilisations
An audit of existing DAM provisions should collect sufficient information about them for you to evaluate where things may have gone wrong and, if the solution is still in use, the impact replacing it might have. Below are some headings to help structure that analysis:
- The age of the system
- Support issues and myths about functionality
- Contextual considerations
- User opinions
I will expand upon each of those points below.
Age of the system
The age of a previous system, measured both by how long the product has existed and by how long your organisation has been using it, is significant for several reasons. If a product has been in service for a long period, it may have become more ingrained in the working practices of those who use it (which implies higher training costs for a replacement). Equally, it could be the cause of numerous bottlenecks which have cost the business a substantial amount over the years. The trade-off between the upheaval a replacement would create and the savings it might generate needs to be weighed carefully, as this will be one of the key factors in determining whether a replacement produces a positive ROI in its early years.
In general, implementation of any kind is an expensive undertaking, irrespective of what you paid to license the product in question and to customise or configure it. DAM solutions typically have lifecycles of roughly 5-8 years. If the existing application is newer than that, the ROI case for replacing it must be highly compelling, as you will incur the change management expense implied by any replacement exercise sooner than would normally be expected. The lesson here is that even if the decision to choose a given solution (perhaps made by a predecessor) was a poor one, it is usually more cost-effective to first exhaust all the potential methods by which you can retain a system than to scrap it mid-lifecycle. To be totally clear, if the incumbent application is absolutely useless and everyone regrets the day it was ever introduced then by all means ditch it, but ensure that this judgement is fully supported by facts rather than the vocal complaints of a select group of users.
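The trade-off described above can be sketched as simple break-even arithmetic. This is an illustration only: the cost and saving figures below are entirely hypothetical, and a real assessment would need discounting, training costs and intangibles factored in.

```python
# Illustrative break-even sketch for replacing a mid-lifecycle DAM system.
# All figures are hypothetical; substitute your own estimates.

def breakeven_year(replacement_cost, annual_saving, lifecycle_years=8):
    """Return the first year in which cumulative savings cover the
    one-off replacement cost, or None if they never do within the
    assumed lifecycle."""
    cumulative = 0
    for year in range(1, lifecycle_years + 1):
        cumulative += annual_saving
        if cumulative >= replacement_cost:
            return year
    return None

# Hypothetical: 120k to replace (licensing plus change management),
# 30k/year saved by removing legacy bottlenecks -> breaks even in year 4.
print(breakeven_year(120_000, 30_000))

# With only 10k/year of savings, it never breaks even within 8 years.
print(breakeven_year(120_000, 10_000))
```

If the break-even point lands beyond the remaining lifecycle of the replacement itself, retaining the incumbent system is usually the stronger case.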
Support issues and myths about functionality
More faults or anomalies are usually uncovered in the 2-3 years after the initial roll-out of a DAM system than towards the middle or end of its lifecycle. At that point, the number of issues typically diminishes as more users become familiar with the application (or think they are, as I will explain).
Even though there might be turnover of staff, sufficient numbers of colleagues usually exist to assist new employees with basic enquiries about an application. Most people prefer to talk to colleagues they deal with on a daily basis rather than contact a help desk or an account manager. Because of this, myths about the limitations or characteristics of the system can develop: one user has a problem with a given function and decides not to find out why, but simply to make do without it. On more than a few occasions when I have been asked to review an existing DAM system, key users have told me that product X does not support feature Y, but when the vendor is asked to clarify, it turns out that it does. The lesson is to verify all assertions about functionality from multiple sources, including the vendor. Doing so might reveal facts about the legacy application that enable you to prolong its life and defer a new implementation, or to use a cheaper approach such as modifications to the legacy application.
The previous point segues neatly into this one. The active useful life of a system (and therefore the value obtained from it) can be extended with regular education and training programmes. Astute DAM vendors recognise that training maximises the duration their product remains fully used by clients, and therefore makes it less likely to be replaced with a competitor offering. With that in mind, you need to find out when training was last delivered for an existing system. Many vendors will provide at least some educational programmes for free as part of a general support commitment. If not, that suggests they no longer want to maintain the relationship and you will be obliged to introduce a replacement anyway. Even if formal training ceased some time ago, learning materials from an earlier, more active period might still be available. These can offer useful pointers to the issues the business faced when the original product was first commissioned. Although technologies change, in many organisations the driving factors that created the need for them in the first place remain the same over an extended period.
Contextual considerations: why someone thought the legacy system was once a good idea
Obviously, to get a definitive answer about the purpose for which the incumbent system was commissioned, the individual(s) who made the decision need to still be on the scene (and even then their memory may be incomplete). If a legacy DAM system, or a precursor to one, was licensed some time ago, that may be difficult, as those employees may have long since left. If anyone is still available, they should be consulted to establish the context in which the system was commissioned and (following the previous point) whether it is still a factor now.
User opinions
Where an existing solution was in place, it should go without saying that you need to solicit user opinions about it before doing much else, but some care is required with the responses you get back. Users develop a relationship with software applications that is shaped by their investment in them and by how integral the system is to their current job role and daily tasks. In my experience, even if a legacy DAM solution is difficult to use, someone who took a long time to work out how to operate it and who is active with it during significant portions of their working day will be wary of having it replaced. Whether this is due to fears about having to repeat the whole learning exercise and having their productivity impaired as a result, or because they genuinely believe the software is good, I cannot say, but the replacement needs to employ at least some terminology and metaphors from the legacy application to encourage these users to fully embrace the change.
User behaviour data
Logs of user behaviour are useful review tools since they are usually accurate reflections of activity carried out on the system. For that reason, they provide hard data which can be contrasted with user opinions. Of greater interest is where the logging data differs from users' recollections. There can be numerous reasons why that is the case, and they can be revealing. For example, if users tell you that no one ever downloaded any assets because the system was too hard to use, whereas the data shows that many assets were being accessed, that implies either that the user sample is not representative or that a small number of users made heavy use of the application. You need to resolve these discrepancies and find out the reasons behind them, as they point to what (if anything) went wrong the first time.
Not all DAM solutions capture audits of user behaviour at a forensic level of detail. Our DAM Vendors directory indicates that there are still some DAM system vendors who do not consider this an integral requirement, and the older the application, the less likely it is to have this feature. That said, most complex applications capture some activity logs, because developers and support personnel require them to help fix problems, so a thorough search should be conducted for anything of this nature. If nothing is available, any existing reporting features are the next best option (especially login and download data). Even an asset search facility, if it allows sorting by upload or creation dates, can act as a pseudo-reporting tool from which some intelligence about long-term usage patterns can be gleaned.
Above all with this kind of evaluation, you need to know why it was once thought a good idea to use a given legacy media asset management system, whether the product ever achieved the objectives set out for it, and which elements need to be retained (i.e. its best features). An understanding of the mindset of both those who commissioned the original application and the people who used it subsequently should give you clues about avoiding repeat mistakes with a planned replacement, whether or not the personnel concerned still work for the organisation.
In the next part of this article, I will consider more hands-on methods for identifying the assets and metadata held, as well as the processes used to manipulate them. Armed with this knowledge, you can decide what to retain, which elements of the behaviour of the legacy DAM solution should persist into a replacement and (as with all DAM implementations) how to manage your risks.