DAM Vital Signs – Performance Review Techniques To Enhance ROI
This special feature has been written by DAM News contributing editor, Ralph Windsor.
This article is about yielding more ROI from your DAM system by reviewing how it is being used and by identifying and examining Key Performance Indicators (KPIs). The objective is to gain insights so you can make better-informed decisions about prioritising any subsequent investments into your DAM strategy.
Most of the techniques described can be carried out with any Digital Asset Management application, but some may require more effort than others depending on what capabilities come built-in already.
What You Need
To conduct an effective review of your current DAM provision you need:
- User feedback
- Raw data collected directly from the system itself
- Tools to process the data
The first item is a crucial aspect of this process. As much as the system may get you the numbers and help you slice and dice them, you need to reality-check the data by talking to as many end users as you can.
What DAM System Raw Data & Processing Tools Can Help?
Two features available in most DAM systems can give you some clues or leads about what is happening and enable you to verify user feedback to see how representative it is:
- Audit trail
- Pre-built reports
Audit Trail
This is a general log of everything carried out on the system at a forensic level. Most users (especially those who are non-technical) probably won’t care to work directly with this type of report as it is too detailed. However, it should provide the raw data that enables the flexibility to allow you (or whoever you delegate) to conduct a bespoke analysis that might not be offered by the application’s built-in reports. The audit trail should include these basics:
- Name of the user (plus some unique identifier like email address or reference number)
- Activity carried out
- Date/time of activity
- Search terms
- Additional data relating to the activity
You should be able to export this to a spreadsheet. Some systems might offer further audits from another perspective (e.g. asset-centric or project-based reports); those can all help, as long as the core user activity detail is covered.
I would go so far as to say you shouldn’t use a multi-user DAM system that does not include an audit trail: it is like operating a business without proper bookkeeping records or an aeroplane without a black box recorder. It is unlikely that either you or the vendor will have thought of absolutely every conceivable report you will need over the system’s active life, so the audit trail is the fall-back you can use when you need to extract data the system lacks a dedicated report for. If you are considering custom-developed reporting features, it can sometimes be more cost-effective to do the analysis in a spreadsheet rather than commence complex development work. At the very least, you can often test a few theories in a spreadsheet as a quick method of prototyping a reporting feature before you go ahead with implementing it, only to find it is not as useful as you hoped.
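As a concrete illustration of this spreadsheet-style prototyping, the sketch below aggregates a handful of audit-trail rows in Python instead of a spreadsheet. The field names (`user`, `activity`, `when`, `detail`) are hypothetical stand-ins for whatever your system’s export actually contains:

```python
from collections import Counter

# Hypothetical audit-trail rows as they might appear in a spreadsheet
# export; real column names will vary between DAM systems.
audit_rows = [
    {"user": "alice@example.com", "activity": "search",   "when": "2015-03-02 09:14", "detail": "logo"},
    {"user": "alice@example.com", "activity": "download", "when": "2015-03-02 09:15", "detail": "asset-101"},
    {"user": "bob@example.com",   "activity": "login",    "when": "2015-03-03 11:01", "detail": ""},
    {"user": "bob@example.com",   "activity": "search",   "when": "2015-03-03 11:02", "detail": "annual report"},
]

def activity_breakdown(rows):
    """Count activities per user -- the kind of bespoke report a
    spreadsheet pivot table would otherwise give you."""
    counts = Counter()
    for row in rows:
        counts[(row["user"], row["activity"])] += 1
    return counts

breakdown = activity_breakdown(audit_rows)
print(breakdown[("alice@example.com", "search")])  # 1
```

If a prototype report like this proves its worth, that is the point at which commissioning it as a built-in feature becomes easier to justify.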
Another critical point to find out is whether it really does track every activity you need it to. I have seen a few systems where everyone involved thought the required data was available, only to subsequently find that some vital auditing detail was not being captured because no one had thought to check this point with the system developers.
You need to audit the audit trail and think carefully about what is being tracked. The audit reports also need careful testing at the User Acceptance Testing (UAT) stage. I have seen many DAM systems that are supposed to be tracking something, only for it to turn out later that a silent database fault means nothing is being recorded. Unlike accounting systems, DAM audit features are unlikely to ever be formally reviewed outside a software testing exercise.
Pre-Built Reports
Most systems will contain a variety of pre-built reports. You need to compare these with what you actually need. There is a list of what to look for below, but you might need information not described here, or the vendor could offer a report which I have not discussed. When assessing DAM systems and their reporting/auditing features, it is most important to have a clear idea of the questions you will need answered. Vendors often try to make these features more interesting by using animated charts and other eye candy. There isn’t anything wrong with that, per se, but it is the facts these reports tell you that are important; how they get presented is not so critical.
If you needed to present a business case to a senior manager before buying into DAM, the system of choice should help you get the answers they will expect to hear (or at least the vendor will build those reports for you). A lot depends on the operational priorities that prevail in your sector. Those deploying DAM into manufacturing, logistics or other heavily process-oriented markets might be expected to come up with plenty of quantitative justifications, as that is how the rest of the business operations tend to be managed. Others might have more discursive ROI assessments where user feedback has a higher priority. I am not going to get into which is the right or wrong method, but your system of choice needs to support you in getting answers that can be easily summarised and presented to your boss in a manner they will deem acceptable when that time arrives.
What You Need To Know
The kind of questions you have to get answers to include:
- How many users have we got?
- What is the breakdown of that per department or business unit?
- Are user numbers increasing, levelling off or falling?
- When are users accessing the DAM system most (both during a typical day and over a calendar year)?
- What are they searching for?
- Are they finding results?
- Is the average number of search results rising or falling?
- How many assets are getting downloaded and is that number rising or falling?
- How many searches are followed up with a download action?
- What else do people do post-search?
- How many searches (as a proportion) result in nothing being found?
- What are the terms that produce zero results (especially those where no advanced options were used)?
- What are the most and least popular assets?
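Several of the questions above reduce to simple counts and ratios over the audit data. The sketch below answers two of them (the proportion of searches finding nothing, and the search-to-download ratio); the event structure and field names are hypothetical, not any vendor’s schema:

```python
# Hypothetical event log distilled from the audit trail; the field
# names are illustrative only.
events = [
    {"activity": "search", "results": 12},
    {"activity": "search", "results": 0},
    {"activity": "search", "results": 3},
    {"activity": "download"},
    {"activity": "download"},
]

searches = [e for e in events if e["activity"] == "search"]
downloads = [e for e in events if e["activity"] == "download"]

# Proportion of searches that returned nothing at all.
zero_result_rate = sum(1 for s in searches if s["results"] == 0) / len(searches)

# How many searches occur for every download.
search_download_ratio = len(searches) / len(downloads)

print(zero_result_rate)       # one search in three found nothing
print(search_download_ratio)  # 1.5 searches per download
```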
You can probably think up a few more of your own examples, some of which might be more specific to your organisation or DAM implementation. If the DAM is used for some related tasks, like custom collateral generation or to feed data to various other MRM (Marketing Resource Management) applications then you will need to include those questions too.
The answers are the Key Performance Indicators (KPIs) of the DAM system and are likely to have a big impact on the ROI performance of your wider DAM initiative too. It is better to brainstorm as many indicators as possible, record them somewhere (in a project log etc) and then decide which ones to prioritise. Quite a few may become more important at later stages of the system lifecycle; if you have them written down, you can review which were rejected earlier and whether they now need to be checked more formally.
When To Analyse
It is worth carrying out a scheduled review of your DAM system KPIs at regular and pre-defined intervals so you can accurately assess whether the system is continuing to deliver ROI. Managers frequently take a great interest in reports just after launch and then interest tails off when other more pressing projects take priority.
The data you get immediately post-launch or when key upgrades are applied is probably going to be impacted by the initial curiosity of your colleagues. After 3-6 months, the novelty will have worn off and the system will be used more for real world tasks. This is the point where you should start to get more reliable information that offers a more accurate reflection of events.
The Vital Signs To Check
There are many variables you can choose to analyse to get quantitative data that will help assess how things are going with your DAM; below are a few suggestions:
- New User Volume Trends
- Login Trends
- Search/Download Ratios
- Other Search/Action Ratios
- Upload Volume Trends
- Upload/Download Ratios
- Cataloguing Volume Trends
- Upload/Catalogue Ratios
- Most/Least Popular Assets
- Most Common Search Terms
- Zero Result Searches and Trends
It should be emphasised that no single variable can be measured in isolation to extrapolate a conclusion. Each offers a clue about what is happening – but no more. It is your task to put all that data together and form a reasonable hypothesis to explain the trends.
Below I have described each of the items mentioned above and the possible implications of some of the numbers you may get back (note the emphasis on possible).
New User Volume Trends
The system should track when a new user first accesses the system and differentiate that from other logins. The significance of this variable depends on how many active users you have. If the DAM is used in a big organisation, staff turnover would suggest that the volume of new users will remain fairly constant. If the system is in widespread use, new employees will start to use it to replace their departing colleagues. If the numbers are falling off, this might point to some potential issues or at least merit a deeper investigation.
Login Trends
This should track logins by existing users and be constant or heading upwards. Depending on the type of organisation you are employed by, external events often also cause spikes or troughs. For example, in most businesses, activity drops off during the summer months when people tend to go on holiday/vacation and also at winter periods (like Christmas, and Thanksgiving in the US). These trends may be different in other regions such as Japan, Australia etc for a variety of geographic and cultural reasons. Broadly, the login activity should reflect the seasonal trends for the rest of the business, along with any significant events such as product launches or annual reports being published.
Search/Download Ratios
This is a critical variable and it compares the number of searches with the number of times assets are downloaded. If the number of downloads compared with the volume of searches is falling, it can mean that users aren’t finding anything that meets their needs. This could be due to unsatisfactory cataloguing or because the available assets don’t meet users’ needs.
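To make a falling ratio visible, it needs computing per sampling period rather than as a single all-time figure. A minimal sketch, assuming you have already extracted monthly search and download counts from the audit trail (the figures are invented):

```python
from collections import defaultdict

# Hypothetical (month, activity, count) figures pulled from audit data.
monthly = [
    ("2015-01", "search", 400), ("2015-01", "download", 200),
    ("2015-02", "search", 420), ("2015-02", "download", 180),
    ("2015-03", "search", 410), ("2015-03", "download", 120),
]

totals = defaultdict(dict)
for month, activity, count in monthly:
    totals[month][activity] = count

# Downloads per search, month by month; a falling series is the warning sign.
ratios = {m: t["download"] / t["search"] for m, t in totals.items()}
for month in sorted(ratios):
    print(month, round(ratios[month], 2))
```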
Other Search/Action Ratios
If you have some restricted assets where users have to ask permission before they can access them, measuring requests against the search volume can also identify whether users are more or less willing to go through an extended process to access them. Based on that you can measure the impact of any changes (e.g. making the usage approval process more or less complex).
Upload Volume Trends
Unless you regularly purge, expire or delete assets, the number of asset uploads is going to keep increasing; the more important question is what the number of uploads is doing in a given sampling period (e.g. a month). This needs to be compared with other levels of system activity (like logins) as the trend should have some relationship (although some users will use a quieter period to do a lot of uploading). There might be any number of reasons why less material is being uploaded, but you need to understand why with a degree of certainty.
Upload/Download Ratios
The relative change in these figures will help tell you whether new material is being used and whether what is being supplied is of value to users. You need to take some care with extrapolating from this figure as some kinds of content will have higher levels of demand. For example, if the business re-brands itself, everyone will log in to the DAM to get copies of the new logos. This might skew the ratios higher compared with a period where more niche assets are getting uploaded. Those might not be valuable to everyone, but for the users who do download them, they might be the hard-to-find assets they have been waiting ages to finally get their hands on.
Cataloguing Volume Trends
These figures tell you whether the number of assets users are applying metadata to is going up or down. Again, there can be various different reasons to explain a trend. If you recently ran training, the cataloguing volume should go up. If you ran training and it didn’t move at all then maybe the training content and delivery needs re-evaluation, or possibly someone who used to catalogue hundreds of assets has coincidentally just left the business and others have not yet picked up the slack.
Upload/Catalogue Ratios
This tells you whether a bottleneck is developing between users uploading materials and them getting round to applying metadata. If you introduce facilities such as embedded metadata mapping (so the system uses metadata that a photographer, for example, may have applied to the file itself) then this figure might increase. If so, has the search/download ratio declined? That might suggest the quality of the cataloguing is falling off because nothing else is being added to the metadata and the cataloguers are just saving the asset with the default mapped values.
Most/Least Popular Assets
This is often available as a standard report and obviously it tells you what is being downloaded most of all. Some care is required in ranking assets, especially where many assets have equivalent download figures. The orthodox approach to this issue is to rank a newer asset higher than an older one with an equivalent number of downloads. Rather than looking at a narrow range you need to look at a reasonable sample at the top and bottom of the scale. The top performers might include assets like your company logo that you could probably guess without reference to a report. The bottom end might be more useful. If certain assets are not being used, you need to understand why that is. It is risky to assume that this is because no one is interested in them; another possibility is inadequate cataloguing resulting in them never being found. If the system supports it, ranking by number of appearances in search results (in reverse order) can be useful too. This way you can see which assets are being found in search results but still not downloaded, which is a more reliable indicator of a lack of demand for them.
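The two rankings described above can be sketched as follows; the per-asset figures and field names are invented for illustration:

```python
# Hypothetical per-asset figures: download count, number of times the
# asset appeared in search results, and the date it was added.
assets = [
    {"id": "logo.eps",       "downloads": 90, "found_in_searches": 300, "added": "2014-01-10"},
    {"id": "brochure.pdf",   "downloads": 40, "found_in_searches": 120, "added": "2014-06-01"},
    {"id": "team-photo.jpg", "downloads": 40, "found_in_searches": 110, "added": "2015-02-20"},
    {"id": "old-chart.png",  "downloads": 0,  "found_in_searches": 95,  "added": "2013-03-05"},
]

# Most popular: rank by downloads, breaking ties in favour of newer
# assets (the orthodox approach mentioned above). ISO dates compare
# correctly as strings.
most_popular = sorted(assets, key=lambda a: (a["downloads"], a["added"]), reverse=True)

# Found in search results but never downloaded: a stronger indicator
# of genuine lack of demand than low downloads alone.
unwanted = [a["id"] for a in assets
            if a["found_in_searches"] > 0 and a["downloads"] == 0]

print([a["id"] for a in most_popular])
print(unwanted)
```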
Most Common Search Terms
Getting a good idea of what users are searching for can be instructive, but you need to keep track of how that changes over time and identify any chronological trends. As with many of these variables, more valuable insights can often be acquired by combining them. Ordering search terms by popularity first and then by number of results found will tell you what is being searched for and also where the number of results is lower than average. This helps inform you whether new assets are needed or possibly that existing assets need to be catalogued in a more relevant manner.
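A sketch of that combined ordering, using an invented search log of (term, results returned) pairs:

```python
from collections import defaultdict

# Hypothetical search log entries: (term, number of results returned).
search_log = [
    ("logo", 25), ("logo", 30), ("logo", 28),
    ("annual report", 4), ("annual report", 2),
    ("trade show banner", 0), ("trade show banner", 1),
]

stats = defaultdict(lambda: {"count": 0, "results": 0})
for term, results in search_log:
    stats[term]["count"] += 1
    stats[term]["results"] += results

# Order by popularity, then inspect the average result count: popular
# terms with few results are candidates for new assets or re-cataloguing.
ranked = sorted(stats.items(), key=lambda kv: kv[1]["count"], reverse=True)
for term, s in ranked:
    print(term, s["count"], round(s["results"] / s["count"], 1))
```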
Generic Analysis Methods
I could go on with many more examples of key statistical data to check and there are numerous potential explanations for each scenario that may or may not be relevant in your case. These are the variables which will probably have most impact:
- Asset Records
- Search Terms
- Assets Found From Search Terms
- Asset Downloads
- Asset Uploads
- Assets Catalogued
Combining one or more of the above can help you spot usage trends (although see the statistics health warning below). The time sample you conduct the report over can also make a difference (e.g. last month, last year, all time etc) and variations in the figures can offer some more leads.
Administrator Access Only Assets
Many DAM systems have some type of flag or setting which determines whether an asset is visible to end users or only to administrators. This is variously referred to as ‘archived’, ‘published’, ‘approved’ etc and the status is generally used to remove general access to an asset without fully deleting it. The terminology is not significant, but when you generate reports or run statistical analysis of raw audit data, it is important to know whether the figures include assets with that setting enabled. To add further confusion into the mix, many DAM systems use some kind of group permissions schema where assets are visible to some users but not others. Those can also modify the figures, but they might offer some useful management insight too (e.g. allowing wider access to a series of assets because the restrictions are too tight).
Examining Statistical Patterns
I am not a statistics expert so my own analysis methods tend to be fairly simplistic. I tend to look for sudden breakouts in the trends (either up or down) as these suggest something major has changed which has implications for many users. If there is a perceived negative or positive situation with the DAM that users are telling you about (e.g. “we can’t find anything”) and a given criterion like the search/download ratio hasn’t budged much for months or years, then that tends to reinforce the end user feedback you are getting as probably being accurate.
If you have access to someone who is more knowledgeable about statistical analysis, they may be able to help you crunch the numbers to derive some further insight. In addition, some DAM products might already provide you with many of the reports mentioned (and further ones which I have not). Most of these are not especially difficult for vendors to implement if they have the raw auditing data to start with. Even if they are not willing or able to give you automated features to generate these, they should be able to carry out some custom analysis for you, either as a paid-for service or as part of their general support and account management agreement.
Statistics Health Warning – Why You Need To Talk To The Users
A common theme that should be emerging from many of these statistical evaluations is that there are multiple possible explanations for any given trend, some more plausible than others. As should be obvious from my discussion above, for most of these there are at least two explanations (often more). It is not advisable to rely exclusively on a hypothesis derived from one factor alone, and even using several might still give you a false impression.
This is the counterpoint to the statistical analysis described above and I would contend that you need to compare it with what people are telling you to derive realistic assessments of how the DAM system is being used and what you need to do to get more value from it.
Getting User Feedback
User feedback can be collected in various ways, such as:
- System Feedback Features
- Dedicated Surveys
- Focus Groups
System Feedback Features
I am not sure how much feedback collected within business-oriented IT systems is that useful. From my own anecdotal evidence, many users are reluctant to express their opinion about systems as they either don’t have the time to do it or they would prefer to avoid opening a political can of worms by revealing what they really think in some feedback box. Your DAM system isn’t Facebook and users won’t treat it like it is. Whether that is a bad thing is a debatable point, but relying on feedback collected in the system itself is unlikely to get you a wide enough sample of data. I am prepared to be corrected on this point by being shown some useful feedback devices that go beyond simple form mailers, but I have not seen examples that get over the key challenge of persuading people to use them at a sufficient scale to be useful.
Dedicated Surveys
Dedicated surveys would include using some survey or data collection tool which gets emailed out to everyone. These can offer more value as you can ask some more detailed questions and make it clear to users that you want to know their opinions. The problem with them is persuading users to complete the surveys. You will tend to find that most are interested in the DAM system for brief but intensive periods where they need to find some set of assets; after that project or task is concluded, their interest falls off. Carrying out surveys will require planning and persistence – like any other internal communications exercise.
Focus Groups
Focus groups involve holding dedicated sessions either in-person (if the users are in close geographic proximity) or via telephone or other remote methods if not. These can sometimes be combined with training sessions to boost attendance. The positive point about this technique is that you get real feedback about users’ perceptions of the system. The negative is that it can sometimes be unstructured and there is always the risk of a small group of individuals monopolising the sessions. A further option with this method is to ask a series of structured questions during the focus groups so you at least get through the key points. It can also be useful to ask for a survey to be completed beforehand and to provide methods for those who are not keen on expressing themselves in large groups to submit email feedback later.
When you conduct reviews involving user groups it is advisable to segment the feedback based on who is providing it. Typically for DAM systems there are three main groups of users (with the approximate percentage of the user base they represent):
- Light occasional users (60%)
- Medium level users (30%)
- Heavy users (10%)
Although the light users account for the majority of the user base, their share of actual usage, in terms of time spent logged in, is likely to be more like 50% or less.
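If your system does not segment users for you, login counts from the audit trail can be bucketed directly. The thresholds below are illustrative assumptions, not standard values; tune them to your own usage profile:

```python
from collections import defaultdict

# Hypothetical login counts per user over the review period.
logins = {"u1": 2, "u2": 1, "u3": 3, "u4": 12, "u5": 15, "u6": 80}

def segment(count):
    """Bucket a user by login volume; the thresholds are illustrative only."""
    if count >= 50:
        return "heavy"
    if count >= 10:
        return "medium"
    return "light"

groups = defaultdict(list)
for user, count in logins.items():
    groups[segment(count)].append(user)

print(dict(groups))
```

Feedback gathered from surveys or focus groups can then be weighted or filtered by segment before drawing conclusions.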
Another factor is the department or business unit of each user. You will probably find some contain a higher or lower proportion of light vs heavy users. For example, marketing departments might encompass a graphics or creative studio and are likely to be far more involved, with nearly everyone using the system at least weekly. By contrast, departments like Finance or HR might rarely need to go into the DAM system other than for highly popular assets. The DAM system might not even be the direct front-end they use if it is integrated with a Web Content Management application that powers your intranet.
You might find it necessary to assess feedback with reference to these segments so you can get a more in-depth analysis that tells you more about how the DAM is being perceived and actually used by all of your colleagues.
A question that sometimes gets asked is what to analyse first, user feedback or user data? My opinion has tended towards getting user feedback first then checking it against the stats. Although there is a risk with user feedback that it is not based on a wide enough sample or relies on misconceptions about how the system operates, the fact is that users are the most important element of your Digital Asset Management initiative and what you are providing should be oriented around their needs as much as possible.
If you carry out the numerical analysis up-front there is a danger that you waste lots of time collecting information and reports that will not help optimise the DAM. The feedback you obtain from users needs to be analysed against the statistical data to make sure it is representative, and users also need to be continuously trained and informed about the capabilities of the system so they do not develop fictitious notions of its limitations (or capabilities).
One last item of practical advice. On several occasions I have carried out these assessments with clients only to find conflicting evidence that points to more than one reason for a given state of affairs. Sometimes you might still find that you just have to go with a hunch to improve the system or some element of the assets within it and then see what happens. The methods described are techniques to help you work out what to do, they should not afflict you with a case of ‘analysis paralysis’. You need to keep your ROI optimisation objectives in mind at all times and act accordingly.
About The Author
Ralph Windsor is Project Director at DAM consultants, Daydream and a contributing editor to DAM News.
LinkedIn: http://www.linkedin.com/in/daydream