On CMSWire.com, Henrik de Gyor has written an article, ‘Should I Auto-tag or Crowdsource my Metadata?’
The summary of the post is that a combination of automated methods and crowdsourcing techniques can potentially be used for asset metadata cataloguing, but you need to test each of them carefully and have realistic expectations about how useful they will be. All of that is solid advice and Henrik clearly has well-grounded experience of critically evaluating these technologies.
I would also agree with Henrik when he says he was ‘not amazed with the results’ of many of the automated methods – I can’t say I have been impressed either. He appends ‘(yet)’ to his analysis; my own view is that this might be a little optimistic, and that it is more a case of ‘if’ rather than ‘when’ you might get something useful out of them, where the quality of the result is comparable with what a trained human cataloguer can come up with (or even an untrained one, in many cases).
The facial recognition example is potentially more useful, although it is still imperfect and not something you can fully and safely automate. I gather that a lot of the investment that helped this technology become commercially usable came from casinos who wanted to identify known blackjack card counters – so you would expect some progress to have been made there!
I have also seen the crowdsourcing method where multiple people tag images and the terms chosen in common by all those involved are extracted and applied to the catalogue record. Google tried a similar approach some years ago with the Google Image Labeler. It is reminiscent of a very old data processing technique where two operators are asked to key in the same data and, if there are differences, the input is checked to see which one is accurate.
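As a rough illustration of the agreement-based approach described above, here is a minimal Python sketch. The `consensus_tags` function and its threshold parameter are my own invention for illustration, not something taken from either article:

```python
from collections import Counter

def consensus_tags(tag_sets, min_agreement=None):
    """Keep only the tags that enough independent taggers agreed on.

    tag_sets: one set of tags per person who catalogued the asset.
    min_agreement: how many taggers must choose a tag for it to be
    accepted; defaults to all of them, as in the approach described.
    """
    if min_agreement is None:
        min_agreement = len(tag_sets)
    counts = Counter(tag for tags in tag_sets for tag in set(tags))
    return {tag for tag, n in counts.items() if n >= min_agreement}

# Three people tag the same (hypothetical) image independently
taggers = [
    {"london", "telephone box", "red", "street"},
    {"london", "telephone box", "phone", "street"},
    {"london", "telephone box", "street", "urban"},
]
print(sorted(consensus_tags(taggers)))
# ['london', 'street', 'telephone box']
```

The idiosyncratic choices of any one tagger are discarded, which is exactly the double-keying verification idea applied to keywords rather than data entry.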
The main issue I would have with this part of the article is that the examples where crowdsourcing has been effective are sports or culture related. With these kinds of subjects it is a lot easier to persuade sufficient numbers of people to get involved and help catalogue assets for you. If your DAM system holds more prosaic material (as is the case with many corporate media libraries, especially in B2B sectors) then people may be rather less willing to give up their time to lend a hand.
Henrik doesn’t devote much space to another technique – human beings cataloguing assets using the metadata entry tools present in DAM systems – but he does offer this:
“The great fear I keep hearing from some individuals is “when will machines replace us?” At the time I write this post, we are far from that point. We do however rely more on machines to assist us every day. I will point out that even some of the most advanced (publicly released) artificial intelligence relies on humans to check and tune the accuracy of its algorithms. Even Watson was taking text prompted clues while the human competitors received a text and verbal prompted clue during a televised game show where humans competed against the machine.” [Read More]
I don’t think the fear that machines will one day replace humans is something people need to worry about at this stage of the game.
In my opinion, the concept of ‘machine intelligence’ usually proves itself to be an oxymoron when you apply it to anything more complex than a highly controlled range of samples (like the Watson example from the quote above). As Henrik alludes to at the end of the article, users’ expectations of DAM are at risk of being raised to unrealistically high levels. The implication, if you were counting on this technology to do the work for you, is that an unplanned cost may end up being incurred to get Digital Asset Managers, picture researchers and keyworders to fix the problems when the ‘rocket science’ doesn’t quite live up to its billing (literally or metaphorically). This damages the ROI case for DAM and, if the issues are not tackled robustly, end users will doubt the claims made for it, irrespective of which vendor or consultant they might originate from.
For documents, and to a lesser extent for video or audio with spoken word content (if they feature any), there is some potential to extract machine-readable data without manual effort – although the quality may be well below par and, if there isn’t any narrative element, the options are also limited. The biggest challenge is probably images. Unfortunately, that is the asset type many end users still buy a DAM system for, and the one they still most strongly associate with Digital Asset Management, legitimately or otherwise.
In my view, the key challenge is a failure on the part of users and vendors alike to fully comprehend the nature of the task as well as the lack of training or support given to those assigned to carry it out.
On the first area, it’s necessary to acknowledge what people need to do when they catalogue assets and to dig deeper into why they are doing it. We’re all familiar with the basics: assets need metadata, otherwise you can’t find them in searches and the cost of originating or buying them is wasted. Metadata is what transforms a basic file into a digital asset.
It doesn’t stop at just any metadata, though: it needs to be relevant terms that directly map to the expectations of searchers. For example, if it’s a photo of a red British telephone box in a street in London, then that is what the metadata should include, at an absolute minimum. If end users are also likely to search with specialist terms such as model or serial numbers, then these should be present too. This is blatantly obvious stuff and, like all forms of common sense, most people are better at recognising it than they are at doing it. This type of cataloguing is also very difficult to reproduce with automated software because it requires a level of intuitive intelligence at which human beings are far more capable – but only if they are trained and motivated to actually use it.
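To make concrete why the metadata terms must map directly to what searchers actually type, here is a minimal sketch. The asset identifiers, keyword sets and `search` helper are all hypothetical illustrations, not part of any DAM product:

```python
def search(assets, query):
    """Return the ids of assets whose metadata contains every query term."""
    terms = {t.lower() for t in query.split()}
    return [
        asset_id
        for asset_id, keywords in assets.items()
        if terms <= {k.lower() for k in keywords}
    ]

assets = {
    # well catalogued: terms a searcher would actually use
    "IMG_0142": {"red", "telephone", "box", "London", "street"},
    # poorly catalogued: vague terms nobody searches for
    "IMG_0143": {"photo", "urban"},
}

print(search(assets, "London telephone"))
# ['IMG_0142']
```

However sophisticated the search engine, the second asset is effectively invisible: no amount of query-side cleverness compensates for keywords that don’t match what searchers expect.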
The reason the metadata in many DAM systems is so poor is a combination of the cost of getting people to do it and a lack of end user education about metadata. On the cost argument, I would contend that if you don’t do it properly to start with then you won’t find your assets and any investment in DAM is at risk of becoming an ROI write-off. It’s better to factor the expense of adequate metadata cataloguing into your original investment case for DAM than to hope you can either coerce an unwilling crowd of your employees into doing it, or splash out on some automated product that falls over once you start feeding it assets that aren’t from the vendor’s test samples.
On the education point, my prediction is that metadata is the next big DAM skills challenge and it needs to go right to the core of HR operations in organisations of all types and sizes. A number of years ago, people used to talk about ‘computer literacy’ and there was an increasing expectation that prospective or existing employees would understand the basics of IT and be able to use word processors, spreadsheets etc. I don’t see that so often in job ads these days, mainly because the majority of employees either learnt this at school or college, or taught themselves because they needed it to complete a task or secure a job. Now, in the 21st century, ‘metadata literacy’ seems like it should definitely be on the modern education agenda, as there is a clear need for it.
These days, workers of all kinds are required to maintain all kinds of collections, both at work and in their personal lives. As well as anything they might do with a DAM system, employees will usually maintain their own personal digital archives of photos, music, video and often some other more esoteric asset types as well. The issues they face at work with a business’s digital media are starting to mirror many of the ones they have at home. I am not overly impressed with the term ‘content curation’, but it does succinctly describe an increasing element of many people’s personal and professional lives.
Returning to my earlier point about IT versus metadata literacy: you don’t expect a word processor to write a letter or article for you, nor a spreadsheet to prepare a budget forecast. There is a range of utility functions and features that will enhance your productivity and help reduce the time required to complete the work, but it’s up to you both to come up with the raw material and to check the final result. Just as most automatically generated documents aren’t something you would be comfortable putting in front of your boss, clients or customers, the same should be true of asset metadata.
I can acknowledge the value in some of the semantic technologies being researched these days, and one day they may help human beings do the task more quickly and prompt users for alternative metadata they might not have thought to include, but that time has not arrived and it’s not in prospect soon either. We all need to get used to this fact and face up to the challenge, so that we can extract the ROI from DAM solutions that they already have the capability to generate for us.