AI and DAM: A Pick ‘n’ Mix Approach To Automated Metadata?
Artificial Intelligence is here to stay. Much as blockchain technology has been hijacked and eclipsed by the frantic rise and fall of cryptocurrency, so too has AI been prematurely blasted from the over-zealous scattergun of emergent technology, hyped as the next big thing before it has reached maturity.
DAM solutions provider OpenAsset have recently published a short article entitled ‘Delivering Artificial Intelligence in DAM: Image Similarity Search’, and whilst graciously admitting that the (increasingly controversial) Amazon Rekognition API wasn’t suitable for their needs, it appears they couldn’t resist the temptation of bundling some kind of AI functionality into their platform.
“Having concluded that the keyword suggestions from the API could not be used to automatically tag images, we re-assessed how AI could be incorporated into OpenAsset and arrived at two new lines of enquiry:
- AI driven image similarity search
- AI driven keyword suggestions
By leveraging data from the image recognition platform Amazon Rekognition, we developed a system that computes the visual similarity between images. Users may already be familiar with image similarity or ‘reverse image search’ features in Google, Bing, Instagram or Pinterest” [Read More]
If you haven’t considered the basic mechanics of reverse image searches, or even how arbitrary ‘here’s some related stuff’ features work, their relevance to the evolution of AI might not be immediately apparent. The more reference material a storage and retrieval system can gobble up and be taught to recognise as belonging to any given tag, category, group or context, the better it is at gauging what else it can offer up by inference. However, choosing to serve up any picture of ‘glass skyscraper against blue sky’ is vastly different from recognising whether a particular picture of a skyscraper against a blue sky is the headquarters of company X or company Y.
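Neither OpenAsset nor Amazon publish the internals of their similarity systems, but the underlying idea can be sketched in a few lines: treat each image as a vector of label confidences (as an image-recognition API might return them) and compare vectors with cosine similarity. The label names and confidence values below are hypothetical illustrations, not real Rekognition output.

```python
from math import sqrt

def cosine_similarity(labels_a, labels_b):
    """Cosine similarity between two label-confidence dictionaries.

    Labels absent from an image contribute zero, so only shared
    labels add to the dot product.
    """
    shared = set(labels_a) & set(labels_b)
    dot = sum(labels_a[k] * labels_b[k] for k in shared)
    norm_a = sqrt(sum(v * v for v in labels_a.values()))
    norm_b = sqrt(sum(v * v for v in labels_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical label confidences for three images
image_1 = {"Skyscraper": 0.98, "Sky": 0.95, "Glass": 0.80}
image_2 = {"Skyscraper": 0.97, "Sky": 0.90, "Cloud": 0.60}
image_3 = {"Dog": 0.99, "Pet": 0.95}

print(cosine_similarity(image_1, image_2))  # high score: visually similar
print(cosine_similarity(image_1, image_3))  # 0.0: no shared labels
```

Note the limitation this makes plain: the two skyscraper images score as similar whatever building they show, which is exactly why label overlap alone cannot tell you whose headquarters you are looking at.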
Strictly speaking, AI is the ability of machines to exhibit human-like intelligence, whereas deep learning – the technology behind Rekognition – is based on training the system to recognise images by feeding it a massive number of labelled images for reference. Using the term Artificial Intelligence to describe machine learning is an increasingly common misnomer, and as Amazon clearly point out, Rekognition has no understanding of the relationship between these labels:
“It is important to note that these labels are independent, in the sense that the deep learning model does not explicitly understand the relationship between, for example, dogs and animals. It just so happens that both of these labels were simultaneously present on the dog-centric training material presented to Rekognition” [Read More]
When it comes to DAM, as my co-writer Ralph Windsor has consistently pointed out, contextual metadata – the one thing that’s missing from the majority of AI solutions – is essential for endowing digital assets with any useful degree of relevance and searchability. One of Ralph’s recent articles ‘What it Will Take for Artificial Intelligence to Become Useful for DAM’, goes some way to explaining the image recognition paradox and raises several of its shortcomings:
“While image recognition offers a source of some literal descriptive metadata (i.e. what something looks like in universal terms), it is a poor source of the kind of metadata most enterprise users require to find the relevant digital assets for their needs.
A superior, and mostly untapped source of contextual awareness which could generate credible metadata is digital asset supply chains and the text or quantitative metadata which they generate.” [Read More]
What is apparent from OpenAsset’s findings and numerous other documented cases of AI failing to live up to expectations is that it’s not a silver bullet; at this moment in time, a carefully selected pick ‘n’ mix of conventional metadata processing techniques combined with the best that AI currently has to offer appears to make the most sense. In a separate article, entitled ‘Using Digital Asset Supply Chain Management And AI To Improve Efficiency And Enhance Metadata Quality’, Ralph further examines this combined approach when considering the implementation of AI-based functionality within the DAM supply chain.
“Simply put, AI image recognition is only half the story of your digital asset metadata. What you need is the information from your digital asset supply chain: the supporting emails, documents, meetings, workflow and discussions that lead to your assets being created or commissioned from suppliers in the first place. Using this data, in combination with AI tools and other information sources, it is possible to derive far more credible digital asset metadata which means something to users and therefore dramatically improves the chances of finding assets later.” [Read More]
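The combined approach Ralph describes can be sketched as a simple merge: keep only the high-confidence generic labels an image-recognition API produces, then enrich them with the project, client and keyword context drawn from the supply chain records that led to the asset being created. Everything here – the field names, threshold and sample data – is a hypothetical illustration of the idea, not any vendor’s actual schema.

```python
def merge_metadata(ai_labels, supply_chain_context, confidence_threshold=0.9):
    """Combine generic AI labels with contextual supply-chain metadata.

    Low-confidence AI labels are discarded; the supply chain supplies
    the project-specific context that image recognition cannot.
    """
    keywords = {label for label, conf in ai_labels.items()
                if conf >= confidence_threshold}
    keywords.update(supply_chain_context.get("keywords", []))
    return {
        "keywords": sorted(keywords),
        "project": supply_chain_context.get("project"),
        "client": supply_chain_context.get("client"),
    }

# Hypothetical inputs: generic labels from an image-recognition API,
# plus context taken from the commissioning brief and workflow records
ai_labels = {"Skyscraper": 0.98, "Sky": 0.95, "Glass": 0.72}
context = {"project": "HQ Tower Shoot", "client": "Acme Corp",
           "keywords": ["headquarters", "exterior"]}

record = merge_metadata(ai_labels, context)
print(record["keywords"])  # ['Sky', 'Skyscraper', 'exterior', 'headquarters']
```

The resulting record is searchable by what the image shows *and* by who it was shot for – the kind of relevance that labels alone cannot deliver.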
A similar conclusion was reached by Martin Wilson of Bright Interactive in his article ‘AI in DAM: The Challenges and Opportunities’, where he writes about auto-tagging:
“My prediction is that although many DAM applications will soon start offering integration with auto-tagging APIs such as Google Cloud Vision, we won’t see high adoption of these technologies from users until the results improve. This could happen either as the API providers get wise to the potential of the DAM market and start listening to what it needs, or when smaller third-party machine-learning experts start filling the gaps. Either way, it’s going to happen – hopefully sooner rather than later!” [Read More]
So, although the prevalence of AI-based integrations might suggest to end users that it’s reached some kind of maturity, I suspect its less-than-satisfactory performance is going to leave this eager young star on the bench for the time being. When it does reach maturity, it may prove to be one of the biggest game changers in tech history.