
Recently, CMSWire published an article I wrote for them: What it Will Take for Artificial Intelligence to Become Useful For DAM.  This was an abridged edition of a longer feature article for DAM News: Combining AI With Digital Asset Supply Chain Management Techniques.  The responses I have received or read seem to be generally favourable, although some people thought it was critical of AI (and related technologies).  The point of the article was more about how to get AI to deliver some tangible results for DAM (i.e. ROI) rather than it just being a gimmicky toy that users disable because the results are not entirely trustworthy (which is what happens with many of the corporate and public sector DAM users I deal with).  A far higher burden of proof that it can deliver ROI is required, and some risk management practices are needed to ensure that casually tossed-around statistics, aphorisms and pithy quotes are not used as a weak substitute for a solid business case.

When it comes to practical applications of AI to DAM (and this applies to all the aspects of it, not just image recognition) one statistic I frequently read is that it is ‘80% effective’.  This is an example of how to bend the truth with superficially hard numbers which tend to become rather softer once you prod them around a bit.

To illustrate this, consider the following two different scenarios.  If you scored 80% in a maths exam, that sounds like a great result which would place you somewhere close to the top grade in many educational institutions.  Let’s now take an alternative case.  You switch on your computer and it doesn’t work; you try again and the next time it does.  For the rest of the week, it works perfectly without any problems.  The following week, it doesn’t work again; you switch it on once more and it still doesn’t start, and you repeat this two more times before finally giving up and calling for assistance from an engineer.  They try and there is no problem, it works perfectly for them (as is the unwritten law of the universe when it comes to reporting or investigating technical faults).  It continues to work exactly as it should on eight more occasions, then it stops working.  You try again, now it’s fine.  The following day, it doesn’t work; you start it again and it does.  For two more attempts it doesn’t switch on, then it works for the next two, followed by six more successes before it stops again.  You declare the computer ‘faulty’ and decide to use a different one because of its ‘reliability issues’.

The above is another 80% success rate.  What would get you an ‘A’ in your maths test translates to extreme frustration and irritation with a piece of equipment that has the same score.  This is why these kinds of statistics require a great deal of careful handling.  As anyone with a project management (or operations management) background will understand, in reality ‘80% success’ really means a risk of an unfavourable outcome 20% of the time.  In practical terms, initiatives need to be managed with risk as the top consideration because the consequences of failure can be devastating and undo all the investment, time and effort accrued so far.  A 20% risk is quite large (or ‘expensive’, to use an alternative term).  If you had a financial investment yielding that kind of return in 2017, the kind of question you might reasonably ask is ‘might the counterparty be about to go bankrupt?’.  Percentages (like any other form of metadata) require context to assess their true value.  The expression ‘lies, damned lies and statistics’ exists for a very good reason.
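To make the contrast concrete, the scenario above can be simulated with a few lines of code.  This is only an illustrative sketch (the attempt counts, seed and function names are invented for the example): it models a device that succeeds on 80% of attempts and shows how the failures cluster into the streaks that make such a device feel ‘unreliable’ in practice.

```python
import random

def simulate_attempts(n_attempts, success_rate=0.8, seed=42):
    """Simulate n_attempts independent tries, each succeeding
    with probability success_rate. Returns a list of True/False."""
    rng = random.Random(seed)
    return [rng.random() < success_rate for _ in range(n_attempts)]

def failure_runs(outcomes):
    """Count consecutive-failure streaks -- the pattern that makes
    an '80% reliable' device feel broken in day-to-day use."""
    runs, current = [], 0
    for ok in outcomes:
        if ok:
            if current:
                runs.append(current)
            current = 0
        else:
            current += 1
    if current:
        runs.append(current)
    return runs

outcomes = simulate_attempts(50)
print(f"Observed success rate: {sum(outcomes) / len(outcomes):.0%}")
print(f"Failure streaks: {failure_runs(outcomes)}")
```

Even though the long-run average hovers around 80%, the failures arrive in streaks rather than at neatly spaced intervals, which is exactly why the same percentage reads as an ‘A’ in one context and ‘reliability issues’ in another.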

There are several points to emerge from the previous discussion which anyone thinking about applying AI should consider:

  1. Is there any real quantitative data about the success of a given AI technology?  An observation I have made before is that one definition of Artificial Intelligence is computer software that you cannot test properly.  Before an AI tool is used, some real data is required that shows what results are being achieved in reality (i.e. data that is used to drive business decisions).  This needs to be collected and assessed independently, not by the people who are selling it (for all the obvious reasons).
  2. A risk assessment is required to establish what threshold is considered acceptable.  This needs to be more than a few people sat around a table plucking numbers from thin air (‘oh, about 90% should be OK’ etc.)  There must be facts and figures to support them too.  The topic of risk management is outside the scope of this article, but whoever does this analysis needs to have had some training in it (and real experience of dealing with it on actual implementation projects).  There is a correlation with quality management here also (as most project managers will be aware).  If the target quality is lower, the risk is diminished, but you need to know exactly why a given quality level has been chosen.  For example, one of the use-cases I have seen for AI and DAM is tagging user-generated content such as photo competition entries.  This is reasonable, but are there boundaries to prevent this material being downloaded and used for other projects (e.g. marketing campaigns)?
  3. Below a certain threshold, AI tech should only be used in either an advisory capacity (i.e. suggesting descriptions, keywords etc.) or possibly as a secondary search corpus if no results are found.  In this case, AI is less ‘Artificial Intelligence’ and more ‘Augmented Intelligence’, i.e. it can provide a potentially useful fresh perspective, but you wouldn’t trust it exclusively.  Decide early on if what you actually want to achieve is the latter interpretation of AI; this will be less risky to implement, but deliver a correspondingly lower ROI.
  4. If the success rate is not high enough but Augmented Intelligence alone is not sufficient, and you still think there might be some methods to use it, what options are there for the tools to improve and get better?  Most people assume that AI and machine learning are synonymous, but like other IT myths (e.g. ‘Cloud servers are always redundant and fail-safe’) it is not necessarily true.  Most of the commodity AI tech I have examined lacks any ability to learn; someone has to custom-implement this.
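The measurement and thresholding points above can be sketched in code.  This is a hypothetical harness (the asset IDs, tag names and 90% threshold are all invented for illustration, not drawn from any real product): it measures an AI tagger’s accuracy against an independently verified sample and only auto-applies tags when that accuracy clears an agreed risk threshold, otherwise falling back to the advisory ‘Augmented Intelligence’ mode described in point 3.

```python
def tag_accuracy(ai_tags, verified_tags):
    """Fraction of AI-suggested tags confirmed by independent
    human review (a simple precision measure)."""
    if not ai_tags:
        return 0.0
    return len(set(ai_tags) & set(verified_tags)) / len(set(ai_tags))

def apply_tags(asset, suggestions, measured_accuracy, threshold=0.9):
    """Auto-apply tags only when independently measured accuracy
    clears the agreed risk threshold; otherwise store them as
    suggestions for a human cataloguer to review (advisory mode)."""
    if measured_accuracy >= threshold:
        asset["tags"] = suggestions
        asset["tag_source"] = "ai-automatic"
    else:
        asset["suggested_tags"] = suggestions
        asset["tag_source"] = "ai-advisory"
    return asset

# Hypothetical sample: what the AI proposed vs. what a human verified
measured = tag_accuracy(["beach", "sunset", "dog"], ["beach", "sunset"])
asset = apply_tags({"id": "IMG001"}, ["beach", "sunset"], measured)
print(measured, asset["tag_source"])
```

The point of the design is that the threshold is a deliberate, evidenced risk decision rather than a number plucked from thin air, and that falling below it degrades the tool to advisory mode instead of silently polluting the catalogue.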

One over-arching theme which seems to occur consistently with AI is how the success rate increases significantly the more specialised and focused the subject domain becomes.  Generic tools that try to be everything to everyone usually seem to produce lower quality results.  There is a lot of press about AI technology beating people at chess, learning languages etc. but often a fairly large team of human beings was also employed and the goals were highly specific and very well defined.

In the conventional (non-AI) software world, this activity would be given descriptions like ‘custom development’ or ‘professional services’.  There isn’t anything wrong with this, but these days most organisations have grasped that they need to be quite selective over how much of it they get involved in, due to the potential costs and risks of these kinds of projects (as measured against the value obtained from them).  To get results from AI that are good enough to allow you to replace human intelligence with an AI equivalent, the exercise will become like a custom development project.  This used to be de rigueur for DAM software until around 12-15 years ago and implementing effective AI will involve going back to that model, at least for a while.  If that isn’t something you can afford (or simply do not find palatable) then you may have to consider the more ‘augmented’ side of the spectrum, which makes the ROI case a bit less clear-cut than is currently being presented.

I believe there is still a great deal of potential for low-risk efficiency improvements and cost-savings with AI when it is combined with the digital asset supply chain techniques discussed, however, these approaches are far less groundbreaking or futuristic than many on the sell-side of DAM are currently prepared to admit.  AI tools are never going to be 100% effective.  The safest (and cheapest) way to use them is quite sparingly and for use-cases where they have provable value, so you don’t end up with a complicated and expensive mess.  Simply aligning and organising your systems and processes better will produce most of the benefits currently claimed for AI (and be essential anyway before you can get anything useful from it).  As with other aspects of DAM technology, finding out that there might be some up-front work to do before the benefits can be realised is not what many end-users will want to hear, but it is still true.  Those who tell you otherwise are either being disingenuous or may not have a lot of demonstrable experience of delivering Content DAM solutions.


Blockchains For Content DAM: From Myth To Reality

December 6, 2017 Digital Asset Management Value Chains

On our features section, I have published an article: Blockchain And Content DAM: Myth, Reality And Practical Applications. “Recently, I have read a few articles which make some critical remarks about blockchain for DAM.  There are some reasonable points advanced, however, there are also misunderstandings and myths about what blockchains are and how they might be applied […]


Brandworkz Announce Release 8.1

December 5, 2017 Vendors

DAM solutions provider Brandworkz have announced a new version of their cloud-based brand management software, which claims to focus on “providing better asset visibility and business intelligence”.  Although 8.1 is only a minor version upgrade, it looks as though they’ve undertaken some fairly extensive refactoring. The full list of new features is as follows: Full […]


IntelligenceBank Launch ‘Lightning-Fast’ Search

December 5, 2017 Vendors

Last week, IntelligenceBank announced the launch of its ‘Lightning-Fast’ search, as part of its 3.0 upgrade.  According to their CEO Tessa Court: “With this upgrade, our backend infrastructure is 15x faster than other platforms, and we have literally brought a desktop computing experience to the cloud” [Read More] We have covered this type of press […]


DAM News Vendor Research Toolkit Offer

November 28, 2017 Vendors

From today until 31st December 2017, we are offering a combined package of our vendor research report and pricing survey for the all-in price of $499.  This includes: Vendor features and benchmarking guide (13,500 words over 61 pages) Strategy selection report (20,000 words over 50 pages) 2016 vendor pricing survey (both original data in spreadsheet […]


DAM and AI Growing Pains: Ingestion or Indigestion?

November 24, 2017 Digital Asset Management

VP for Nuxeo, Uri Kogan, has recently published the penultimate article in his 7 Common Beliefs in the DAM Market series. For this post, Uri turns his focus to that ubiquitous herald of the new age, Artificial Intelligence (AI) and more specifically, its potential uses within Digital Asset Management. It’s undeniable that AI and machine-learning […]


5 Brand Maxims for Digital Marketers

November 21, 2017 DAM For Marketing

Anna Cotton, Head of Marketing for Brandworkz, has recently posted an article outlining a number of tips for raising and consolidating brand awareness across your business.  ‘How to get the most value out of your digital asset management platform’ provides a simple entrée into the benefits of adopting best-practice brand management across an organisation. The […]


New Member Of DAM News Writing Team

November 21, 2017 Industry News

I’m pleased to report today that we have a new staff writer joining us on the DAM News team.  My colleague from Daydream, Charles Russell will be taking over primary responsibility for the more news-related content so I can concentrate on feature articles and general editorial responsibilities.  All general editorial enquiries can continue to be […]


Insight Exchange Network (IEN) Digital Assets & Content Leadership Exchange​ – New York, January 22-24

November 14, 2017 Industry Events

Representatives from Insight Exchange Network (IEN) contacted me recently to ask us to inform DAM News readers that they are holding a DAM-related conference early next year on January 22-24 in New York entitled: Digital Assets and Content Leadership Exchange.  From the publicity materials: “As content velocity increases and the volume of digital assets grows […]


DAM News Digital Asset Manager Salary Survey – Request For Participation

November 6, 2017 Industry News

Longer-term DAM News readers might recall that the (now defunct) DAM Foundation used to run an annual salary survey where digital asset managers (i.e. those responsible for administering collections of digital assets) were invited to provide details of their remuneration package so that some analysis could be carried out into industry-wide trends.  I have been […]
