
Understanding The AI ‘80% Problem’ And Deciding Between Artificial Or Augmented Intelligence

by Ralph Windsor on December 15, 2017

Recently, CMSWire published an article I wrote for them: What it Will Take for Artificial Intelligence to Become Useful For DAM.  This was an abridged edition of a longer feature article for DAM News: Combining AI With Digital Asset Supply Chain Management Techniques.  The responses I have received or read have been generally favourable, although some people thought the piece was critical of AI (and related technologies).  The point was less to criticise and more about how to get AI to deliver tangible results for DAM (i.e. ROI), rather than it remaining a gimmicky toy that users disable because the results are not entirely trustworthy (which is what happens with many of the corporate and public sector DAM users I deal with).  There must be a far higher burden of proof that AI can deliver ROI, and some risk management practices are needed to ensure that casually tossed-around statistics, aphorisms and pithy quotes are not used as a weak substitute for a solid business case.

When it comes to practical applications of AI to DAM (and this applies to all the aspects of it, not just image recognition) one statistic I frequently read is that it is ‘80% effective’.  This is an example of how to bend the truth with superficially hard numbers which tend to become rather softer once you prod them around a bit.

To illustrate this, consider the following two different scenarios.  If you scored 80% in a maths exam, that sounds like a great result, one which would place you close to the top grade in many educational institutions.  Now let’s take an alternative case.  You switch on your computer and it doesn’t work; you try again and the next time it does.  For the rest of the week, it works perfectly without any problems.  The following week, it doesn’t work again; you switch it on once more and it still doesn’t start, and you repeat this two more times before finally giving up and calling an engineer for assistance.  They try it and find no problem; it works perfectly for them (as is the unwritten law of the universe when it comes to reporting or investigating technical faults).  It continues to work exactly as it should on eight more occasions, then it stops working.  You try again and now it’s fine.  The following day it doesn’t work; you start it again and it does.  For two more attempts it doesn’t switch on, then for the next two it does, followed by six more when it works, and then it stops again.  You declare the computer ‘faulty’ and decide to use a different one because of its ‘reliability issues’.

The above is also an 80% success rate.  What would get you an ‘A’ in your maths exam translates into extreme frustration and irritation with a piece of equipment scoring the same.  This is why these kinds of statistics require a great deal of careful handling.  As anyone with a project management (or operations management) background will understand, in reality ‘80% success’ really means a risk of an unfavourable outcome 20% of the time.  In practical terms, initiatives need to be managed with risk as the top consideration, because the consequences of failure can be devastating and undo all the investment, time and effort accrued so far.  A 20% risk is quite large (or ‘expensive’, to use an alternative term).  If you had a financial investment yielding that kind of return in 2017, the kind of question you might reasonably ask is ‘might the counterparty be about to go bankrupt?’.  Percentages (like any other form of metadata) require context to assess their true value.  The expression ‘lies, damned lies and statistics’ exists for a very good reason.
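To put a number on the computer scenario above, here is a minimal sketch (my own illustration, not from the original article): if each boot attempt succeeds independently with probability 0.8, a failure-free five-boot working week is the exception rather than the rule.

```python
# Illustrative sketch of what '80% effective' means in repeated use,
# assuming each attempt succeeds independently with probability 0.8.

p_success = 0.80
attempts_per_week = 5  # one boot per working day (assumed for illustration)

# Probability that a whole week passes with no failure at all:
week_ok = p_success ** attempts_per_week
print(f"Chance of a failure-free week: {week_ok:.1%}")  # ~32.8%

# Expected number of failures over 100 uses:
expected_failures = (1 - p_success) * 100
print(f"Expected failures per 100 uses: {expected_failures:.0f}")  # 20
```

In other words, roughly two weeks out of three would contain at least one failure, which is exactly the ‘reliability issue’ experience described above, despite the headline 80% figure.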

There are several points to emerge from the previous discussion which anyone thinking about applying AI should consider:

  1. Is there any real quantitative data about the success of a given AI technology?  An observation I have made before is that one definition of Artificial Intelligence is ‘computer software that you cannot test properly’.  Before an AI tool is used, some real data is required showing what results it achieves in practice (i.e. data that is used to drive business decisions).  This needs to be collected and assessed independently, not by the people who are selling the technology (for all the obvious reasons).
  2. A risk assessment is required to establish what threshold is considered acceptable.  This needs to be more than a few people sitting around a table plucking numbers from thin air (‘oh, about 90% should be OK’ etc); there must be facts and figures to support them.  The topic of risk management is outside the scope of this article, but whoever does this analysis needs to have had some training in it (and real experience of dealing with it on actual implementation projects).  There is a correlation with quality management here also (as most project managers will be aware): if the target quality is lower, the risk is diminished, but you need to know exactly why a given quality level has been chosen.  For example, one of the use-cases I have seen for AI and DAM is tagging user-generated content such as photo competition entries.  This is reasonable, but are there boundaries in place to prevent this material being downloaded and used for other projects (e.g. marketing campaigns)?
  3. Below a certain threshold, AI tech should only be used in an advisory capacity (i.e. suggesting descriptions, keywords etc) or possibly as a secondary search corpus when no results are found.  In this mode, AI is less ‘Artificial Intelligence’ and more ‘Augmented Intelligence’, i.e. it can provide a potentially useful fresh perspective, but you wouldn’t trust it exclusively.  Decide early on whether what you actually want to achieve is the latter interpretation of AI; this will be less risky to implement, but deliver a correspondingly lower ROI.
  4. If the success rate is not high enough but Augmented Intelligence is not sufficient and you still think there might be some way to use the technology, what options are there for the tools to improve?  Most people assume that AI and machine learning are synonymous, but like other IT myths (e.g. ‘Cloud servers are always redundant and fail-safe’) this is not necessarily true.  Most of the commodity AI tech I have examined lacks any ability to learn; someone has to custom-implement this.
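The advisory mode in point 3 can be sketched as a simple confidence-gating routine.  This is a hypothetical illustration, not any vendor’s actual API: the `route_tags` function, the tag/confidence pairs and the 0.9 threshold are all assumptions of mine, and in a real project the threshold should come out of the risk assessment described in point 2, not be plucked from thin air.

```python
# Hypothetical sketch: gating AI-generated tags by confidence score.
# Tags above the threshold are applied as metadata; the rest are only
# offered to a human cataloguer as suggestions ('Augmented Intelligence').

def route_tags(ai_tags, auto_apply_threshold=0.9):
    """Split (tag, confidence) pairs into auto-applied tags and
    human-review suggestions."""
    applied, suggested = [], []
    for tag, confidence in ai_tags:
        if confidence >= auto_apply_threshold:
            applied.append(tag)    # trusted enough to write as metadata
        else:
            suggested.append(tag)  # shown to a cataloguer as a hint only
    return applied, suggested

# Example: only the high-confidence tag is applied automatically.
applied, suggested = route_tags(
    [("beach", 0.97), ("sunset", 0.85), ("people", 0.40)]
)
print(applied)    # ['beach']
print(suggested)  # ['sunset', 'people']
```

The design point is that lowering the threshold increases apparent automation but transfers risk onto downstream users of the metadata, which is exactly the trade-off the risk assessment needs to quantify.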

One over-arching theme which seems to occur consistently with AI is that the success rate increases significantly the more specialised and focused the subject domain becomes.  Generic tools that try to be everything to everyone usually produce lower quality results.  There is a lot of press about AI technology beating people at chess, learning languages etc, but often a fairly large team of human beings was also employed and the goals were highly specific and very well defined.

In the conventional (non-AI) software world, this activity would be given descriptions like ‘custom development’ or ‘professional services’.  There isn’t anything wrong with this, but these days most organisations have grasped that they need to be quite selective about how much of it they get involved in, due to the potential costs and risks of these kinds of projects (as measured against the value obtained from them).  To get results from AI that are good enough to allow you to replace human intelligence with an AI equivalent, the exercise will become like a custom development project.  This used to be de rigueur for DAM software until around 12-15 years ago, and implementing effective AI will involve going back to that model, at least for a while.  If that isn’t something you can afford (or simply do not find palatable) then you may have to consider the more ‘augmented’ side of the spectrum, which makes the ROI case a bit less clear-cut than is currently being presented.

I believe there is still a great deal of potential for low-risk efficiency improvements and cost-savings with AI when it is combined with the digital asset supply chain techniques discussed; however, these approaches are far less groundbreaking or futuristic than many on the sell-side of DAM are currently prepared to admit.  AI tools are never going to be 100% effective.  The safest (and cheapest) way to use them is quite sparingly and for use-cases where they have provable value, so you don’t end up with a complicated and expensive mess.  Simply aligning and organising your systems and processes better will produce most of the benefits currently claimed for AI (and be essential anyway before you can get anything useful from it).  As with other aspects of DAM technology, finding out that there might be some up-front work to do before the benefits can be realised is not what many end-users will want to hear, but it is still true.  Those who tell you otherwise are either being disingenuous or may not have a lot of demonstrable experience of delivering Content DAM solutions.



Andrew Lomas January 3, 2018 at 10:49 pm

Hi Ralph

Another great commentary piece and I appreciate and side with you on the Augmented Intelligence use case when it comes to practical integrations with AI platforms and DAM.

One area I see having some benefit using AI (for tagging assets) is in initial data migration from a legacy folder-based system to a DAM.  Often there is a big mental and time-consuming barrier to getting started with DAM, and using AI services such as Clarifai can speed up the process.  Admittedly, it won’t apply business taxonomy to the assets, but it’s at least a start, with some serviceability via search becoming available immediately without the need for extensive manual metadata tagging.

I’d be interested to know more about how you have approached mass migration from file based systems and if you have any tips or insights as to how to best overcome the objections using process and tools.

