The Politics Of Implementing Digital Asset Supply Chains: Parking The Enterprise Service Bus

I am currently involved in writing requirements analyses and specifications for several Digital Asset Supply Chain (DASC) projects.  There are some interesting themes I have observed recently in relation to the politics of managing their implementation which might be instructive for others planning similar initiatives.

Before I get into the detail of the issues, I will set the frame of reference.  DASC solutions tend to have these characteristics:

  • They employ a service-oriented platform/architecture technology using microservices or SOA. For non-technical readers this means you can use a single feature/capability of one system (like searching, media processing etc) in isolation from all the rest.  Whether or not the vendor will sell just that part to you without bundling a whole load of other stuff you don’t want (or need) is a different question, but there should be no technical reasons why you cannot do this.
  • They involve multiple counterparty systems that send/receive digital assets to/from the DAM. These could be WCM (Web Content Management), PIM (Product Information Management) or creative tools like Photoshop etc plus a whole range of other custom solutions already used by the business.
  • They rely on a core centralised DAM service with an API that is responsible for coordinating activity and is used by everything (but rarely visible to most users).
  • They have multiple interfaces built using ‘API-first’ principles. This means everything users interact with runs through an API (Application Programming Interface).  One interface can be exchanged for another without having to replace the whole system.
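To make the ‘API-first’ point concrete, here is a minimal sketch (the `dam.example.com` host, endpoint path and parameters are all hypothetical, not any particular vendor’s API) of a search function that talks only to the DAM’s HTTP API.  Any interface in front of it can be swapped without touching the system behind it:

```python
import json
from urllib import parse, request

DAM_API = "https://dam.example.com/api/v1"  # hypothetical core DAM endpoint


def build_search_url(query, asset_type="image"):
    """Build the URL for the (hypothetical) DAM search endpoint."""
    params = parse.urlencode({"q": query, "type": asset_type})
    return f"{DAM_API}/search?{params}"


def search_assets(query, asset_type="image"):
    """Call the search capability directly over the API.

    A web portal, a Photoshop plugin and a batch script could all
    reuse this same call; none of them needs to know how search is
    implemented behind the API.
    """
    with request.urlopen(build_search_url(query, asset_type)) as resp:
        return json.loads(resp.read())
```

The point is only that the interface layer owns no logic: everything routes through the API, so replacing one front-end with another leaves the rest of the supply chain untouched.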

In DASC solutions, it is expected that user needs will change a lot (or ‘evolve’ in DASC marketing-speak) and some fashionable software development terminology like ‘agile’ etc often gets bandied around to describe the process models used for introducing/withdrawing components.

Rather than ditching a legacy system every 5-8 years and enduring the pain of migrating to a new one, the theory with DASC is that users can selectively replace a single part that isn’t fit for purpose any longer.  The implementation cycle is therefore shorter because only a single element is getting replaced and the existing services, interfaces etc remain untouched.

By contrast, conventional DAM products are like warehouses located in the middle of nowhere, so users have to travel to them in order to collect/deposit assets.  DASC solutions are more concerned with getting the required digital assets to/from their destination, hence the term ‘supply chain’.  Digital assets should flow to whoever needs them, in the same way that drinking water comes out of a tap in your house rather than you needing to carry a bucket to a well and fill it up yourself.

The trend towards Digital Asset Supply Chain solutions has been apparent for some time now.  In addition to the number of times it has been discussed in DAM News, anyone who has been involved in an enterprise DAM implementation in the last few years will have seen the signs which pointed towards it becoming reality.  You tend to hear it discussed far less by vendors because the implications are not wholly positive for them (a point I will return to) but end-users who have some existing in-house human digital asset managers are a lot more up to speed with them (as is often the case in DAM).

There are some underlying trends emerging from recent progress towards supply-chain oriented DAM solutions:

  • ‘Portal’ style DAM solutions are gradually being usurped by ‘DAM Lite’ offerings. Many ‘DAM Lite’ products are now functionally identical to ‘DAM Full’ circa 2012-2013.  There is price pressure at the bottom and middle end of the market and it will only get more intense.
  • What used to get called ‘Enterprise DAM’ is now also being passed over in favour of DASC solutions as clients with those sorts of requirements realise that building some monolithic ‘ball of mud’ which is out of date as soon as it has been implemented is essentially a waste of money. Most of the vendors who still promote these tools are now surviving on revenue from support and maintenance contracts rather than new business.
  • The service-oriented nature of platform solutions and the realisation that DAM vendors already have to integrate with lots of ancillary tools anyway (just to deliver a full quota of features like storage, transcoding, search etc) has led to clients beginning to ask why they can’t pick and choose what they want and use different suppliers on a best of breed or case-by-case basis.
  • Some end users prefer not to have this kind of responsibility left at their own door; however, there is also an increasing number of consulting firms (both independent ones and those who act as resellers/channel partners) that will step in and offer to do this on their behalf. So, there is now demand, supply and intermediaries who will connect the two in return for a slice of the action.
  • There are a lot of moving parts in DASC solutions. As well as other integrated solutions (like WCM etc), some DASC initiatives include multiple DAM vendors who are being asked to participate in the same project.  When I talk to vendors who have yet to be involved in DASC requirements, this is met with thinly disguised fear and horror.  My message to them is: get used to it, this isn’t going away.
  • The design of the Digital Asset Supply Chain (and the decisions that lie behind it) now sets the agenda far more than any specific technical capabilities.  The implication of this is that just because a vendor might be selected to build out the first version of a DASC does not mean they ‘own’ the client for an indefinite period.  The origins of this trend date back to the point at which enterprises decided cloud applications had become sufficiently low-risk to move away from requiring internal deployments.  The move towards digital asset supply chains is, therefore, a logical progression.

Fundamentally, outsourcing is about choice and having the flexibility to decide what battles you fight and which you assign to someone else.  Cloud delivery is effectively outsourcing over the internet so it should not be a surprise that end users want even more flexibility and choice about how they decide to organise it.

In the description of DASC characteristics given earlier, the ‘core service’ is where the nexus of power in DASC implementations lies.  This has a number of technical descriptions; the one I encounter most frequently is the ‘Enterprise Service Bus’ (ESB).

To clear up any potential misconceptions non-technical readers might have, this does not refer to a motorised form of mass-passenger transportation.  In IT circles, ‘bus’ generally means a hardware communications protocol that facilitates two devices being connected to each other.  In plainer English it means standardised sockets you can plug stuff into, for example memory extensions, graphics cards and the like.  An Enterprise Service Bus takes the same idea and applies it to microservice APIs.  Instead of hardware, the extensions are cloud software services for DAM systems (transcoding, metadata, searching, zipping large files etc).
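A toy illustration of that ‘sockets to plug stuff into’ idea (all service names here are invented for the example, not taken from any real ESB product): the bus is essentially a registry that routes requests to whichever provider currently owns a capability, so the provider behind ‘transcode’ or ‘search’ can be swapped without the callers noticing:

```python
class ServiceBus:
    """Minimal in-process stand-in for an enterprise service bus."""

    def __init__(self):
        self._services = {}

    def register(self, name, handler):
        """Plug a service into the bus (like a card into a socket)."""
        self._services[name] = handler

    def call(self, name, payload):
        """Route a request to whichever provider currently owns `name`."""
        if name not in self._services:
            raise KeyError(f"no service registered for '{name}'")
        return self._services[name](payload)


bus = ServiceBus()
bus.register("transcode", lambda p: f"transcoded {p['asset']} to {p['format']}")
bus.register("search", lambda p: [a for a in p["assets"] if p["q"] in a])

# Callers only ever address the bus, never a vendor directly.
result = bus.call("transcode", {"asset": "logo.psd", "format": "png"})
```

This also hints at the political problem discussed below: every individual service is trivially replaceable via `register()`, but whoever operates the `ServiceBus` itself sits in the middle of every call.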

In theory, ESBs provide standardisation and a universal method for achieving interoperability across digital asset supply chains.  There are two big problems, however.  The first is that there are currently no defined interoperability standards in DAM.  The second problem (closely related to the first) is that DASC solution providers quickly grasp that whoever controls the ESB has the dominant position and, unlike everyone else (whose role is designed to be easily replaced), they are far harder to remove should they prove unresponsive or obstructive.  To coin a phrase, they can employ a ‘park the bus’ tactic and slow down the whole pace of innovation and continuous improvement across an enterprise’s digital asset supply chain if it suits them to do so.

Some cloud vendors (inside DAM and elsewhere) might be dismayed to discover that their natural adversaries, in-house IT departments, often see ESBs as a means to re-assert their influence in an era when capital expenditure on IT budgets has been in steep decline.  It should be noted that those same IT departments also point to the leverage that whoever owns the ESB has over the business and assert that it is preferable for an in-house entity to retain it rather than an external one.  Who is right?  I would contend that neither of them is.  This was never the point of introducing innovations like SOA, microservices and agile methods.  The whole idea was to break down power structures, increase competition and deliver better value for money for users as a result.

Is there anything that can be done?  There is always going to be a trade-off with any environment that depends on trust (which is essentially what centralisation represents).  Although the effects are reduced with cloud solutions when compared with on-premise options, they still share some of the same issues.  A lot of their benefits are implicit and result from the fact that they need to be open in order to support multiple clients.  While they can’t operate closed networks without invalidating their business model, they can operate closed protocols, which has the same effect for anything where interoperability is essential (as it is in DASC solutions).

Taken collectively, the DAM industry is a bit like a plot line from a George Romero film: the protagonists fear each other as much as they do the flesh-eating undead lumbering around outside wherever they happen to be holed up, and often end up succumbing to them as a result.  In theory, some kind of delegated or distributed model where different parts of the ESB are held by separate providers could alleviate this trust problem; however, the costs of implementing something like that are generally prohibitive for a use case like DAM.

An obvious and readily available method of providing a protocol that is trustless by design is a blockchain.  I have covered this at some length in an earlier article.  At this point in time, few in the Content DAM industry think this is relevant to them (with a few exceptions).  I expect more people to eventually come to the same realisations about the limitations of ESBs and cloud implementations of digital asset supply chains as these highly political ‘vendor management’ problems become more prevalent and thornier to contend with.

If you are planning a DASC solution and an ESB is involved (which it probably will be, even if you are not necessarily told about it), some points to keep in mind for a risk management plan include:

  • Who controls the ESB?
  • If you need to replace whoever manages it now, what will the business impact be?
  • Is the ESB core open source or proprietary?
  • What options exist for delegating the ESB across different service providers but without losing the benefits of having a core DAM service?
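One practical mitigation for the last two questions is to keep a thin abstraction of your own between business logic and the ESB, so the bus (or whoever manages it) can be replaced behind a stable interface.  A hedged sketch, with invented class and vendor names:

```python
from abc import ABC, abstractmethod


class AssetBus(ABC):
    """The interface your code depends on -- not any vendor's ESB API."""

    @abstractmethod
    def send(self, service: str, payload: dict) -> dict: ...


class VendorABus(AssetBus):
    """Adapter for the incumbent ESB provider (hypothetical)."""

    def send(self, service, payload):
        # In reality this would call the vendor's SDK or HTTP API.
        return {"via": "vendor-a", "service": service, "payload": payload}


class VendorBBus(AssetBus):
    """Drop-in replacement should the incumbent park the bus."""

    def send(self, service, payload):
        return {"via": "vendor-b", "service": service, "payload": payload}


def publish_asset(bus: AssetBus, asset_id: str) -> dict:
    """Business logic sees only AssetBus, so swapping providers is a
    configuration change rather than a rewrite."""
    return bus.send("publish", {"asset_id": asset_id})
```

This does not remove the leverage of whoever runs the bus, but it caps the business impact of replacing them, which is exactly what the second question above is probing.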

At this point, my expectation is that many enterprise DAM users haven’t yet got to the point where these risks are on their radar; however, based on the kind of work I am doing right now (at least on the implementation management side), they soon will be.  Forewarned is forearmed, as the saying goes.
