The Role Of Taxonomy Governance In DAM Interoperability Initiatives
Over the last few years of working with enterprise clients, I have seen a common theme emerge in relation to their corporate taxonomies and the governance strategies applied to them (or, as I will explain, the lack of both). The catalyst has often been integration between their DAM solution and another system used elsewhere in the business.
The background to most scenarios is usually this: the application owners have read (and usually implicitly understand anyway) that having a ‘silo’ DAM disconnected from the rest of the business limits the ROI they can obtain from it, forcing cost duplication in the form of staff time spent manually inputting data twice or paying for features they already have in other products. The obvious answer is an interoperability initiative, either a one-off integration or a wider exercise covering many systems. These activities tend to have four stages, which can be roughly summarised as: fine words, complex decisions, hard work and difficult compromises. This is why many never get much further than the fine words. As I will discuss, if some strategic metadata management decisions are made in advance, the delivery process can be somewhat less painful and complex. To explain why, it is necessary to understand what happens once the strategic vision has to be transformed into implementation tactics.
Firstly, some definitions: ‘interoperability’ tends to refer to the theoretical aspect of the exercise; the practical side is what most technical personnel call ‘application integration’, or just ‘integration’. One issue that will nearly always present itself during integration implementation is what software and data people tend to call ‘mapping decisions’ (or similar terminology). These usually come into play with fields that are sourced from a pre-defined list, such as a taxonomy or other controlled vocabulary. In simple terms, one system refers to a category or class of data as ‘x’ whereas the counterparty refers to it as ‘y’. Human beings can usually quickly grasp that the two terms are synonymous, i.e. the same thing, but software needs to be explicitly told that x = y. It is usually easy enough to work out rules for this, which developers can then write code to automate.
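To make the idea concrete, below is a minimal sketch of such a mapping rule in Python. The field names and vocabulary terms are invented for illustration; a real integration would derive these tables from the two systems’ actual schemas.

```python
# A minimal sketch of a value-mapping rule between two systems.
# Field names and vocabulary terms are invented for illustration.

# System A calls the field 'asset_type'; System B calls it 'media_category'.
FIELD_MAP = {"asset_type": "media_category"}

# System A's term on the left, System B's equivalent on the right (x = y).
VALUE_MAP = {
    "Photograph": "Image",
    "Illustration": "Image",
    "Footage": "Video",
}

def map_record(record: dict) -> dict:
    """Translate a System A metadata record into System B's terms."""
    mapped = {}
    for field, value in record.items():
        target_field = FIELD_MAP.get(field, field)
        mapped[target_field] = VALUE_MAP.get(value, value)
    return mapped

print(map_record({"asset_type": "Photograph"}))
# -> {'media_category': 'Image'}
```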
A snag occurs when there is not an equal number of values for the same entity across both systems. If the party receiving the metadata has fewer choices than the one sending it, this is where the ‘difficult compromises’ stage of the exercise can come to the fore. To explain: if System A has a list of colours comprising red, green, blue, yellow and purple, but System B (which will receive the data) only has red, green and blue, then what do we do with yellow and purple? One option is to add the missing items to System B, but this can be where metadata theory runs into the brick wall of IT feasibility. If System B lacks any facility to update the particular item of metadata we are interested in, then changes will have to be made to that solution. Making these might introduce other issues, e.g. having to modify training materials for System B, verifying that users don’t start making inappropriate colour choices, reports becoming more complex than they used to be, unexpected bugs etc. The implications might be inconsequential or significant; it is hard to assess them in advance.
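The colour example translates into code as follows. This is a hedged sketch: System B’s vocabulary, the ‘other’ fallback term and the strict flag are all assumptions introduced here to show two possible compromises, i.e. mapping unsupported terms to a catch-all value versus surfacing the gap for a human to resolve.

```python
# The 'unequal vocabularies' snag from the colour example above.
# System B only understands red, green and blue; yellow and purple
# have no equivalent, so the integration must make a policy choice.

SYSTEM_B_COLOURS = {"red", "green", "blue"}
FALLBACK = "other"  # hypothetical catch-all term System B would need added

def map_colour(colour: str, strict: bool = False) -> str:
    """Map a System A colour into System B's smaller vocabulary."""
    if colour in SYSTEM_B_COLOURS:
        return colour
    if strict:
        # Surface the gap instead of silently losing information.
        raise ValueError(f"no System B equivalent for {colour!r}")
    return FALLBACK

for colour in ["blue", "yellow", "purple"]:
    print(colour, "->", map_colour(colour))
# blue -> blue, yellow -> other, purple -> other
```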
The effects are analogous to the Zen proverb about throwing rocks into a pool of water: each one generates ripples, and attempting to suppress them creates even more. Integration exercises create a form of chaos that must iteratively be brought back into order, and this can be more demanding than many organisations have bargained for. With some justification, those responsible for maintaining counterparty systems (especially legacy applications) get nervous when integration projects come their way, because they anticipate them becoming a destabilising influence that will require a lot of effort and cost to manage. This is one of the biggest drags preventing interoperability initiatives from becoming reality. Can you avoid these problems? If so, how? While you cannot escape them completely, many issues (especially those relating to DAM) stem from the absence of enterprise-wide taxonomy governance procedures which are adopted and actively used by the whole business.
To offer a real-world example of how this problem can manifest itself, consider the following scenario. Many larger organisations are composed of multiple business units (e.g. ‘group’ companies). A commonplace issue is widespread disagreement among them about how many business units exist, what they are called, who has responsibility for creating them (or at least the metadata which represents them) and what hierarchical relationships exist between them. Often, application owners will introduce systems and then either they (or their users) will create their own business units with unofficial names. Alternatively, they might refer to another business unit using terminology that is out of date or just plain wrong. Completely failing to acknowledge that a given business unit even exists is a further issue, especially if it happens to be in a different geographical region or a recent acquisition. In short: everything tends to be biased towards the prejudices and preconceptions of whoever has paid the bill for a given IT system. Anyone who works for a larger organisation (or who has them as clients) will be well aware of what I am describing; this happens the whole world over.
I have come across a very small number of firms that have understood this issue and are developing their own corporate taxonomy governance programmes, where key entities like business units and product/service names are all formalised, including a management process for approving updates. The two key issues they seem to encounter are a lack of adoption (stemming from a lack of awareness that the taxonomy even exists) and the rules not being adhered to, with application owners making their own ad-hoc extensions or adjustments. I suspect that as interoperability becomes more in demand and the cost implications of deviating from approved vocabularies become more apparent, these problems will diminish (albeit the process will be slow and arduous). In a follow-up article, I plan to discuss the topic of taxonomy adoption; it must be said that this can be even harder to achieve than DAM adoption, especially because there is rarely a tangible product that prospective users (aka taxonomy consumers) will directly experience.
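As a purely hypothetical illustration of what such a programme might formalise, the sketch below shows a governed business-unit vocabulary with canonical names, known synonyms (covering the unofficial and out-of-date names described earlier), an explicit hierarchy and an approval status. Every name, code and status in it is invented.

```python
# A hypothetical governed business-unit vocabulary. All names, codes
# and statuses are invented for illustration only.

BUSINESS_UNITS = [
    {
        "code": "BU-EMEA-01",            # stable identifier systems integrate on
        "canonical_name": "Consumer Products EMEA",
        "synonyms": ["CP Europe", "EMEA Consumer"],  # unofficial/outdated names
        "parent_code": None,             # hierarchy is explicit, not assumed
        "status": "approved",            # updates go through the approval process
    },
]

def resolve_business_unit(name: str) -> dict | None:
    """Resolve any known name, official or not, to its canonical entry."""
    for bu in BUSINESS_UNITS:
        if name == bu["canonical_name"] or name in bu["synonyms"]:
            return bu
    return None  # unrecognised: a candidate for the approval workflow

print(resolve_business_unit("CP Europe")["canonical_name"])
# -> Consumer Products EMEA
```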
With that said, if you can overcome these hurdles, having an enterprise taxonomy with a defined process for managing this kind of data (the ‘governance’ part) can save a huge amount of investment capital when it comes to implementing interoperability, in particular for Digital Asset Management, but across a lot of other application classes too. Not only is there a cost-saving, but also an opportunity to gain greater insight into activity across the business at a more forensic level of detail (which can be audited and analysed). For sure, the hard work aspect of delivering reliable application integration won’t go away, but the complexity and difficult compromises are both likely to reduce if there is an existing internal schema which everyone knows about, uses and adheres to.
If you are contemplating a new DAM initiative or considering interoperability options for your current one, first looking at your organisation-wide metadata governance procedures and seeking to connect more applications into that activity could be a worthwhile exercise, and one that might reduce both cost and complexity.
Further reading
I consulted a few different resources to write this article and recommend some others for readers who are interested in the subject; all are listed below:
- Establishing a Taxonomy Governance Team (Strategic Content)
- Introduction to Taxonomy Governance (Strategic Content)
- Taxonomy Governance (Heather Hedden, The Accidental Taxonomist blog)
- Inform, Transform & Outperform: Digital Content Strategies To Optimize Your Business For Growth (John Horodyski)
- Metadata for Content Management: Designing taxonomy, metadata, policy and workflow to make digital content systems better for users (David Diamond)
Comments
My feeling is this message is starting to get through. We are seeing an increasing number of enquiries from prospects who want to know if our product integrates with third-party taxonomy solutions (such as Synaptica), indicating that they see taxonomy planning as something bigger than just their DAM application.
We have long seen requirements from clients to integrate with internal PIM (Product Information Management) applications, as this is information that changes often and where discrepancies or inaccuracies can have serious consequences.
Once we’re all using microservices-based architectures ;-) all this becomes easy. Instead of viewing a DAM application as a discrete entity that may or may not have its own way of dealing with taxonomies, we see applications as user interfaces that support specific business processes, making use of multiple enterprise microservices – one for the taxonomy, one for PIM, a few for DAM functionality, etc.
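To illustrate the commenter’s point, here is a hedged sketch of what that might look like in practice: the DAM front end does not own the vocabulary but fetches it from a shared taxonomy service. The endpoint URL and response shape are entirely assumed for illustration.

```python
# A hypothetical client for a shared enterprise taxonomy microservice.
# The service URL and JSON shape are invented for illustration.
import json
from urllib.request import urlopen

TAXONOMY_SERVICE = "https://taxonomy.example.com/v1"

def fetch_vocabulary(name: str) -> list[str]:
    """Fetch a controlled vocabulary from the shared taxonomy service."""
    with urlopen(f"{TAXONOMY_SERVICE}/vocabularies/{name}/terms") as resp:
        return json.load(resp)["terms"]

# The DAM UI, the PIM and any other application all read the same
# vocabulary from one place, so mapping decisions largely disappear.
business_units = fetch_vocabulary("business-units")
```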