
The Digital Asset Transaction Management System – A Time Machine For Digital Assets

This feature article was provided by Ralph Windsor, DAM News editor and is part of our Improving DAM In 2017 series.

This article is a narrative specification for a feature that allows each interaction with a digital asset to be dissected and rolled backwards to any point in time. A technical description would be a digital asset transaction management system. For readers who prefer more poetic terminology (and for the benefit of anyone who may not have understood what all this is about) an alternative title might be a ‘Digital Asset Time Machine’. Unfortunately, the first iteration only goes into the past; however, the data collected along the way might offer some potential for predicting what could happen to digital assets in the future.

Intended Audience
Readers who are not versed in the finer details of DAM software implementation might not comprehend all of the technical discussion points in this article; however, it should be possible for most to understand the benefits offered by the model (especially those who have experience of using different DAM solutions).

What Is This?
The method described is an approach to logging interactions with digital assets in a highly organised and granular fashion. By doing so, it is possible to locate what was done to a digital asset at a given point in time, roll it back, remove a given operation, apply another one or copy the same set of steps to a different asset (or group thereof).
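
To make this more concrete, below is a minimal sketch (in Python, with entirely illustrative field names; the article does not prescribe any particular schema) of what one logged interaction might look like:

    # A hypothetical record of a single interaction with a digital asset.
    # All field names are illustrative assumptions, not a standard schema.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class Transaction:
        transaction_id: str   # unique identifier for this interaction
        user_id: str          # who initiated it
        timestamp: datetime   # when it occurred
        asset_ids: list       # every asset touched by the interaction
        operations: list = field(default_factory=list)  # ordered API calls it was composed of

    tx = Transaction(
        transaction_id="tx-0001",
        user_id="jsmith",
        timestamp=datetime.now(timezone.utc),
        asset_ids=["asset-42"],
        operations=[{"call": "update_metadata",
                     "params": {"field": "caption", "value": "Board meeting"}}],
    )
    print(tx.transaction_id, tx.operations[0]["call"])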

An approximate (but incomplete) analogy would be a film strip, where the state of a digital asset at a given point in time is like a frame (or series thereof) from a moving sequence which might get cut, edited, re-arranged or modified. Another, more contemporary comparison might be the ‘history’ feature of applications like Photoshop, where it is possible to revert to a given state and then re-apply another set of actions. Neither is entirely satisfactory as an explanation of the required behaviour; if this method is fully realised, any point in the lifecycle of an asset held within a DAM can be located, manipulated and modified.

As alluded to earlier, readers with some technical experience might see some similarities between this functionality and the transaction log provided by some database engines. The main difference is that this activity occurs at a higher level, in the application itself, not the database layer. This necessitates some other components being present in the DAM software to make the functionality possible and I will describe those later.

Isn’t This Just Version Control?
I have seen some systems which have aspects of what I will describe, but to date, never a complete implementation and certainly not one which fully integrates all the components in a cohesive manner. A number of developers of DAM systems might point to the metadata or binary essence (file) version control features their application has, but the required functionality is more sophisticated than that and requires a more rigorous approach to both architecting and developing a solution. Not only must each modification be recorded, but it has to be linked to a log of events, each of which has to be composed of API operations that can be repeated (and potentially adjusted if the user wishes).

Why Would Anyone Need This?
The key benefit of this model is the ability to step backwards to any operation carried out on a digital asset in the past, identify who did it, what actions were applied and which other assets (if any) were also affected, and then change some aspect of the process used. The event timeline of a digital asset can therefore be edited in a non-linear fashion. Unlike the earlier film strip example, however, what the user sees is the final result or the end-frame of each digital asset (or an interim one in isolation, if that is what is required). In other words, you should be able to step back to a given point in the asset’s history, make a change, re-run all the subsequent events and view the results in real-time. Below are some of the more common use-cases.

Safe Revert Of Batch Operations
If an erroneous batch update operation was carried out, using this model it is possible not only to revert the modification to a single asset where the initial issue was located, but also to see every other asset that was similarly affected, roll back just that modification from all of them and then re-run all other adjustments carried out afterwards (without needing to remember what they were). With most conventional DAM systems that have version control, reverts are possible, but all the later updates are lost, and unless all the assets were tagged with a unique identifier for each batch operation it is not typically possible to find out what else was affected by a given operation. This method is non-destructive (but as I will describe in the risks section, it might not always be without unforeseen consequences).
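
The following is a minimal sketch of how such a safe revert might work, assuming a hypothetical chronological log and a naive replay function (nothing here reflects any real product’s API): the offending transaction is skipped rather than deleted, and every later change is re-applied automatically.

    def replay(asset_id, log, skip_tx=None):
        """Rebuild an asset's state from its full transaction history."""
        state = {}
        for tx in log:  # log is in chronological order
            if tx["id"] == skip_tx or asset_id not in tx["assets"]:
                continue
            state.update(tx["changes"])  # apply this transaction's changes
        return state

    log = [
        {"id": "tx-1", "assets": ["a1", "a2"], "changes": {"caption": "Q1 shoot"}},
        {"id": "tx-2", "assets": ["a1", "a2"], "changes": {"caption": "WRONG"}},  # erroneous batch update
        {"id": "tx-3", "assets": ["a1"], "changes": {"rating": 5}},  # later, legitimate edit
    ]

    # Find every asset the erroneous batch operation touched, then rebuild each
    # one with just that operation removed; subsequent edits are preserved.
    affected = [a for tx in log if tx["id"] == "tx-2" for a in tx["assets"]]
    for asset in affected:
        print(asset, replay(asset, log, skip_tx="tx-2"))
    # a1 {'caption': 'Q1 shoot', 'rating': 5}
    # a2 {'caption': 'Q1 shoot'}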

Macros & Scripted Actions
Some DAM solutions have scripting facilities where users can compose sequences of operations and re-run them in a standardised manner, but either these require some technical expertise or the operations are restrictive and lack flexibility. DAM solutions with features like the macros in spreadsheets or the actions in Photoshop are less common, and basic tasks of the kind which are relatively easy to automate in those applications usually require development expertise in a DAM.

Using the model described, a macro-recording (and potentially editing) facility for managing digital assets would be easier to implement, and this makes it possible for end users to extend their DAM solution without requiring professional services or bespoke customisation. To support this, each user operation would be recorded in the general transaction log and the steps would then be abstracted and stored in a list of actions. For the benefit of non-technical users, ‘abstracted’ means that they could be applied to any asset or group thereof, not just the ones in the original operation where the recording was made. This kind of task is quite easy to do in Excel or Photoshop (and other applications) but it is a lot more demanding for DAM users, even though some (e.g. Digital Asset Managers) are just as likely to need this kind of sophistication (and probably more so, given how much of their work revolves around data management tasks).
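
As a sketch of the ‘abstraction’ step, here is one hypothetical way the recorded steps could be separated from the concrete assets they were first applied to, so the same macro can be replayed against any other assets (all names are assumptions for illustration):

    # A recording ties operations to the assets used while recording; the
    # macro keeps only the repeatable steps.
    recorded = {
        "assets": ["asset-42"],
        "operations": [
            {"call": "set_field", "params": {"field": "department", "value": "Marketing"}},
            {"call": "add_keyword", "params": {"keyword": "approved"}},
        ],
    }

    def abstract_macro(transaction):
        """Strip concrete asset references, keeping only the repeatable steps."""
        return list(transaction["operations"])

    def run_macro(macro, asset_ids, api):
        for asset_id in asset_ids:
            for op in macro:
                api(asset_id, op["call"], op["params"])

    # Replay the recorded steps against assets that were never part of the recording.
    macro = abstract_macro(recorded)
    run_macro(macro, ["asset-7", "asset-8"], api=lambda a, call, p: print(a, call, p))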

User/Asset Audit & Action Associations
Many DAM systems now include a full audit trail which records every action carried out by a given user. This is at an application level and includes activities like logins, uploading, editing, searching etc. On a number of systems, it is also possible to change the focal point the audit is based on to another entity, for example using an asset, metadata category or workflow as the basis for viewing events. The auditing activity, however, is normally read-only, i.e. the administrator can see who did what and when, but cannot actually change anything that happened. A transactional model where audited events can be initiated as well as reviewed (i.e. write, not just read) would make this possible. Making audits actionable (as macros) in addition to viewable opens up a number of potentially interesting functional possibilities. One example would be to see if a given user had carried out adjustments to assets of a similar type (e.g. finding/replacing some text or re-classifying assets using another field). If the user in question had misunderstood a given requirement, it would be simpler to find instances where this occurred, adjust just that one misinterpretation across a range of assets and still keep the changes they had carried out afterwards. To take the opposite case, if one user had developed a particular method for treating digital assets (e.g. adding a key item of metadata) their modifications could be applied to other assets supplied by different users who had not followed the same process.

Machine Learning & Automation
I remain a sceptic about the true effectiveness of Machine Learning with complex problem domains (such as those encountered in DAM); however, the significant factor which has made this field more tenable recently is easy access to far larger volumes of user data than was possible in the past. This is also likely to be the key driver in any future developments that result. While I assert that Machine Learning is an oxymoron, automation is an industrial process model which has been in progress for over two hundred years and certainly does have a history of tangible and proven results. In order to automate more, whatever machine you are designing needs to be able to differentiate between objects at progressively finer levels of detail so the flexibility of the device in question can be enhanced. As such, a transaction model that records absolutely everything and allows sequences to be repeated and changed clearly offers a lot of scope for some very powerful and innovative automation and machine learning opportunities, especially when applied to digital assets.

The Implications For New Kinds Of Digital Assets
The use-cases described apply to the digital assets that most current DAM software users are familiar with, i.e. files containing fixed data like photos, videos, documents etc. that are required for a particular task. The entities referred to in current DAM systems are usually what might be regarded as ‘content digital assets’. While that describes most digital assets currently found in DAM systems, it fails to acknowledge that the scope of digital assets is widening to accommodate a far more diverse range of use-cases than has previously been the case. A more all-encompassing definition of digital assets is that they are collections of related data which can both be uniquely identified and have a potential value (even if you do not know for whom, nor how much they might be willing to pay). Based on this definition, it is possible to define a wide range of digital assets applicable in many different contexts (far more than the ones currently found in most DAM solutions). Here are some examples:

Internet of Things (IoT) Digital Assets
The data collected from physical objects is a digital representation of their current state: a model, in other words. Unlike content digital assets, which are mostly static, IoT entities are far more dynamic and therefore generate a potentially far greater quantity of data. IoT digital assets are analogous to the black box recorders used in aircraft: they encapsulate the entire history of what happened to the associated device and as such offer considerable potential value.

Virtual Reality (VR) Digital Assets
In many ways the same set of benefits from IoT applies to VR digital assets: the state of a virtual entity can be captured and stored at any point in time. Unlike IoT, where there is the necessity to source or build suitable physical data capture facilities (and a risk that either these will fail or insufficient data will be collected), VR digital assets are natively digital and it is therefore theoretically possible to collect all the data associated with them.

Big Data Digital Assets
Big Data digital assets represent each of the major entities which Big Data technologies need to analyse. In the context of this article, Big Data allows any of the previous representational states of these entities to be analysed and models built which allow hypothetical scenarios to be tested more easily. Each entity is therefore a digital asset since it contributes to the wider asset value of all the others (the ‘whole being greater than the sum of the parts’ argument).

New Digital Asset Types And Their Relationship With A Digital Asset Transaction Manager
If there is a generic transaction model for recording the state of these new digital assets then it becomes possible to see what state they were in at a given point in time, roll backwards and test the result if different parameters were supplied on a subsequent operation. This offers considerable scope for modelling the physical or virtual world, in addition to any new digital assets yet to be invented.

While the effort involved in implementing this model might not seem entirely worthwhile just to see what metadata changed for some photos or videos that were uploaded to a DAM several years ago, being able to view the full history of a physical object that has been mapped to a digital asset using IoT technologies affords some very powerful visibility and analytical opportunities which have not been available hitherto.

It seems highly unlikely that these capabilities will not be in demand from users in the medium term (or sooner). Digital assets offer a design template to support their analysis and manipulation. Further, there is a body of widely held knowledge and expertise about how to manage them which can be re-used. DAM vendors who can see how these concepts fit together might be starting to have a better understanding of where they can earn a lot more revenue out of DAM than has been on offer in the past with the management of content-oriented digital assets alone.

Key Architectural Components
Having established the benefits of a digital asset transaction management sub-system, I will now consider what might be required to realise it. Here is a high-level list of the key components:

  • An asset/transaction audit trail
  • An API which all user operations are routed through (API-First)
  • A log of all API operations each transaction was composed of
  • A version control system for all asset data

An Asset/Transaction Audit Trail
As described earlier, an audit trail is an essential component for this model to be effective. For example, if an asset record is modified, this must be logged along with the date/time, details of the specific user responsible and a node containing data about what was done. Some DAM applications already offer an asset-centric audit of this kind (in addition to the user audit). It is required for the wider model to be feasible since every event which changes an asset needs to be linked to the affected asset(s). Each operation must be recorded with a unique identifier and include a reference to the event that preceded it in chronological order.

A transaction might get rolled back (e.g. reverted to a previous state) but the log of events is never truncated. Readers who have some prior knowledge of blockchains (or the accounting/bookkeeping theory that underpins them) should be familiar with the concepts described. In accounting systems (whether paper or electronic) ledger entries are never deleted or modified; new entries are only ever appended. This has the advantage of non-repudiation (i.e. the transaction sequence cannot be altered) and it is an essential attribute for this model to be effective.
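
A minimal sketch of such an append-only ledger follows, using a hash chain to make the sequence tamper-evident (the same basic device blockchains employ); this is purely illustrative rather than a prescription for how the log must be stored.

    import hashlib, json

    ledger = []

    def append_entry(payload):
        # Each entry references its predecessor's hash, so the sequence
        # cannot be silently re-ordered or altered (non-repudiation).
        prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
        body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
        entry = {"payload": payload, "prev": prev_hash,
                 "hash": hashlib.sha256(body.encode()).hexdigest()}
        ledger.append(entry)
        return entry

    append_entry({"op": "upload", "asset": "a1"})
    append_entry({"op": "edit_metadata", "asset": "a1"})
    append_entry({"op": "revert", "of": "tx-2"})  # a revert is itself appended; nothing is truncated

    def verify(ledger):
        """Recompute the chain to confirm no entry has been tampered with."""
        prev = "0" * 64
        for e in ledger:
            body = json.dumps({"payload": e["payload"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

    print(verify(ledger))  # True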

There is a potential argument for using a public blockchain for the transaction ledger, although this would incur costs (blockchain entries have to be created by buying fragments of coins which can be used to generate transactions) and performance could also become an issue. This is something that would need to be investigated as part of an implementation exercise. For a first iteration, an application-specific transaction log would be a reasonable starting point, with some consideration given to integrating it with public ledgers at a later date. The potential interoperability benefits of this model might be easier to realise using a shared ledger (and it also avoids arguments between vendors about whose protocol should take precedence). This is a point I plan to return to in other articles.

An API Which All User Operations Are Routed Through (API-First)
The majority of DAM solutions now also include an API, but in most cases this has been retro-fitted rather than being an integral component right from the start. The upshot is that the user interface was created first and then the API was provided afterwards, i.e. ‘API-Second’. As such, not all operations which human users are able to carry out are accessible to API clients (i.e. other applications). The best way to address this is for a system to use a method called ‘API-First’: everything, including the user interface, routes through the API. This forces the developers of the solution to create API operations for anything a user needs to do and better ensures the API is at least as flexible and comprehensive as the user interface. Using API calls rather than some internal-only method for recording user activity is important because it means users, third party applications and the system itself are all using the same protocol, so there are no issues with translating from one to the other or dealing with functionality that is missing from one method but present in another.
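
The sketch below illustrates the principle with hypothetical function names: the UI handler is just another client of the same API operation, so every user action is captured once, in one protocol, exactly as a third-party integration’s would be.

    transaction_log = []

    def api_update_metadata(asset_id, field, value, user):
        """The single public operation; every caller routes through here."""
        transaction_log.append({"call": "update_metadata",
                                "params": {"asset_id": asset_id, "field": field, "value": value},
                                "user": user})
        # ... persist the change to the asset here ...
        return {"status": "ok"}

    def ui_save_button_clicked(form, session_user):
        # The user interface is just another API client, not a privileged path.
        return api_update_metadata(form["asset_id"], form["field"], form["value"], session_user)

    ui_save_button_clicked({"asset_id": "a1", "field": "caption", "value": "New"}, "jsmith")
    print(transaction_log[0]["call"])  # update_metadata - logged via the same path as any client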

A Log Of All API Operations That Each Transaction Was Composed Of
Each transaction may use one or more API calls. These need to be stored in the sequence they were carried out and associated with the transaction audit (broadly comparable to the ‘stack’ data structures software developers work with). Each API operation should be a simple text representation which can be forensically analysed later, either by a human being or an automated process (or both). The parameters used to drive the API calls also need to be abstracted and stored independently. This means that one or more transactions can be re-run, but with different data, if required.
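
As a sketch, assuming a placeholder-style scheme for abstracting parameters (one possible approach among several), a stored transaction might be re-run with different data like this:

    import json

    # The calls are stored as plain, human-readable text in the order they ran;
    # the parameters are held separately as placeholders rather than literals.
    transaction = {
        "id": "tx-9",
        "calls": ["rename_asset", "move_to_category"],
        "params": [{"new_name": "{name}"}, {"category": "{cat}"}],
    }

    def rerun(transaction, bindings, api):
        for call, params in zip(transaction["calls"], transaction["params"]):
            concrete = {k: v.format(**bindings) for k, v in params.items()}
            api(call, concrete)

    # Re-run the same sequence with different values supplied at replay time.
    rerun(transaction, {"name": "hero-image-v2", "cat": "Campaigns"},
          api=lambda call, p: print(call, json.dumps(p)))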

A Version Control System For All Asset Data
Any changes to the digital asset’s data (whether metadata or the essence) should be versioned and each iteration needs to reference the asset transaction audit entry.
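
A brief illustrative sketch of that cross-reference (field names assumed): every stored version points back at the audit entry which produced it, so versions and transactions can always be reconciled.

    versions = [
        {"asset": "a1", "version": 1, "metadata": {"caption": "Draft"}, "audit_ref": "tx-1"},
        {"asset": "a1", "version": 2, "metadata": {"caption": "Final"}, "audit_ref": "tx-4"},
    ]

    def version_for_transaction(versions, tx_id):
        """Look up the asset data as it stood after a given transaction."""
        return next(v for v in versions if v["audit_ref"] == tx_id)

    print(version_for_transaction(versions, "tx-4")["metadata"])  # {'caption': 'Final'}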

To realise this model it is not sufficient to build each of these components in isolation; each element needs to be designed to interact with all of the others. This makes the task more demanding than it might first appear, a point I will address in the following section about risks.

Below is a very approximate diagram of how the components might fit together. It would clearly need further development work, however, it should illustrate the main ideas involved and the interaction between the entities.

[Diagram: the audit trail, API layer, API operation log and version control system interacting with each other around the digital assets]

Risk Analysis
This kind of feature will present risks, not only during implementation, but also on an ongoing basis. These need to be assessed and mitigated (where possible). The following are some of the more obvious ones; a more detailed analysis will be required prior to implementation as part of a project plan.

  • Transactions might not be granular enough
  • Unexpected consequences from altering digital asset history
  • De-stabilising platforms as a result of retro-fitting missing components
  • Migration-related data loss and/or partial migration risks

Note that these risks are technical in nature; the commercial risks are not insubstantial either, but those are outside the scope of this discussion.

Transactions Might Not Be Granular Enough
Each of the transactions carried out on digital assets has to be treated as a self-contained operation so it can be rolled back. These need to be at a higher level and correspond to user interface functions so users can relate to them. This could potentially create problems because many operations might need to be combined in a single transaction, and the ability to adjust just a single aspect might not be feasible because the key element that has to be changed is part of a wider group of modifications that need to be retained.

This risk could be mitigated by breaking down larger transactions into smaller groups, but users then need to be made aware that a transaction might be a composite. A further alternative would be to allow users to dissect transactions and partially roll them back (e.g. up to a given stage). Decisions about the extent to which this should be permitted might need to be made on a case-by-case basis. Application architects and product managers should anticipate being asked a lot of tricky questions by the colleagues responsible for coding this about where to draw the line in the sand for transactions (and what the effects might be as a result).
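
One hypothetical shape for such a dissectable composite transaction is sketched below: the sub-steps are retained individually, so a partial roll-back (or partial replay) up to a chosen stage becomes possible.

    composite = {
        "id": "tx-12",
        "steps": [
            {"n": 1, "call": "resize_image", "params": {"width": 1200}},
            {"n": 2, "call": "set_field", "params": {"field": "usage", "value": "web"}},
            {"n": 3, "call": "add_keyword", "params": {"keyword": "2017"}},
        ],
    }

    def partial_replay(composite, up_to, api):
        """Re-apply only the sub-steps up to (and including) a given stage."""
        for step in composite["steps"]:
            if step["n"] > up_to:
                break
            api(step["call"], step["params"])

    partial_replay(composite, up_to=2, api=lambda c, p: print(c, p))  # steps 1 and 2 only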

As with all risk management considerations, the mitigation method may also introduce risks of its own and these need to be thought through with great care prior to implementation. A certain amount of prototyping might be appropriate and a very detailed test plan needs to be drawn up at an early stage. While agile methods have their uses, the roadmap for building all this needs to be clearly defined from the get-go (along with a credible route for navigating it).

Unexpected Consequences From Altering Digital Asset History
This was described earlier in the benefits section. It is a classic issue with any kind of time-machine concept (and the plot device for numerous books and films where they feature). If you change some historical event (even a very minor one) this might generate unforeseen consequences which are revealed later. This issue is likely to be encountered most frequently with batch operations. For example, if the user reverts a batch operation that added a particular metadata attribute to many assets, those assets might not subsequently be found by a different batch operation, which may in turn mean they don’t get included in yet another one. The result might be significantly different to what the user thought would happen and this could impact other digital assets. I gather this effect is sometimes referred to as a ‘combinatorial explosion’ and care is required to ensure that interfering with the prior state of a collection of digital assets doesn’t accidentally result in blowing them all up.

To an extent, the fact that everything is tracked and can be reverted (even the reverts themselves) helps mitigate the risk described, but if the changes were quite subtle they might not be noticed for a period of time. This risk is virtually impossible to mitigate entirely and (in this case) cure might be simpler than prevention. The most practical method I can think of for dealing with it in advance would be to give the user some predictive information about what might occur if they did make the proposed amendment. If the consequences were undesirable, they could choose to use a more conventional method rather than rolling back an earlier adjustment and re-applying it (i.e. do whatever they do now). The DAM application would be required to help users assess the impact of their changes to the historical state of digital assets, and workflow rules, permissions etc. would all be essential to prevent accidental damage to digital assets.
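
One way the predictive information might be produced is to replay the log twice against throwaway state, with and without the proposed revert, and show the user the difference before anything is committed. A simplified sketch (with hypothetical structures) follows:

    def simulate(log, skip_tx=None):
        """Dry-run the transaction log without touching real asset data."""
        state = {}
        for tx in log:
            if tx["id"] != skip_tx:
                state.update(tx["changes"])
        return state

    log = [
        {"id": "tx-1", "changes": {"keyword": "campaign"}},
        {"id": "tx-2", "changes": {"region": "EMEA"}},  # the candidate for revert
        {"id": "tx-3", "changes": {"status": "published"}},
    ]

    before = simulate(log)
    after = simulate(log, skip_tx="tx-2")
    removed = {k: v for k, v in before.items() if k not in after}
    print("This revert would remove:", removed)  # shown to the user before committing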

De-Stabilising Platforms As A Result Of Retro-Fitting Missing Components
It would be fair to say that most DAM solutions were never originally built with this kind of sophistication in mind (especially as the original use-case was often a superficially simple need to manage a relatively small catalogue of photos or videos etc). In most cases, one or more of these components will need to be not only retro-fitted but also made to integrate with both the others and existing functionality. This is likely to be a substantial amount of work for DAM system developers (not to mention product managers and solution architects) and should not be underestimated. Any kind of fundamental update to the core of a complex application risks bugs being introduced and adds to the maintenance overhead for developers.

To mitigate this risk, one option is to change the architecture of the underlying system wholesale: this is even more work, but it reduces the potential for faults since all of the internals will be replaced and everything can be tested together. All this involves greater cost, however, and generates a corresponding risk which I will address next.

Migration-Related Data Loss And/Or Partial Migration Risks
All existing digital assets will need to be re-loaded so they have base records and attributes for any operations that might be carried out on them later. In addition, if the solution already uses an audit trail, all those events need to be retained as they have potential value from an asset usage analysis perspective. Some data loss is almost impossible to completely avoid with migrations and what most DAM developers and users settle for is ensuring that most of the currently useful metadata is retained. If the architecture of the underlying system changes significantly, even though the new edition can be made to look superficially more or less identical to the old one, moving from the pre-transactional model to the new transactional one will feel a lot like a data migration exercise for the personnel involved. Further, users might be surprised if data appears to have gone missing, and since the new edition may look and feel like the old one, they may simply regard this as the system becoming more bug-ridden than it used to be or question the necessity for the work.

The best mitigation for this risk is to plan for it right at the beginning and where possible to try to re-engineer the loading of existing assets as though they used the new transactional architecture, even though they might pre-date it by some years. This improves the compatibility between the old method and the new one by adding as much transactional metadata to each digital asset as possible.
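
A sketch of that backfilling idea, assuming a synthetic ‘import’ transaction is fabricated for each legacy asset at migration time so it gains a base record the new model can build on (all names are illustrative):

    from datetime import datetime, timezone

    legacy_assets = [{"id": "old-1", "metadata": {"title": "Archive photo"}}]

    def backfill(asset):
        # Wrap the legacy asset's existing metadata in a synthetic base
        # transaction, as though it had been loaded via the new architecture.
        return {
            "id": f"tx-import-{asset['id']}",
            "assets": [asset["id"]],
            "call": "import_legacy_asset",
            "changes": dict(asset["metadata"]),
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "note": "synthetic base transaction created during migration",
        }

    ledger = [backfill(a) for a in legacy_assets]
    print(ledger[0]["id"], ledger[0]["changes"])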

Conclusion
As discussed in the preceding section, the digital asset transaction management system described in this article is not without risks, and the amount of work required to implement it will be considerable. Furthermore, this article is not a ‘blueprint’ that provides all the details required to actually develop the model. The intention is only to show what might be possible and give some guidance as to the major elements and how they might interact. I fully expect there to be flaws in this design which anyone planning to utilise it in a real product will have to resolve (within the constraints of their chosen technology stack as well as any previous application design decisions they may have made).

With that said, I believe the by-products of implementing the model offer a huge amount of potential for developing innovative new features which DAM users will find of value. While the development work is far from trivial, the model is conceptually straightforward to understand and has clear benefits that can start to generate ROI as soon as implementation is completed and the modifications are deployed.

Comments

Tim Strehle March 9, 2017 at 10:23 pm

Hi Ralph,

that’s a pretty cool idea! I’m not sure I’d want to be the developer who has to implement it… But it would be a great feature for the user who finds out that something went wrong during a batch update two months ago.

A subset of your proposal would be to implement “Undo” first, which should be simpler. I think “Undo” (together with “Redo”) is an important feature, taken for granted in desktop applications but very rare in Web apps.

Related, and also neat: the “Memento framework” https://mementoweb.org/guide/rfc/ which would allow users to browse DAM data as it looked at some point in the past. (Via Martynas Jusevicius’ tweet: https://twitter.com/namedgraph/status/837251272063475712 )

Regards,
Tim

Ralph Windsor March 10, 2017 at 9:50 am

The implementation for this would be tricky, although probably not so bad if there was the opportunity to build it right from the start. This is like a lot of software development conundrums: the hard part isn’t so much the feature itself as integrating it with everything else (and realising that some prior design decisions were not the best ones on offer).

This is not an endorsement, but I believe Southpaw’s TACTIC has multiple levels of undo (not just one). I gather they are fairly production-oriented and you tend to find these apps have more sophisticated features (although they’re also not always the easiest to use and a good section of more conventional DAM users might not find a lot of the capabilities to be of much value).

As well as the undo aspect, I think there’s also the API operations stack and the ability to view a timeline for a digital asset (and the changes made to it). Very few DAM solutions offer scripting and macro support either. These are possibly features most users wouldn’t do a lot with, but then again, ‘most DAM users’ download a logo and a picture of the CEO once or twice, while a far smaller number spend all day plugged into the system (and ensure that the requisite logo and picture of the CEO is in the thing to start with and can be found). Both these groups need to be catered for and there’s a symbiosis between the two.

There’s an interoperability advantage with this also. The apps that use something like this would also be a lot easier to integrate because it becomes a case of converting one transaction into the schema required by the other. There is a ready-made ‘unit of interaction’ which is standardised so it’s a bit easier to work out the differences and resolve them (like comparing two packets of data). I think this is the main point with all this, you need some kind of conventions for describing every user interaction, even if it’s only present (to start with) in one application – that’s probably the underlying message of the whole piece.

The Memento framework looks interesting, I’ll have to check that out.
