
Digital Asset Management Value Chains – Metadata

by Ralph Windsor on January 18, 2013

This is one of several articles about The Digital Asset Management Value Chain and how it might shape the future of DAM.  The feature article introduces the concept and explains the background behind it.

Metadata is another area where the DAM value chain concept may offer opportunities to deal with the increasing sophistication of Digital Asset Management integration requirements and their impact on the metadata used to classify and isolate digital assets.

For corporate DAM users (especially in marketing departments), cataloguing is time-consuming and many would prefer to upload and catalogue in a single operation with minimal mental effort required.

By contrast, those associated with preservation, academic or cultural use cases are more likely to be accustomed to sophisticated metadata structures, where multiple intersecting repositories often need to be integrated in a non-trivial fashion. The intellectual challenge involved is often part of the overall research interest users take in the subject matter.  That doesn’t mean they always enjoy doing it, but the classification task is an essential element of what they do.

The bad news for corporate users is that once you start integrating numerous systems with DAM (CRM or Business Intelligence, for example), the volume and complexity of metadata mushroom and the task acquires many of the characteristics of a preservation solution – even though the subject matter is totally different.

Busy end users such as marketing managers won’t have time for this, but they will need cross-application integration so they can mine the data within those systems for competitive intelligence about the digital assets they are using and what works or does not.

We covered how concepts like Big Data might start to work with DAM last year, and as that trend accelerates, users will increasingly consider how they can use it with their DAM systems to develop more informed marketing strategies. For all this to be effective, DAM developers will need to come up with tools that allow end users to manage the complexity. That will require both automated features that suggest metadata associations and more efficient user interfaces that let end users decide whether to accept them or not.
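A workflow along those lines – automated suggestions that high-confidence logic applies automatically while the rest are queued for a human to accept or reject – might be sketched as follows. This is a minimal illustration; all class and function names are hypothetical, not any vendor's actual API.

```python
# Minimal sketch of a metadata suggestion/review loop (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    field_name: str
    value: str
    confidence: float  # 0.0-1.0, supplied by whatever automated source proposed it

@dataclass
class Asset:
    asset_id: str
    metadata: dict = field(default_factory=dict)

def review_suggestions(asset, suggestions, auto_accept_threshold=0.95):
    """Apply high-confidence suggestions automatically; queue the rest for a person."""
    pending = []
    for s in suggestions:
        if s.confidence >= auto_accept_threshold:
            asset.metadata[s.field_name] = s.value
        else:
            pending.append(s)  # surfaced in the UI for accept/reject
    return pending

asset = Asset("img-001")
pending = review_suggestions(asset, [
    Suggestion("subject", "product launch", 0.97),
    Suggestion("campaign", "spring-2013", 0.60),
])
print(asset.metadata)                    # {'subject': 'product launch'}
print([s.field_name for s in pending])   # ['campaign']
```

The design point is simply that automation and the review interface are two halves of the same feature: the threshold controls how much work reaches the busy end user.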

It will become increasingly difficult for vendors to offer all this in-house. The level of information science expertise required to get to grips with these problems will make it a specialist job.  Those who try to ignore it won’t be able to continue to compete because their products won’t be versatile enough. Vendors who can answer the technical complexity but only offer interfaces that are excessively fiddly will find their frustrated end users deserting them.

While the Semantic Web still hasn’t really delivered on its promise in terms of DAM cataloguing automation (in my view), the skills needed to understand it properly are transferable to complex DAM metadata problems of the type described, and I predict further cross-fertilisation from that sector as a result.  I find an increasing number of information scientists getting involved in Digital Asset Management, and while they can come up with some hair-raising ideas that give software developers brain-ache, you often need someone on the delivery team who can think conceptually and relate that back to real-world metadata problems in the way they usually can.

This isn’t to say that complex poly-hierarchical controlled vocabularies will become the order of the day for general day-to-day DAM system use; the skill will be in delivering users precisely the level of metadata detail they need, as well as giving them the tools to traverse that complexity curve at their own pace.

A DAM value chain might offer an opportunity to separate a digital file from its metadata and other associated asset data, so that the task of managing it could be more easily delegated. There is an argument that workflow is still metadata (which I accept), but for these purposes it might be better kept separate (but integrated) rather than enmeshed in the same system database.
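One way to picture that separation: the binary file, its descriptive metadata and its workflow state live in distinct stores that reference each other by asset ID rather than sharing one enmeshed database. The sketch below is illustrative only, with made-up names and locations, not any product's actual architecture.

```python
# Sketch: file, metadata and workflow as separate but linked stores (hypothetical design).
file_store = {}      # asset_id -> storage location of the binary
metadata_store = {}  # asset_id -> descriptive metadata (could be delegated to specialists)
workflow_store = {}  # asset_id -> workflow state, integrated but held separately

def register_asset(asset_id, location):
    """Record the file once; metadata and workflow start empty and can be filled later."""
    file_store[asset_id] = location
    metadata_store[asset_id] = {}
    workflow_store[asset_id] = "awaiting-cataloguing"

def catalogue(asset_id, fields):
    """A delegated cataloguer works on metadata without ever touching the file itself."""
    metadata_store[asset_id].update(fields)
    workflow_store[asset_id] = "catalogued"

register_asset("img-001", "s3://bucket/img-001.tif")
catalogue("img-001", {"title": "Press shot", "rights": "internal use only"})
print(workflow_store["img-001"])  # catalogued
```

Because the stores only share an ID, each link in the value chain (storage, cataloguing, workflow) could in principle be handled by a different party or system.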

As the volume of assets stored in DAM increases, metadata will become even more crucial than it is now. Further, simple keyword searches and folder structures copied from your old shared drive probably aren’t going to work very well for much longer.

When the first TV sets were invented, they were styled a lot like radios because that’s what people were familiar with. I predict the same trend will play out with DAM: users will want to replicate older metaphors because they mirror their existing expectations. Before long (especially as a newer generation of users with less experience of that period arrives), those metaphors will be dropped for something better suited to the task – more like what you would design from scratch without an existing frame of reference to remain within. In combination with the integration demands, we will see significant changes to metadata in terms of both cataloguing and searching, which, conveniently, we plan to cover next.

Comments

David Diamond January 18, 2013 at 5:47 pm

I wonder if our fear of these increasing complexities comes from the possibility that we’re going about this all wrong. For example, right now we assign metadata to files and call those “digital assets.” So when you think of the ever-increasing numbers of files, and the idea that each requires its own metadata lovin’, it’s no wonder the future seems daunting.

But what if the file is nothing more than a metadata attribute of the content? In other words, let’s say that your words above are the entity to which we assign metadata values. Not the HTML that sits beneath this page, and not the word-processing document you might have used for the original authoring.

If all of the metadata assignment were linked to the core content, then it could flow into the downstream uses of that content and, before you know it, you wouldn’t even consider the .doc vs the .html as metadata anchors anymore.
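The idea of the file as just one attribute of the content could be modelled roughly like this. It is an illustrative sketch only: the field names are made up, and the rendition paths are hypothetical.

```python
# Sketch of a content-centric model: files are renditions (attributes) of the content.
from dataclasses import dataclass, field

@dataclass
class Content:
    """The content itself is the entity that carries metadata."""
    content_id: str
    metadata: dict = field(default_factory=dict)
    renditions: dict = field(default_factory=dict)  # format -> file location

article = Content("value-chains-metadata",
                  metadata={"author": "Ralph Windsor", "topic": "DAM metadata"})
article.renditions[".html"] = "/pages/value-chains-metadata.html"
article.renditions[".doc"] = "/drafts/value-chains-metadata.doc"

# Metadata assigned once to the content flows to every downstream rendition;
# neither the .doc nor the .html acts as a metadata anchor in its own right.
for fmt, path in article.renditions.items():
    print(fmt, "->", article.metadata["topic"])
```

Adding or dropping a rendition never touches the metadata, which is the inversion David describes: the file becomes the wrapper, not the anchor.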

Even better, when indexing new content (notice I didn’t say “files”), you would communicate with the DAM in terms of the purpose of that content, not the type of file. DAMs today don’t know the difference between a meeting agenda and a literary masterpiece, if they both arrive via the .dotx express. Yet, shouldn’t the metadata added and tracked for both be unique to the purpose of that content?

Do you want to add a metadata field to your DAM for “Meeting Date” just because some of your assets might be materials used in a meeting? Most DAM managers would say no. But then how do you find all those assets associated with a given meeting? The fact is, you can’t unless there is some metadata that flags the content as being associated with that meeting. And without a dedicated “Meeting Date” field or category, you’re left hoping users will enter the data in some consistent way using a Notes field, or some other unstructured option.
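One way to sidestep the global-field problem is to attach a metadata schema per content purpose rather than adding fields to the whole DAM. The sketch below is a hedged illustration with invented schema names; no particular product is claimed to work this way.

```python
# Sketch: purpose-driven metadata schemas instead of one global field list (hypothetical).
from datetime import date

# Each content purpose declares its own structured fields.
schemas = {
    "meeting-materials": {"meeting_date": date, "attendees": list},
    "marketing-image": {"campaign": str, "usage_rights": str},
}

def validate(purpose, metadata):
    """Check that the metadata matches the schema for this content's purpose."""
    schema = schemas[purpose]
    for name, expected in schema.items():
        if name not in metadata or not isinstance(metadata[name], expected):
            raise ValueError(f"{name} missing or not a {expected.__name__}")
    return True

# Structured, so "find all assets for a given meeting" becomes a simple field query
# rather than a hopeful free-text search of a Notes field.
ok = validate("meeting-materials",
              {"meeting_date": date(2013, 1, 18), "attendees": ["D. Diamond"]})
print(ok)  # True
```

Only meeting materials carry a Meeting Date, so the field exists exactly where it is meaningful and nowhere else.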

Good luck with that.

Borrowing your example of the radio and TV, today’s DAMs are modeled on a file-centric universe that is nothing more than an evolution of the file system. But it’s time for DAMs to adapt. Our “master” is no longer the file system; it is the content. DAMs need to become content-focused, not file-focused.

And, yes, it’s no secret that the forthcoming version of Picturepark has been re-engineered to support a content-focused approach to DAM. But I don’t want this to seem like a commercial for that. I really do think the solution to our fear of the increasing complexity is to stop thinking in terms of files and start thinking in terms of content.

Instead of metadata being just “information about information,” let’s let it become “information within information,” and let’s let the file become “the wrapper around all the information.”

Tony Brooke January 27, 2013 at 4:30 pm

David,

Your comment is thought-provoking, and reminds me of the FRBR entity-relationship model, which allows us to further conceptualize “content” at multiple levels: work, expression, manifestation and item.

http://en.wikipedia.org/wiki/Functional_Requirements_for_Bibliographic_Records

While in that Group 1 example (Beethoven’s Ninth Symphony) the “item” refers to a physical item, the same approach could be applied to digital assets, as in your example of separating the purpose of the content from the type of file.
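FRBR’s four Group 1 levels could be mapped onto digital assets along these lines. This is a rough illustration only, not a full FRBR implementation, and the specific performance, encoding and file path are invented for the example.

```python
# Sketch of FRBR Group 1 entities applied to a digital asset (illustrative only).
from dataclasses import dataclass, field

@dataclass
class Work:            # the abstract creation, e.g. Beethoven's Ninth Symphony
    title: str
    expressions: list = field(default_factory=list)

@dataclass
class Expression:      # a realisation of the work, e.g. one particular performance
    description: str
    manifestations: list = field(default_factory=list)

@dataclass
class Manifestation:   # an embodiment of the expression, e.g. a specific encoding
    fmt: str
    items: list = field(default_factory=list)

@dataclass
class Item:            # a single exemplar; for digital assets, one stored file
    location: str

ninth = Work("Symphony No. 9")
perf = Expression("a recorded performance")          # hypothetical
flac = Manifestation("FLAC, 44.1 kHz")               # hypothetical
flac.items.append(Item("/archive/ninth-perf.flac"))  # hypothetical path
perf.manifestations.append(flac)
ninth.expressions.append(perf)
print(ninth.expressions[0].manifestations[0].items[0].location)
```

Metadata then attaches at the right level: composer to the Work, conductor to the Expression, codec to the Manifestation, storage location to the Item, echoing the content-vs-file separation David proposes.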

Margaret Warren June 10, 2013 at 12:07 am

Nice article. Tim Strehle mentioned this article in a post he wrote about my software, ImageSnippets, which is (right now) a high-level prototype for using linked data to create HTML files that link to an image. I like a lot of the points you make in your article. I am keen to discuss how the software changes the DAM process (as it can be done in conjunction with publishing), how the complexity curve relates to a new way of thinking about the descriptions, and also the flexibility of adding layers of data over time – so that things like ‘meeting date’ are not that hard (using a linked data technique) to add and even remove later.
