Understanding DAM System APIs: A Primer For Non-Technical DAM Users


This feature article has been written by Ralph Windsor, Editor of DAM News and Project Director for DAM Consultants, Daydream


The role of APIs in Digital Asset Management has been steadily increasing for some time now and nearly all DAM vendors have implemented them in their products.  A number of DAM users, however, are unfamiliar with what APIs are and why they are important.  This article aims to provide some rudimentary guidance so they can better understand the subject, its wider relevance to DAM initiatives and the context of APIs in digital asset supply chains.

What are APIs?

The letters stand for ‘Application Programming Interface’.  In simple terms, an API allows one computer program to issue instructions to control another as though a human being were using it.  As such, it provides one application with the means to leverage the functionality of another.
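As a concrete sketch, here is what one such programmatic ‘instruction’ might look like.  The base URL, endpoint and parameter names below are invented for illustration; every real DAM API defines its own.

```python
from urllib.parse import urlencode

# Hypothetical DAM API -- the URL and parameter names are illustrative only.
BASE_URL = "https://dam.example.com/api/v1"

def build_search_request(query, asset_type=None, page=1):
    """Build the URL a program would request to search the DAM,
    much as a human would type a query into the search box."""
    params = {"q": query, "page": page}
    if asset_type:
        params["type"] = asset_type
    return f"{BASE_URL}/assets/search?{urlencode(params)}"

print(build_search_request("autumn campaign", asset_type="image"))
```

An integration would send this URL over HTTP and receive structured data (typically JSON) in return, rather than a web page designed for human eyes.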

Why might we use one?

Integration and automation are the most common reasons to use an API.  Integration means that a third-party application such as a Web Content Management System (WCMS), Product Information Management (PIM), HR or CRM application can access assets and/or other data held in a DAM (and potentially send data to the DAM also).  Automation is the other typical use-case: an API can be used to mechanise repetitive tasks, provided they can be defined precisely.
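To make the automation case concrete, here is a minimal sketch of one repetitive task, applying the same metadata value to a batch of assets, expressed as the payloads a script might send to a hypothetical ‘update metadata’ endpoint.  The payload shape and asset IDs are assumptions for illustration.

```python
# Hypothetical sketch: batch-applying one metadata value to many assets.
# The payload shape and asset IDs are invented for illustration.
def build_metadata_updates(asset_ids, field, value):
    """One update payload per asset, ready to send in a loop --
    the kind of repetitive task an API makes practical to automate."""
    return [{"asset_id": aid, "metadata": {field: value}} for aid in asset_ids]

updates = build_metadata_updates(["a-101", "a-102", "a-103"],
                                 "usage_rights", "internal-only")
for payload in updates:
    print(payload)
```

Doing this by hand for three assets is tolerable; for thirty thousand it is not, which is precisely where an API earns its keep.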

What is the relationship between APIs and interoperability?

Interoperability means the ability of different systems to exchange information; to speak the same language, to use an analogy.  APIs and interoperability are related but not the same thing.  The API is the means by which the exchange of data occurs; interoperability is whether or not two systems can understand each other.  Interoperability does not just take place between DAM systems but can also involve ancillary tools (products which extend and enhance the capabilities of a DAM system) and the aforementioned related products like WCMSs, PIMs etc.

Is this only relevant for enterprise DAM users?

Not anymore.  At one time, DAMs would only need to integrate with other systems if they were part of an enterprise solution, but that is no longer the case and even departmental or studio DAM systems need to exchange data with a range of other technologies.  Even entry level DAM systems designed for quite small teams will now usually have API features to reflect this increased demand.

So if we have an API, we can integrate our DAM with anything?

Unfortunately it’s not that simple.  Firstly, not all APIs are built equally and some have more capabilities than others.  Secondly, there are no agreed API standards for DAM systems (see the interoperability point above).  While having no API at all means it will be very difficult (if not impossible) to do any kind of automation or integration, the mere fact that one exists is not sufficient either.  This is a very common misconception among DAM users.  A software application API is more sophisticated than a plug socket: the API has to include all the functionality you require (and supporting materials like documentation, so those with the required technical skills can connect to it).

What makes an API good or bad?

The functionality provided, documentation, scalability and security are some of the major determinants of an effective API.  Below, I will consider each of them.


Functionality

Ideally, everything you can do via the human interface to your DAM can also be replicated via the API (see the ‘API-First’ section later).  This means all the common activities: logging on/off, uploading, cataloguing, searching, managing collections, modifying metadata, permissions, workflows, downloading, reporting, auditing, administration etc.  Many DAM systems (especially older ones that have not been updated for a while) may have whole sections of functionality that cannot be accessed via the API.  This could present a problem if they are needed for your automation or integration task.


Documentation

Having a great DAM API is of limited use if no one can figure out how to use it.  As such, the counterparty (whatever needs to exchange data with your DAM) will require up-to-date and comprehensive documentation about how to use the API with examples and copious relevant background technical information.

A few vendors have submitted their API documentation to our DAM Standards repository, others link to it on their websites, but some are still reluctant to provide public access to their documentation.  The two arguments frequently advanced against doing this are that it would pose a security risk or that the information is ‘commercially sensitive’.  Unless the API contains security flaws, the former is a bogus point (in the same way that access to the manual for a safe does not typically provide a thief with much useful information about how to break into it).  The commercially sensitive argument is highly questionable these days and I usually take unwillingness to provide API documentation as likely evidence that either the application doesn’t have an API, or that it is so limited the vendor is embarrassed to show it.  Simply asking to see API documentation can reveal a lot of useful information about vendor capabilities (or the lack of them).


Scalability

Scalability refers to how well a DAM API can handle large numbers of users all accessing it simultaneously over an extended period of time.  DAM software that does not scale can often survive a sudden spike in the number of human beings using it because there are often pauses between their interactions or the level of usage will drop off outside periods of peak demand.  In the case of the API, however, the ‘user’ is another system that can potentially issue hundreds of thousands of API requests with minimal intervals between them.  For automation tasks, these may occur over an extended period of time lasting hours or even days.

Some APIs have lots of functionality and are well documented, but become unresponsive under pressure.  This can have serious implications for the stability of the rest of the DAM and for its human users.  Different products deal with this in a variety of ways, from doing nothing at all (i.e. they just crash), through throttling requests to prevent overloading, to (the best method) true scalability, where capacity is seamlessly increased to handle the load and then decreased when no longer needed.  Preferably this is all handled automatically by the software itself, without the need to provide advance warning to helpdesks etc.

In some cases, a combination of throttling and dynamic scaling might be appropriate (especially if running at full-bore for a long period of time will be expensive).  Irrespective of how it is managed, however, those sorts of choices should be conscious ones you make yourself, not arbitrary limits imposed upon you by a vendor because their API hasn’t been designed with scalability in mind.
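On the client side, a well-behaved integration cooperates with throttling rather than fighting it.  A common pattern, shown here as a hedged sketch rather than any vendor's prescribed approach, is exponential backoff with jitter: wait longer after each rejected request, up to a cap, with some randomness so that many clients do not all retry at the same instant.

```python
import random

def backoff_delay(attempt, base=1.0, cap=60.0, rng=random.random):
    """Seconds to wait before retry number `attempt` (0-based):
    exponential growth, capped, multiplied by random jitter to
    avoid synchronised retry stampedes."""
    return min(cap, base * (2 ** attempt)) * rng()

# With jitter disabled (rng always returns 1.0) the schedule is 1, 2, 4, 8...
for attempt in range(4):
    print(backoff_delay(attempt, rng=lambda: 1.0))
```

A script using this would sleep for `backoff_delay(attempt)` seconds whenever the DAM signalled it was overloaded (e.g. with an HTTP 429 response), then try again.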

From my experience of reviewing DAM systems, the price of a given product is no judge of whether its API is genuinely scalable.  I have seen DAMs with a suggested seven figure implementation budget fail spectacularly while those with a low six figure one remain fully operational and stable under sustained pressure (the same applies in the five figure league too).  In respect of scalability, don’t believe the marketing, ‘enterprise’ branding nor fine words from the sales personnel when it comes to this point.  All claims need to be carefully tested and nothing should ever be taken at face value.


Security

The process of logging on to a DAM system, whether carried out by a human being or by another system, is called ‘authentication’: it means identifying a single user so that permissions and privileges can be assigned to them.

There are a number of different ways APIs handle the authentication problem and they cover the full range in terms of security.  The most basic method is to send a username and password (as happens with a regular human user).  This is conceptually simple to understand, but if the instructions are visible (or ‘exposed’) to anyone else, they can simply copy the details and use them to gain unauthorised access.  This poses some substantial security risks, sometimes referred to as replay attacks: whoever wants to break into your DAM can just copy and paste the commands used to log in and gain access.
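The weakness is easy to demonstrate.  HTTP Basic authentication, for example, merely base64-encodes the credentials, an encoding anyone can reverse, so whoever captures the request header can recover the password and replay the call.  The username and password below are obviously made up.

```python
import base64

def basic_auth_header(username, password):
    """Build an HTTP Basic 'Authorization' header.
    Note: base64 is an encoding, not encryption."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

header = basic_auth_header("dam_user", "s3cret")
print(header["Authorization"])

# An eavesdropper who captures the header trivially recovers the password:
captured = header["Authorization"].split(" ", 1)[1]
print(base64.b64decode(captured).decode())  # dam_user:s3cret
```

Sending this over an unencrypted connection, or logging it anywhere visible, hands the keys to anyone watching.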

Unfortunately, just under half of the DAM systems I have seen in the last three or four years still use this technique.  If you review the API documentation for your DAM and see that the vendor still takes this approach, insist that they change it and propose a risk mitigation strategy if you have a critical deadline that prevents the modification being carried out right away.

One step up from using the raw username and password in the API call is to use a key (a string of numbers and letters) rather than an actual password.  This closes off the risk of someone using API credentials to log in to the DAM user interface, but the calls can still be copied and re-run.  There are further variations, such as using one key to log in which returns a token to be used for subsequent operations (and which expires after a given amount of time).  These do reduce the risk, but they are still flawed, especially if the main login key is exposed.
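That key-then-token pattern can be sketched in a few lines, with the server side faked as an in-memory dictionary.  All the names and the one-hour lifetime are assumptions for illustration, not any particular product's behaviour.

```python
import secrets
import time

TOKEN_TTL = 3600  # assumed token lifetime: one hour
_sessions = {}    # token -> expiry timestamp (stand-in for the server's store)

def exchange_key_for_token(api_key, now=None):
    """Accept a long-lived API key once, issue a short-lived token."""
    now = time.time() if now is None else now
    token = secrets.token_hex(16)
    _sessions[token] = now + TOKEN_TTL
    return token

def token_is_valid(token, now=None):
    """Tokens expire, limiting the damage if one is copied."""
    now = time.time() if now is None else now
    return _sessions.get(token, 0) > now

tok = exchange_key_for_token("my-api-key", now=0)
print(token_is_valid(tok, now=1800))  # within the hour: True
print(token_is_valid(tok, now=7200))  # two hours later: False
```

The improvement is that a stolen token is only useful briefly; the residual flaw is that the long-lived key it was exchanged for is still a single point of failure.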

A better approach to API security is to use a framework like OAuth2, which gives finer-grained control over which third parties have access to your API, so that even if keys are copied, access attempts will fail.
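As a rough sketch, the widely used OAuth2 ‘client credentials’ flow (RFC 6749) exchanges a registered client ID and secret for a short-lived access token, and only that token travels with each subsequent API call.  The token endpoint URL below is hypothetical; the parameter names are the standard ones.

```python
from urllib.parse import urlencode

def build_token_request(client_id, client_secret):
    """Form-encoded body for an OAuth2 client-credentials token request.
    The endpoint URL is hypothetical; the field names are standard."""
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return "https://dam.example.com/oauth/token", body

def bearer_header(access_token):
    """The short-lived token, not the secret, accompanies each API call."""
    return {"Authorization": f"Bearer {access_token}"}

url, body = build_token_request("my-client-id", "my-client-secret")
print(url)
print(body)
```

Because the server issued the token to a known, registered client, it can also revoke that client's access centrally, something a shared static key makes much harder.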

There is a range of other API security considerations (e.g. encrypting data while it is being transmitted).  As alluded to earlier, this topic is multi-faceted and complicated.  If you lack the technical expertise to assess API security, one technique is to ask for an API security policy and see if it covers some of the subjects discussed here.  Ideally, the security documentation should be a separate document (or at least have its own dedicated section).  If the vendor is at least aware of these kinds of issues, that points toward them being more likely to address any shortcomings.

Doesn’t deciding between good and bad APIs sound complicated?

It is, and it is also a subject that cannot be ignored.  Furthermore, not all of the points discussed in the preceding section reduce to ‘good’ or ‘bad’; some are qualitative or design-oriented decisions that are good in one context but less so in another.

Preferably, you will be able to call on some advice, whether from an in-house colleague or a consultant.  If neither of those options is available, then the next best approach is to focus on the vendor’s API documentation.  Typically, those whose APIs lack credibility or do not adequately address the points described previously will have quite skimpy documentation, because they are unlikely to have been asked about their APIs very often by other customers.

What is “API-first”?

API-First is becoming more prevalent across lots of applications, including Digital Asset Management systems.  In simple terms, it means the user interface is implemented entirely using the vendor’s own API.  By requiring themselves to use the API for everything in the user interface, vendors can effectively guarantee that anything a human being can do, a machine (using the API) can do too.  As well as expanding the range of integration and automation tasks an API can be applied to, it also makes it more straightforward to introduce alternative interfaces tailored to different users, special extensions etc., without having to create entirely custom solutions for a single user.

If a DAM system was developed more than about five years ago then the vendor will almost certainly need to re-develop their entire user interface to make it API-First.  As many are now discovering, this is quite a time-consuming and technically demanding exercise.  Of the vendors that have pursued this objective, many might be more accurately described as ‘API Nearly First’, in the sense that they do not use the API for absolutely everything: some elements of their front-end user interface still bypass it.

Unless care is taken, more honest vendors will get penalised for not having truly API-First interfaces while more disingenuous ones will claim they have one and hope they do not get called to account over it (at least until after a contract is awarded).  If a DAM is genuinely API-First, it should theoretically be possible to create a log of all the API commands generated when a user logs in to carry out a series of tasks.  You can ask to see this log and (with a few modifications for authentication) it should be possible to re-run it and get an identical result.  The details of the implementation might make that easier or harder depending on the system in question, but simply asking to see it and observing the vendors’ reaction may give you a few clues.  For those with a more technical background, I wrote an article about the use of API logs and transaction logs last year: Building Time Machines For Digital Assets.
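The record-and-replay idea can be sketched in a few lines.  This is a hypothetical client, with invented class and method names, not any vendor's actual logging facility.

```python
# Hypothetical sketch of recording and replaying API calls to test an
# 'API-First' claim -- class and method names are invented for illustration.
class RecordingClient:
    def __init__(self):
        self.log = []

    def call(self, method, endpoint, **params):
        """Record each call; a real client would also perform the HTTP request."""
        self.log.append((method, endpoint, params))
        return {"ok": True}

def replay(client, log):
    """Re-issue a previously recorded sequence of calls against a client."""
    return [client.call(method, endpoint, **params)
            for method, endpoint, params in log]

# A UI session drives the API...
ui_session = RecordingClient()
ui_session.call("POST", "/login", user="alice")
ui_session.call("GET", "/assets/search", q="logo")

# ...and the captured log can be re-run independently of the UI.
fresh = RecordingClient()
replay(fresh, ui_session.log)
print(fresh.log == ui_session.log)  # the replayed sequence matches: True
```

If the system is genuinely API-First, the replayed sequence should reproduce what the user did in the interface; gaps in the log are where the UI is bypassing the API.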


Conclusions

This article has hopefully given you some points to consider when evaluating the role of APIs and the suitability (or otherwise) of different DAM systems to base your Digital Asset Management initiative around.

Unlike a number of other DAM-related topics, this is one where having access to some technical expertise is far more important.  Integration projects where APIs will be prominent are demanding and involve negotiating complex political and systems-related challenges, so ensure you have someone on board with sufficient experience to help you achieve your objectives.
