External DAM Hosting – The Questions You Should Be Asking

The following article was contributed by Dan Huby of Montala, who is one of the original developers of the ResourceSpace open source DAM system.

Traditionally, Digital Asset Management systems have been locally installed applications that are accessed and managed by a small internal team, typically the marketing and/or design teams.

Over the past few years a trend has emerged towards web-based and more user-friendly DAM systems that enable wider access – across the entire organisation and perhaps beyond, sharing assets directly with external partners, customers and suppliers around the world.

A step further in this direction – and something that is increasingly being offered – is hosting your DAM system entirely externally, or “in the cloud”. The typical model here is known as Software as a Service (SaaS): you pay a fee – typically annually – which encompasses software licensing (if applicable), support and hosting.

This can be very attractive, as all the technical complexity is handled entirely by a third party. You simply need to log in and use the system. There’s usually little need to involve your (typically over-stretched!) internal IT team at all.

However, handing your business critical assets over to a third party is not without risks, and those risks may not become apparent until the worst happens and your provider sustains a serious outage.

So, what are the risks involved with externally hosting your DAM system?

How can they be mitigated?

The answer? By asking the right questions and carefully vetting potential DAM SaaS providers.

The risks

From an operational perspective the major risk is loss of access to your assets, either indefinitely due to catastrophic data loss, or temporarily due to a system outage. A well designed backup process, ensuring multiple copies of your data over geographically disparate locations, and both online and offline, will hugely reduce the risk of data loss. A well designed hosting infrastructure with plenty of redundancy and a solid recovery plan will mitigate the disruption caused by a system outage.

Questions you should be asking

As DAM SaaS providers ourselves we are often surprised at how few questions are asked by potential customers, given the rather monumental responsibility they are about to place in our hands. There are a few exceptions of course, usually government and military, who have the necessary processes in place and therefore ask all the right questions. The vast majority don’t have such processes in place, and I hope this article will be of use to organisations looking at hosting their DAM solutions externally.

“Who has physical access to my servers?”

There is often a great deal of consideration regarding the security of your system in terms of electronic access (e.g. passwords, firewalls, and so on) but something that is often overlooked is physical security. There is little use in all of those digital safeguards if it’s possible for someone to walk into the data centre and carry away all your data.

Where is the server stored? Is it in a purpose-built data centre, or is it hosted by the provider themselves on-site? A purpose-built facility will typically offer much more in the way of physical security – alarms, CCTV, barriers, biometrics and so on.

“How well connected is my system?”

This is an area where SaaS should really shine. A DAM hosted in a purpose-built data centre should offer vastly improved worldwide connectivity compared to an internally hosted solution. But don’t make that assumption – ask the questions. What bandwidth will be available to my system? Is there a monthly transfer limit, and if so, what will I be charged if I exceed that limit?
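To put some rough numbers on this before talking to a provider, a simple back-of-the-envelope calculation is often enough. The figures in the sketch below are purely illustrative assumptions – substitute your own user counts and asset sizes before comparing the result against any monthly transfer limit.

```python
# A rough, illustrative estimate of monthly transfer for a hosted DAM.
# All figures below are assumptions - replace them with your own.

active_users_per_day = 50
downloads_per_user_per_day = 10
average_asset_size_mb = 25          # e.g. high-resolution images
working_days_per_month = 22

monthly_transfer_gb = (
    active_users_per_day
    * downloads_per_user_per_day
    * average_asset_size_mb
    * working_days_per_month
) / 1024

print(f"Estimated monthly transfer: {monthly_transfer_gb:.0f} GB")
# 50 users x 10 downloads x 25 MB x 22 days = 275,000 MB, roughly 269 GB per month
```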

“How much redundancy is built into the system?”

It’s crucial that hardware failures are planned for. The most common type of hardware failure by some margin is a disk failure, and most servers are designed to handle this without fuss using a technology such as RAID (Redundant Array of Independent Disks). This essentially means having more disks in place than you need and storing your data several times, so that one or more disks can fail without data loss. Additionally, failed disks can usually be ‘hot swapped’ so that the system doesn’t need to be taken offline.

RAID is nowadays seen as absolutely essential, so you must ensure your provider uses it. It gets a little more complicated than that, however, as there are different RAID levels which involve different numbers of disks and degrees of redundancy.

RAID is quite complicated and a separate topic in itself, so briefly: the best providers will use RAID 1+0 (commonly called RAID 10). This means there are two complete copies of your data stored on the live server, and several drives can fail without data loss provided no mirrored pair loses both of its disks. To be avoided are RAID 5 and, to a lesser extent, RAID 6, which only allow for one and two drives to fail respectively.
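As a simple illustration of the trade-off involved, the sketch below compares usable capacity and fault tolerance for the RAID levels mentioned above, assuming a set of identical disks. The disk count and size are illustrative only.

```python
# A minimal sketch comparing usable capacity and fault tolerance for
# RAID 10, RAID 5 and RAID 6, assuming identical disks.

def raid_summary(disks: int, disk_tb: float) -> dict:
    """Return usable capacity (TB) and failure tolerance for each RAID level."""
    return {
        # RAID 10: data is mirrored, so only half the raw capacity is usable,
        # but it survives one failure per mirrored pair.
        "RAID 10": {"usable_tb": disks / 2 * disk_tb, "tolerates": "1 disk per mirror pair"},
        # RAID 5: one disk's worth of parity; survives exactly one failure.
        "RAID 5": {"usable_tb": (disks - 1) * disk_tb, "tolerates": "1 disk"},
        # RAID 6: two disks' worth of parity; survives any two failures.
        "RAID 6": {"usable_tb": (disks - 2) * disk_tb, "tolerates": "2 disks"},
    }

if __name__ == "__main__":
    for level, info in raid_summary(disks=8, disk_tb=4.0).items():
        print(f"{level}: {info['usable_tb']:.0f} TB usable, tolerates {info['tolerates']}")
```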

“What happens if the server fails completely, or there’s an incident at the data centre?”

Even with multiple disk redundancy on your server there is still the possibility of a complete system failure resulting in the loss of all disks. It’s therefore essential to have another copy of your data – and this should be at a second geographically separate location, ideally many miles away from your primary server. This second copy of the data must be stored in such a way that it can be easily restored, and the typical solution is to have a second server available at the backup location ready to go. In the event that the primary data centre is completely unavailable for a long period, it’s essential to have the capability to operate out of the second location entirely.
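To make this concrete, here is a minimal sketch of the kind of off-site replication job a provider might schedule. The paths, hostname and the use of rsync over SSH are assumptions for illustration, not a description of any particular provider’s setup.

```python
# A minimal sketch of an off-site replication job, assuming a second server is
# reachable over SSH and rsync is installed. The paths and hostname below are
# hypothetical placeholders.

import subprocess
import sys

SOURCE_DIR = "/var/dam/filestore/"                                 # hypothetical local asset store
REMOTE_TARGET = "backup@dr-site.example.com:/var/dam/filestore/"   # second, geographically separate site

def replicate() -> int:
    """Mirror the asset store to the remote location; return rsync's exit code."""
    result = subprocess.run(
        ["rsync", "-az", "--delete", SOURCE_DIR, REMOTE_TARGET],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        # In a real deployment this would raise an alert rather than just print.
        print(f"Replication failed: {result.stderr}", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(replicate())
```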

“What automated monitoring processes are in place?”

In the event of a major outage – your system is completely offline – you as the customer shouldn’t be the first one to notice and report this. The provider should have suitable active monitoring in place that will detect any issue and notify them first.

If you need your system to be available outside of usual office hours it’s important to make sure your provider can not only monitor 24 x 7, but also has recovery staff on standby 24 x 7 to respond to the alert and recover your system.
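As an illustration of what “active monitoring” can mean at its simplest, the sketch below checks whether a DAM’s public URL responds. The URL is a hypothetical placeholder, and a real provider would run checks like this from several locations and page on-call staff rather than simply print a message.

```python
# A minimal availability check, assuming the DAM has a public URL that
# responds successfully when healthy. The URL is a hypothetical placeholder.

import urllib.error
import urllib.request

DAM_URL = "https://dam.example.com/"   # hypothetical system URL

def is_up(url: str, timeout: int = 10) -> bool:
    """Return True if the URL responds with a non-error status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except (urllib.error.URLError, TimeoutError):
        return False

if __name__ == "__main__":
    if not is_up(DAM_URL):
        print(f"ALERT: {DAM_URL} is unreachable - notify on-call staff")
```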

With the best will in the world, the automated backup processes that are put in place will from time to time fail. It’s important that there are additional checks in place to ensure that backups have been completed successfully and that the copies of data stored are sufficiently recent. The danger is that you only realise your backup process hasn’t been running when the worst has happened and you need to recover from backups.
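A simple example of such a check is verifying that the most recent backup is sufficiently fresh. The sketch below assumes backups are written as dated files into a single directory – the path and the 24-hour threshold are illustrative assumptions.

```python
# A minimal backup "freshness" check, assuming backups land as files in one
# directory. The path and threshold below are illustrative assumptions.

import time
from pathlib import Path

BACKUP_DIR = Path("/var/backups/dam")   # hypothetical backup location
MAX_AGE_HOURS = 24

def latest_backup_age_hours(backup_dir: Path) -> float | None:
    """Return the age in hours of the newest file in backup_dir, or None if empty."""
    files = [p for p in backup_dir.iterdir() if p.is_file()]
    if not files:
        return None
    newest = max(files, key=lambda p: p.stat().st_mtime)
    return (time.time() - newest.stat().st_mtime) / 3600

if __name__ == "__main__":
    age = latest_backup_age_hours(BACKUP_DIR)
    if age is None or age > MAX_AGE_HOURS:
        print("ALERT: no sufficiently recent backup found - investigate the backup job")
    else:
        print(f"Most recent backup is {age:.1f} hours old")
```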

“How secure is my server? What access limits are in place?”

If the DAM system you’ve selected is web-based, which is increasingly common, then – unless your asset data is unusually confidential – you will probably need that web access to be unrestricted so you can share assets with your suppliers and customers across the world. However, administrative access should be restricted as much as possible. Secure shell or remote desktop access should only be available to those who administer the system – typically the provider themselves. Technologies commonly used here are IP address restriction, VPNs and the use of secure keys rather than passwords.

You should consider implementing HTTPS for the web access – this ensures that all data sent to and from your server – including passwords – is sent securely. This requires an SSL certificate, and your provider should be able to assist with this.
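Once HTTPS is in place it’s also worth keeping an eye on certificate expiry, since an expired certificate will lock users out almost as effectively as an outage. The sketch below checks how many days remain on a certificate – the hostname is a hypothetical placeholder.

```python
# A minimal sketch that checks how long an HTTPS certificate remains valid,
# assuming the DAM is reachable on port 443. The hostname is hypothetical.

import socket
import ssl
import time

HOSTNAME = "dam.example.com"   # hypothetical system hostname

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Return the number of days until the server's HTTPS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2026 GMT'; convert to epoch seconds.
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires - time.time()) / 86400

if __name__ == "__main__":
    remaining = days_until_expiry(HOSTNAME)
    if remaining < 30:
        print(f"ALERT: certificate for {HOSTNAME} expires in {remaining:.0f} days")
    else:
        print(f"Certificate for {HOSTNAME} is valid for another {remaining:.0f} days")
```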

“Can I be sent a copy of my data?”

If you ever wish to change providers – or simply want a periodic additional backup for peace of mind – you may wish to be sent a copy of your data. It’s worth checking up front whether there’s a fee for this service beyond the cost of the media (typically a low-cost external USB drive) and the postage.

“How does it compare to hosting internally?”

I’ve seen many situations where a customer has been (rightly) cautious about the prospect of hosting their data externally and would therefore prefer the data to be hosted on-site at their organisation’s premises, instinctively feeling that their data is safer right there where they can see it. Yet after further discussion I’ve often found that their internal hosting falls woefully short of the requirements they were intending to place on external providers. If hosting the solution yourself is an option, don’t forget to subject your internal IT infrastructure to the same rigorous questioning as you’d give to external providers. The chances are, you’ll be surprised.

“Can’t I just buy a cheap hosting package and host the DAM externally myself?”

You might ask why you need to go with a DAM SaaS provider for the hosting of the solution, particularly if you’ve selected an open source solution and therefore you’re free to host in any location you see fit. I would offer a word of caution here: there’s a lot more to successfully hosting a DAM than simply signing up with a hosting provider and installing the software. All of the above factors have to be taken into consideration to ensure your assets are stored properly.

Your digital assets are critical to your organisation. It’s crucial they are taken care of.

 

More Information

Author: Dan Huby
Company: Montala / ResourceSpace
Website: http://www.resourcespace.org
LinkedIn: http://uk.linkedin.com/in/danhuby/
Montala LinkedIn: https://www.linkedin.com/company/montala-limited
