The Inhumanity Of DRM
We DAM professionals are increasingly fascinated by digital rights management (DRM). We want to protect our content, and we’re constantly on the hunt for the best way of doing so. We want DRM to be transparent and bulletproof, and we don’t want DRM to inconvenience legitimate consumers of our content. (Admittedly, we’re willing to inconvenience the many if we can protect ourselves from the few.)
With that in mind, it didn’t surprise me when, during a recent dinner with the CEO and chairman of my company, we got to talking about the relationship between DRM and digital asset management (DAM). What did surprise me, however, was the turn that conversation took toward the end.
Our conversation started with the three of us agreeing that the concept of DAM-based digital rights management often lulls organizations into a false sense of security about the safety of their content. When people hear the term “digital rights management,” they expect a technological guardian angel that will follow and watch over their content forever.
But this isn’t the case.
Content inside a DAM can certainly be “digital rights managed,” within reason. Permissions can be configured, and watermarks can be applied. Sensitive metadata can be hidden from certain users, and who downloaded what can be tracked.
But then what? How can a DAM protect content that has left the DAM?
We spend millions researching and developing technologies and procedures that promise to offer post-DAM content protection, but how realistic is this? Should we train search engines to crawl the Internet looking for violations? Are you willing to embed virus-like code into your content that “phones home” to let you know where it is?
How much would you be willing to pay for such technologies? How much would you trust them?
More important is whether solutions like these would even be practical for you. Let’s look at how such a solution might work.
You sell content to User A. At some point, User A inadvertently or otherwise makes that content available to User B. Your super smart Web crawler service discovers the content on User B’s website and reports back to you. You visit User B’s website and decide the evidence of a copyright violation is overwhelming and it’s clearly time to act.
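At minimum, a service like the hypothetical SpiderCop would need a way to match crawled files against your catalog. Here is a minimal sketch in Python, assuming the simplest possible approach—exact byte-for-byte matching with a cryptographic hash. (The registry and asset IDs are invented for illustration; a real service would need perceptual matching, since any resize or re-encode defeats a plain hash—which says something about how fragile this whole premise is.)

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an asset's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical registry of assets you licensed to User A,
# keyed by fingerprint so crawled files can be looked up quickly.
registry = {
    fingerprint(b"original asset bytes"): "asset-001",
}

def check_crawled_file(data: bytes):
    """Return the matching asset ID if the crawled bytes are an
    exact copy of a registered asset, or None otherwise."""
    return registry.get(fingerprint(data))
```

Note the limitation baked into this sketch: only a bit-identical copy is flagged. The moment User B crops the image or saves it at a different quality setting, the fingerprint changes and SpiderCop sees nothing.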
Based on the data your trusty “SpiderCop” Web crawler has provided, you figure you’re one slam-dunk lawsuit away from collecting damages. This really is the way the legal system should work, you reckon: Pay for a Web policing service, then sit back and watch the rewards come gushing in. Thank goodness for SpiderCop, the premier player in a new genre of software known as Legal Infraction Exposure Software (LIES).
It’s a win for everyone, you figure, except, of course, for the criminal—a.k.a. User B.
You contact your attorney and explain the situation. She drafts and sends to User B, via special delivery, a stern take-down notice. Too bad your attorney isn’t as affordable as SpiderCop. Also too bad that you didn’t know before you called her that User B still lives in his mother’s basement and is currently grounded for playing too much Wii.
User B is in clear violation of your copyright, but you know what they say about blood and turnips…and attorneys. Some battles just aren’t worth fighting.
And then there are all those content curators on the Internet. What should we do about them? As the owner of the content, shouldn’t you be the one to decide where that content is discussed, re-tweeted, liked, +1’d or otherwise promoted? Perhaps content curating can be considered some sort of copyright violation too. After all, aren’t these people profiting from your content without your permission?
Filing a massive lawsuit against all of the Internet’s content curators could be exactly what you need to make your SpiderCop ROI work. In fact, if the marketing people of the LIES industry are on top of their game, you’ll easily find white papers explaining exactly how this works, in five steps or fewer.
But let’s return to sanity for a moment: The fact is, unless you’re willing to invest potentially tens of thousands of dollars into your legal “recovery” efforts, the best that DRM is going to offer you is the privilege of sending toothless take-down notices. Will this satisfy you? Will digital wrongdoers even read your notices? After all, they might have a lot of homework to do.
Finding ourselves at an impasse with regard to knowing how to make a punitive approach to content protection work, my dinner mates and I started to look at this problem from another angle. We questioned whether, as an industry, DAM had lost its way. Had we become so focused on protecting content that we’d forgotten the primary purpose of content?
If a DRM-protected image falls in a forest and there’s no one around to see it, can it change the world?
Many DAM vendors have done terrific jobs of enabling users to control and limit the distribution of their files. But shouldn’t we be doing more by way of helping DAM users get content into the hands of consumers? It should be easier to get content out of a DAM, not harder.
For example, why is social media still an afterthought for so many DAMs? Social media is the new printed letter, flier, circular and bulletin board, all rolled into one. Social publishing should be as standard an output feature for DAM as printing is for word processors.
And here’s another concept to consider: the “Curated DAM.” I can’t search my Picturepark DAM today and find digital assets that are stored in someone else’s MediaBeacon system. Why not?
If I’m from a medical institution that wants to provide physicians with useful images that can help them identify and diagnose illness, I would be crazy to limit what I offer to only my own collections. If I know another medical institution across the planet has a collection that would enhance my own, I might want to virtually merge those collections for the purpose of offering a more useful and valuable resource to medical professionals. Medical professionals shouldn’t have to look in every nook and cranny of the Internet to find what they need, and neither should anyone else.
But where’s the DAM industry standard that makes this possible?
If you don’t readily see the benefit in this, imagine having hundreds of different Internets: Before you could find what you need, you’d have to know which Internet included that information, and you’d need to know how to access that Internet. In fact, before we had global search engines, this was exactly the way the Internet was.
And this is exactly where we are with DAM today. Google gets it; DAM vendors do not.
What do you say, MediaBeacon? Widen? You with me, Extensis? How about you, ADAM?
Until the day we stop focusing on “owning” and “protecting” and start focusing on “expressing” and “educating”—the primary purpose of content—we will be unable to start this or any other meaningful discussion that measures the benefits of protection vs. proliferation.
But the responsibility for a shift in the way we value content isn’t a burden for DAM vendors only—this is a discussion for the entire DAM community.
If you one day discover a cure for cancer, will it be more important to you to figure out the best way to protect and monetize your discovery, or will your focus be on figuring out how to get that cure into the hands of those who need it?