Feature Article: Examining Professional Ethics in the Age of AI-driven Design
Paul Melcher, visual tech expert and founder of online magazine Kaptur, has kindly contributed an article exploring bias, plagiarism and ownership within AI-assisted creativity such as graphic design, illustration and photography. This detailed and thoughtful piece examines a broad range of legal and ethical issues, and explains how AI training data, methods and models, which are typically Western-centric, can unintentionally introduce and amplify culturally insensitive biases, even down to the significance and symbolism of specific colours.
“The potential for AI to produce culturally insensitive content extends beyond text to graphic representations and object symbolism. Color symbolism, for instance, varies significantly across cultures. The color orange serves as a prime example: in Southeast Asia, it represents spiritual dedication, as seen in monks’ saffron robes, while in Western cultures, it’s associated with autumn, Halloween, and warning signs. An AI system might not inherently understand these nuances, potentially leading to designs that miss the mark or even offend certain audiences.” [Read More]
Paul continues by suggesting a number of strategies to help creatives, designers and marketers mindfully and responsibly navigate these challenges, arguing that investing in cultural education and ensuring diversity within your own organisation is paramount to raising awareness of the potential issues that can surface. The article also explores the growing problem of plagiarism and copyright infringement when implementing AI within graphic design, and highlights a number of practices that should allow designers to harness the potential of AI whilst safeguarding their own work and the integrity of their company’s brand and reputation.
“Staying informed about evolving copyright and trademark laws in the AI space is vital for designers working with these technologies. The legal landscape is rapidly changing, and what may be permissible today could be problematic tomorrow.” [Read More]
The article also covers the highly contentious issue of ownership of AI-generated content amidst a legal system that still appears to be finding its feet. Paul explains how the uncertainty arising from the use of AI has created a complex legal grey area for designers and brands alike – if an individual or organisation cannot own an AI-generated image, then who does?
“The ambiguity in ownership raises several practical concerns for designers and businesses. If no one definitively owns the result of an AI-generated image, it potentially leaves the door open for competitors or other parties to use or repurpose that content without legal repercussions. Moreover, the current legal uncertainty could lead to complex disputes in the future. For instance, if a designer uses an AI tool to create a logo for a client, questions may arise about who truly owns the design – the designer, the client, or neither party. This ambiguity could complicate contractual relationships and potentially lead to legal conflicts.” [Read More]
Paul concludes with a series of tips and strategies to help safeguard those working with AI-generated content, from establishing robust, brand-specific AI guidelines and implementing contractual agreements, through to focusing on the value added by exclusively human attributes: a sense of context, selection, refinement, and the application of discretion and insight when incorporating AI-generated design elements.
“By embracing AI responsibly, designers can push the boundaries of their craft while upholding essential ethical standards. Continuous learning, open dialogue, and a commitment to ethical practices will be key as we navigate this new frontier. Ultimately, the goal is to view AI as a tool that enhances human creativity rather than a replacement for it.” [Read More]
You can read the full article at the link below.
There are many interesting DAM implications. A good portion of the ethical issues with AI stem from imbalanced data producing biased outputs, and a strong DAM foundation can help mitigate such data imbalances. As DAM professionals, we can help build databases that yield AI outputs reflecting accurate information and diverse perspectives, and with DAM professionals influencing AI development, strategies for ethical AI become easier to implement. Additionally, the experience DAM professionals have in negotiating licensing agreements can help inform solutions to the copyright issues that arise with AI, such as the question of who owns the output of an AI system: the developer, the provider, or the prompter.