What if you could generate visual materials such as ads, illustrations and app layouts at the touch of a button? You already can. Kind of.

Building a truly intelligent system that understands what kind of materials should be created to meet the needs of a particular audience – a system that understands human emotions, behaviour and the concept of beauty – is still a way off.

However, creating a specialized deep learning system that automates parts of the design process designers currently do manually is entirely feasible, and promising tools built to do just that are already surfacing.

Examples of creative automation tools powered by deep learning

Colormind.io
Colormind specializes in making colour theory easily accessible. It’s an efficient tool for trying out colour combinations and exploring different variations.

Google’s Autodraw
Just plain fun, but also usable for making quick illustrations, Autodraw tries to understand your scribbles and match an existing symbol to them – instead of relying on word-based search.

Fontjoy
Combining typefaces can be hard. Fontjoy uses deep learning to generate font pairings. But as with many deep learning tools, it is not very good at judging whether its results are aesthetically pleasing (such things are often difficult to "calculate"), so take the suggestions with a grain of salt.

Brandmark.io
Brandmark uses deep learning to create logos: it combines attributes you provide with your company name to generate a design. You probably won't create legendary, timeless logos with it, but it's a surprisingly good tool when you need something rudimentary fast.

Adobe Sensei
Adobe’s family of AI-powered tools. It showcases a lot of small innovations for automating parts of the creative process we’ve been doing "by hand" for a long time.

Here’s a good practical example of how Adobe’s new tools can speed up the creation of different variants of a poster design.

Good news designers: we still need designers

What is common for most of the deep learning tools is that they remove tedious parts of the design process, making it faster to prototype and test different combinations. But the tools, while powerful, do not magically know if what they are generating is the right choice for the situation – or even if it actually looks good. You still need a designer to curate these generated results.

We’re in a situation similar to when desktop publishing emerged 33 years ago

Even though the tools are still lacking and have a long way to go, I think what we’re looking at is nothing less than a revolution similar to the rise of desktop publishing 33 years ago.

Movable type on a composing stick on a type case. Image from Wikipedia.

The tedious analogue methods of creating graphic design and magazine layouts went extinct as new digital tools emerged – making things like fast font changes, layout tweaks and non-destructive photo editing available for everyone willing to adopt the new tools.

These tools eliminated the time-consuming aspects from the design process, enabling designers to experiment and produce more in a shorter timeframe.
So while these digital tools make the creative process a lot faster and more flexible, you still need someone with an understanding of design principles to judge whether the results are usable. This also holds true for deep learning tools.

For now.

Example case: Netflix

Netflix uses an algorithm that analyses your viewing history to gauge your taste in films, then creates preview images of films and TV shows that it believes will appeal to you.

For example, if you watch a lot of romantic movies, the algorithm will emphasize that aspect in the preview images for you. Or, if the algorithm detects you like comedies, the preview of the 1997 drama Good Will Hunting will show co-star Robin Williams in the foreground. Read more about the personalization algorithm on Netflix’s tech blog.
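The core idea can be sketched as picking, from a set of candidate images, the one whose attributes best match a viewer's genre affinities. This is a hypothetical, heavily simplified illustration – the image IDs, tags and scoring are invented, and Netflix's real system is a far more sophisticated personalization pipeline:

```python
# Hypothetical sketch of genre-aware artwork selection. All names and
# scores are invented for illustration; this is not Netflix's system.

def pick_artwork(user_affinity, candidates):
    """Choose the candidate image whose tags best match the user's tastes.

    user_affinity: dict mapping genre tag -> affinity score (0..1)
    candidates:    list of (image_id, tags) pairs
    """
    def score(tags):
        return sum(user_affinity.get(tag, 0.0) for tag in tags)
    return max(candidates, key=lambda c: score(c[1]))[0]

comedy_fan = {"comedy": 0.9, "romance": 0.2, "drama": 0.4}
good_will_hunting_art = [
    ("poster_drama",   ["drama"]),
    ("poster_robin",   ["comedy", "drama"]),   # foregrounds Robin Williams
    ("poster_romance", ["romance", "drama"]),
]
print(pick_artwork(comedy_fan, good_will_hunting_art))  # -> poster_robin
```

For the comedy fan, the Robin Williams artwork scores highest, mirroring the Good Will Hunting example above.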

Example case: Alibaba AI designer – Luban

“During this year’s Single’s Day Event, LuBan designed a whopping 400 million banners for an even wider range of products. If we assume it takes a human designer 20 minutes to design one single banner, then we will need 100 designers to work non-stop for 150 years to produce the same amount.”

— Rexroth Xu, Sr. UX Designer, Alibaba Group

The AI platform Alibaba Luban is capable of generating approximately 8,000 different banner designs per second. Luban is powered by machine learning and was trained on millions of sets of design data.

When you work with large volumes of creative content and existing creative elements, AI can bring huge savings in both labour expenses and time. Read more about Alibaba Luban here.

Example case: Airbnb

Airbnb has internally prototyped a tool that turns rough sketches into functional app views, using Airbnb’s own design system. Read more about it here.

Challenges in designing with AI

In addition to making things easier, AI will also come with its fair share of challenges for designers.

Challenge 1: Customers will have new expectations as time progresses.
As time and technology progress, expectations change: users will come to expect intelligence in certain parts of a service, and designers will need to keep up with these expectations.
Example: big players like Apple and Google already do a lot of intelligent photo and video organisation and editing on their mobile platforms, and this level of automation and intelligence is becoming a standard for many consumers.

Challenge 2: AI experiences need to be designed differently to maintain trust between the system and the user.
In some cases, designing a personality for the AI system is important. In other cases, we need to design around an invisible AI that takes care of things in the background automatically. Both need to be designed so that the user doesn’t feel like they’re giving away control.

Challenge 3: Products and experiences will become more dynamic. Our designs, workflows and tools need to be able to do that too.
Designing for different screen sizes is no longer enough. Instead, designers should start defining the rules for how the product and design should work, and design against real, dynamic data. Design systems are a good way to approach this.
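Designing with rules instead of fixed layouts can be made concrete with a small sketch. This is a hypothetical example (the function, thresholds and field names are invented) of a component that adapts to real content data rather than to an idealized mock-up:

```python
# Hypothetical design-rule sketch: the card layout is derived from the
# actual content and context, not hard-coded per screen size.

def card_layout(title, image_available, viewport_width):
    """Return layout decisions for a content card, driven by rules."""
    return {
        # Narrow viewports collapse to a single column.
        "columns": 1 if viewport_width < 600 else 2,
        # Long, user-generated titles get a smaller type size.
        "title_size": "small" if len(title) > 40 else "large",
        # Only reserve space for an image when one actually exists.
        "show_image": image_available,
    }

print(card_layout("A very long user-generated listing title that overflows",
                  image_available=False, viewport_width=480))
```

The point is that the designer's decisions live in the rules, while the tool applies them to whatever messy real-world data shows up.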

Should designers start looking for new jobs?

Faster and smarter tools will enable designers to focus more on what really matters: the users and the product itself.

Even as designs become faster and easier to create, you still need good quality control and creative direction to put all the pieces in the right places.

And to do that, you need to understand how the technology, design and human beings connect. This is where the designers of the future will stand, finding real-life problems to solve and making the pieces fit.


So no, designers don’t need to start looking for new career paths (unless your only job is creating ad variations, in which case you should probably branch out). Instead, we should start thinking more in systems and rules to be able to adapt to this new world of automation and deep learning.


This post is an adaptation of a talk given at a Turku AI Society event held on 3 October 2018 at Taiste’s HQ. Take a peek at the atmosphere of the event "AI & Design: Design, Ethics & Social Robots".
