Coming soon: “An AI assistant that does the boring stuff for you. MATE supports you in your small boring tasks, allowing you to focus on the not boring things. Ask it to rearrange elements, create a color palette, change the stroke for hundreds of items, apply random opacity to selected items, rename variables, and much more.”
A retrospective on an issue with Make Designs from Noah Levin, a VP of Design at Figma. First, a reminder on how the feature works: “[…] Make Designs feature employs three parts: a model, some context, and a prompt. This feature currently uses a collection of off-the-shelf models like OpenAI’s GPT-4o and Amazon’s Titan model—the same generally available models that anyone can use—and we have not done any additional training or fine-tuning. To give the model enough freedom to compose designs from a wide variety of domains, we commissioned two extensive design systems (one for mobile and one for desktop) with hundreds of components, as well as examples of different ways these components can be assembled to guide the output.”
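The three-part setup Levin describes — an off-the-shelf model, context from a commissioned design system, and the user's prompt — can be sketched roughly like this. All names and structures below are hypothetical illustrations, not Figma's actual implementation.

```typescript
// Hypothetical sketch: an off-the-shelf model, context drawn from a
// commissioned design system, and the user's prompt, assembled into one
// generation request. Not Figma's code.

interface DesignSystemComponent {
  name: string;                     // e.g. "NavBar", "Card"
  platform: "mobile" | "desktop";   // two systems were commissioned
}

interface GenerationRequest {
  model: string;                    // a generally available model id
  context: DesignSystemComponent[]; // components the model may assemble
  examples: string[];               // sample assemblies guiding the output
  prompt: string;                   // the user's description
}

function buildRequest(
  prompt: string,
  platform: "mobile" | "desktop",
  library: DesignSystemComponent[],
  examples: string[]
): GenerationRequest {
  return {
    model: "gpt-4o", // off-the-shelf per the quote; no fine-tuning
    context: library.filter((c) => c.platform === platform),
    examples,
    prompt,
  };
}

const req = buildRequest(
  "a settings screen",
  "mobile",
  [
    { name: "NavBar", platform: "mobile" },
    { name: "Sidebar", platform: "desktop" },
  ],
  ["NavBar + List + Footer"]
);
console.log(req.context.map((c) => c.name)); // only mobile components remain
```

The point of the sketch: the model itself is unchanged; quality depends almost entirely on what goes into `context` and `examples` — which is exactly where the unvetted assets slipped in.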
What went wrong: “We carefully reviewed the underlying design systems throughout the course of development and during a private beta. But in the week leading up to Config, new components and example screens were added that we simply didn’t vet carefully enough. A few of those assets were similar to aspects of real world applications, and appeared in the output of the feature with certain prompts.”
Artiom Dashinsky asked a lawyer to check how Figma AI affects his work’s copyright. The good part: “You own the copyright for your work. You also own the copyright for the work Figma generates for you with AI.” The bad part: “Let’s say you create a mood board with screenshots of others’ designs. You don’t own the copyright for these designs, but now you’ve allowed Figma to train their AI on it. Now you’ve violated the copyright of the original owner.”
Ridd noticed that designers who can code spend more time sketching their ideas and less time in Figma. This approach isn’t common because coding designs still takes too long, but AI will change that. What if, instead of generating polished mockups from text prompts, we used AI to turn wireframes into frontend code, apply our design system, and tweak the visual direction based on a provided mood board? (This is just one of the ideas explored in the new section of Dive.)
Building a Figma plugin with a server side and API calls in 2 hours using Claude AI.
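A minimal sketch of what the “server side” of such a plugin might look like: an endpoint handler that forwards a task to an LLM. The route, payload shape, and `callLlm` stub are assumptions for illustration — swap in a real client (e.g. Anthropic’s SDK) and add auth before building anything like this.

```typescript
// Illustrative server-side handler for a Figma plugin. The plugin's UI
// thread would fetch() this endpoint (with "networkAccess" declared in the
// plugin's manifest.json). Shapes and names are hypothetical.

type LlmCall = (prompt: string) => Promise<string>;

// Handler kept separate from the HTTP server wiring so it is easy to test.
async function handleTask(
  body: { task: string; input: string },
  callLlm: LlmCall
): Promise<{ result: string }> {
  const prompt = `Task: ${body.task}\nInput: ${body.input}`;
  return { result: await callLlm(prompt) };
}

// Usage with a stubbed LLM call:
const fakeLlm: LlmCall = async (p) => `echo: ${p}`;
handleTask({ task: "rename", input: "Frame 1" }, fakeLlm).then((r) =>
  console.log(r.result)
);
```

Injecting the LLM call as a function is what makes a two-hour build tractable: you can develop and test the plugin end to end against a stub before touching a real API key.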
Rogie King has another example of roughening up icons for wireframes.
Design Systems WTF podcast from zeroheight: “AI tools are transforming the landscape, making it easier than ever to create and design. Is this making everyone a designer? Will design system makers have to herd even more cats? We’ll be joined by special guest Pablo Stanley, the brilliant co-founder of two AI-based design tools, Musho and Lummi. We’ll unpack the potential of AI-based design tools and some risks. Join us for a lively conversation filled with spicy takes about how AI is reshaping the boundaries of design.”
Abdus Salam, Product Designer at Meta, writes at UX Collective: “The future belongs to designers who can master AI, not be mastered by it. Our value lies not just in our technical skills, but in our creativity, our empathy, and our ability to wield these tools in service of crafting experiences that resonate on a profoundly human level.” Also: “while AI can help us reach “good”, achieving “great” still requires human ingenuity and an unwavering commitment to quality.”
Mia Blume from Designing with AI with a controversial take: “In fact, I think Figma AI changes nothing for design. […] Besides the immediately useful feature of smart naming, Figma AI doesn’t alter the existing trajectory. This isn’t a dig on Figma, or its role in the future of tooling. It’s more that as a discipline, we were already “here”—some people just didn’t realize it.” If you’ve been paying attention to the AI-related links in this newsletter, you’ll likely agree with her points.
Bingo: “The real weakness that jeopardizes our field has existed long before the invention of generative AI tools for creatives. If our value (even if it’s only perceived) lies solely in drawing boxes, then we will inevitably become obsolete. And if we remain focused on the wrong things, we will miss the moment in which we could do something about it.” In the end, she suggests three key areas that design leaders can focus on.
In this episode of Dive recorded at Config, Ridd talks to Figma design engineer Vincent van der Meulen about how the new Visual Search feature was born from a mid-project pivot. Don’t miss Vincent’s original pitch video for visual search in Figma.
A very timely episode of Dive, where Ridd interviews Jordan Singer live at Config about his journey from the Diagram acquisition to Figma’s 2024 AI release. In the middle, they discuss how Figma’s generative features work and why they needed to create a UI kit. (A funny inception moment — at 45:22, I’m coming into view to take this picture.)
I did not expect to see Adobe as an example of best practices: “Adobe has seen massive outcry from its customers, when their old T&Cs suggested Adobe *could* train on customer work. This is why I’m baffled Figma enrolls paying customers (if they are non-enterprise) to GenAI training, by default.”
Jay Peters from The Verge spoke to Kris Rasmussen about the issue. “We’re doing a pass over the bespoke design system to ensure that it has sufficient variation and meets our quality standards. That’s the root cause of the issue. But we’re going to take additional precautions before we re-enable [Make Designs] to make sure that the entire feature meets our quality standards and is consistent with our values.”
The next day, Dylan Field posted a thread stating that “the accusations around data training in this tweet are false” and reiterating that Make Designs “uses off-the-shelf LLMs, combined with design systems we commissioned to be used by these models.”
The Make Designs feature was disabled until the team completes a full QA pass on the underlying design system.
Last Monday, Andy Allen from Not Boring Software asked Figma AI to design a “not boring weather app.” The generated result was almost a copy of Apple’s Weather app on iOS, even when using a different prompt. Figma CTO Kris Rasmussen commented that they’re investigating and clarified that “there was no training as part of this feature or any of our generative features,” so the similarities are a function of the 3rd-party models and commissioned design systems.
Contextually rename and organize all the layers in your file. Figma AI will choose a name by using a layer’s contents, location, and relationship to other selected layers.
Use “Rewrite this…” to generate copy from scratch or tailor your copy’s tone to your intended audience. Use “Shorten” to rewrite any text layer that needs to be more concise. “Translate to…” can help you preview what your UX copy will look like in another language.
See also Replace text content with AI on using the text content from the first element in a series of duplicated elements to populate the remaining elements.
Make prototype lets you create interactions and connections between frames in your selection. This is helpful if you want to build a basic prototype flow quickly from your designs. This feature can create simple flows between a selection of top-level frames, add interactions to Back or Next buttons, and link to individual frames from a navigation menu.
Software Engineer Jediah Katz shares 5 of his favorite tips for making the most of the “Make prototype” AI tool: name your layers, properly group layers, select only interactive elements instead of entire screens, review the results, and undo if unhappy.
“Make Designs, which lives in the new Actions panel, allows you to quickly generate UI layouts and component options from text prompts. Just describe what you need, and Figma will provide a first draft to help you explore various design directions and kickstart your process.”
See also Make an image with AI on how to make images to add to your designs and remove the background from any existing image.
Great observation from Nate Baldwin on the new “Make Designs.”