In this episode of Dive recorded at Config, Ridd talks to Figma design engineer Vincent van der Meulen about how the new Visual Search feature was born from a mid-project pivot. Don’t miss Vincent’s original pitch video for visual search in Figma.
I did not expect to see Adobe as an example of best practices: “Adobe has seen massive outcry from its customers, when their old T&Cs suggested Adobe *could* train on customer work. This is why I’m baffled Figma enrolls paying customers (if they are non-enterprise) to GenAI training, by default.”
A very timely episode of Dive, where Ridd interviews Jordan Singer live at Config about his journey from the Diagram acquisition to Figma’s 2024 AI release. In the middle, they discuss how Figma’s generative features work and why they needed to create a UI kit. (A funny inception moment — at 45:22, I’m coming into view to take this picture.)
Jay Peters from The Verge spoke to Kris Rasmussen about the issue. “We’re doing a pass over the bespoke design system to ensure that it has sufficient variation and meets our quality standards. That’s the root cause of the issue. But we’re going to take additional precautions before we re-enable [Make Designs] to make sure that the entire feature meets our quality standards and is consistent with our values.”
The next day, Dylan Field posted a thread stating that “the accusations around data training in this tweet are false” and reiterating that Make Designs “uses off-the-shelf LLMs, combined with design systems we commissioned to be used by these models.”
The Make Designs feature has been disabled until the team completes a full QA pass on the underlying design system.
Last Monday, Andy Allen from Not Boring Software asked Figma AI to design a “not boring weather app.” The generated result was almost a copy of Apple’s Weather app on iOS, even with a different prompt. Figma CTO Kris Rasmussen commented that they’re investigating and clarified that “there was no training as part of this feature or any of our generative features,” so the similarities are a function of the third-party models and commissioned design systems.
Contextually rename and organize all the layers in your file. Figma AI will choose a name by using a layer’s contents, location, and relationship to other selected layers.
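To make the description above more concrete, here is a rough plugin-style sketch, not Figma AI’s implementation: it only gathers the kind of context the feature mentions (a layer’s contents, position, and siblings) and applies a placeholder heuristic where the real feature would call a model. `contextFor` and `suggestName` are hypothetical helpers introduced for illustration.

```typescript
// A rough sketch, not Figma AI's implementation: gather the kind of context
// the feature describes (contents, position, siblings) for each selected layer,
// then apply a placeholder heuristic where the real feature would call a model.
function contextFor(node: SceneNode): string {
  const contents = node.type === "TEXT" ? `"${node.characters}"` : node.type;
  const siblings = node.parent
    ? node.parent.children.filter((c) => c !== node).map((c) => c.name)
    : [];
  return `${contents} at (${Math.round(node.x)}, ${Math.round(node.y)}); siblings: ${
    siblings.join(", ") || "none"
  }`;
}

// Placeholder heuristic standing in for the model call.
function suggestName(node: SceneNode): string {
  if (node.type === "TEXT") return node.characters.trim().slice(0, 24) || "Text";
  return node.type.charAt(0) + node.type.slice(1).toLowerCase();
}

for (const node of figma.currentPage.selection) {
  console.log(contextFor(node)); // the context a naming model would receive
  node.name = suggestName(node);
}
```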
Use “Rewrite this…” to generate copy from scratch or tailor your copy’s tone to your intended audience. Use “Shorten” to make any text layer more concise. “Translate to…” lets you preview what your UX copy will look like in another language.
See also Replace text content with AI on using text context from the first element in a series of duplicated elements to populate content in the remaining elements.
Make prototype lets you create interactions and connections between frames in your selection. This is helpful if you want to build a basic prototype flow quickly from your designs. This feature can create simple flows between a selection of top-level frames, add interactions to Back or Next buttons, and link to individual frames from a navigation menu.
Software Engineer Jediah Katz shares 5 of his favorite tips for making the most of the “Make prototype” AI tool: name your layers, properly group layers, select only interactive elements instead of entire screens, review the results, and undo if unhappy.
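To make “simple flows between a selection of top-level frames” concrete, here is a minimal plugin-style sketch, assuming the current Plugin API’s setReactionsAsync; it only illustrates the kind of wiring involved and is not how the AI feature itself is implemented.

```typescript
// A minimal sketch, not Figma's implementation: link each selected top-level
// frame to the next with an on-click NAVIGATE reaction, in selection order.
async function wireSimpleFlow() {
  const frames = figma.currentPage.selection.filter(
    (node): node is FrameNode => node.type === "FRAME"
  );

  for (let i = 0; i < frames.length - 1; i++) {
    // setReactionsAsync replaces the frame's existing prototype reactions.
    await frames[i].setReactionsAsync([
      {
        trigger: { type: "ON_CLICK" },
        actions: [
          {
            type: "NODE",
            destinationId: frames[i + 1].id,
            navigation: "NAVIGATE",
            transition: null,
          },
        ],
      },
    ]);
  }

  figma.notify(`Linked ${frames.length} frames into a flow`);
}

wireSimpleFlow();
```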
“Make Designs, which lives in the new Actions panel, allows you to quickly generate UI layouts and component options from text prompts. Just describe what you need, and Figma will provide a first draft to help you explore various design directions and kickstart your process.”
See also Make an image with AI on how to make images to add to your designs and remove the background from any existing image.
Great observation from Nate Baldwin on the new “Make designs.”
Designer Marco Cornacchia explains how it works. See also his follow-up thread on why the new Asset Search marks the end of the “design graveyard.”
Design Engineer Vincent van der Meulen explains how it was built.
Figma’s approach to AI model training: “All of the generative features we’ve launched to date are powered by third-party, out-of-the-box AI models and were not trained on private Figma files or customer data. We fine-tuned visual and asset search with images of user interfaces from public, free Community files.”
Admins control AI use and content training through two new settings, which they can turn on or off at any time. By default, content training is enabled for Starter and Professional plans and disabled for Organization and Enterprise plans. The content training setting takes effect on August 15th, 2024.
“We’re introducing Visual Search to help you more easily find what you’re looking for with a single reference. Search for anything from icons to entire design files with a screenshot, a selected frame, or even a simple sketch with the pencil tool, and Figma will pull in similar designs from team files you have access to. And with improved Asset Search, Figma now uses AI to understand the context behind your search queries. You can easily discover assets — even if your search terms don’t match their names.”
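Features like this are generally built on embedding similarity: the query (a screenshot, frame, or sketch) and the candidate assets are mapped to vectors, and results are ranked by distance. The sketch below illustrates only that general idea and says nothing about Figma’s actual system; the embedding vectors are assumed to already exist.

```typescript
// A generic illustration of embedding-based visual search, not Figma's system:
// rank assets by cosine similarity to a query embedding.
type Asset = { name: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

// Return the k assets most similar to the query embedding.
function search(queryEmbedding: number[], assets: Asset[], k = 5): Asset[] {
  return [...assets]
    .sort(
      (a, b) =>
        cosine(queryEmbedding, b.embedding) - cosine(queryEmbedding, a.embedding)
    )
    .slice(0, k);
}
```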
A new landing page for all AI features: “Get started faster, find what you’re looking for, and stay in the flow. Make space for more creativity.” See also a one-minute demo reel of new features.
Remember that these features are currently in beta and will become paid next year: “Our AI features will be free for all users during the beta period, which runs through 2024. As we learn how these tools are used and their underlying costs for Figma, we may need to introduce usage limits for the beta. When Figma AI becomes generally available, we’ll provide clear guidance on pricing.”
Introducing new Visual Search and upgraded Asset Search, AI-powered text and content generation to help you quickly populate your designs with realistic content, image background removal, turning static mocks into interactive prototypes, automated layer naming, and even design generation from text prompts. “Whether you’re searching for inspiration, exploring multiple directions, or looking to automate tedious tasks, we’re building Figma AI to unblock you at any stage.”
Gabriel Valdivia on Figma AI: “Right before Figma’s keynote announcing the “make designs” button, I “made code” with another app. On one hand, people can now use Figma to replace designers, while on the other hand, I’m using Cursor to replace engineers. I’m stuck in the middle feeling simultaneously disempowered as a designer and completely empowered to make new software.”
June 26th, 7–10 PM. “Join us on the first evening of Config for a special event celebrating designers at the forefront of building with AI. Connect with over 50 designers and see lightning demos from companies like Perplexity, Visual Electric, Chroma and more.”