Building a Figma plugin with a server-side component and API calls in 2 hours using Claude AI.
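For readers curious what a plugin like that looks like, here is a minimal hedged sketch of the client half: a Figma plugin's main thread that sends the current selection's layer names to a companion server. The endpoint `https://example.com/summarize`, the payload shape, and the `buildSummaryRequest` helper are all invented for illustration; a real plugin would also need the server's domain allowed under `networkAccess` in `manifest.json`.

```typescript
// Hypothetical sketch of a Figma plugin main thread that calls a
// companion server. The endpoint and payload shape are assumptions,
// not from the linked example.

declare const figma: any; // provided by Figma's plugin runtime

// Pure helper: turn layer names into the JSON body our imagined
// server expects.
function buildSummaryRequest(layerNames: string[]): string {
  return JSON.stringify({ layers: layerNames, count: layerNames.length });
}

// Plugin entry point. Runs inside Figma; fetch() is available in the
// plugin sandbox when the target domain is listed in manifest.json.
async function run(): Promise<void> {
  const names: string[] = figma.currentPage.selection.map(
    (node: any) => node.name,
  );
  const res = await fetch("https://example.com/summarize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: buildSummaryRequest(names),
  });
  const summary = await res.json();
  figma.notify(`Server says: ${summary.text}`);
  figma.closePlugin();
}
```

Keeping the request-building logic in a pure function like `buildSummaryRequest` makes it easy to test outside the Figma sandbox, which is handy since the plugin runtime itself can't be run locally.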
Rogie King has another example of roughening up icons for wireframes.
Design Systems WTF podcast from zeroheight: “AI tools are transforming the landscape, making it easier than ever to create and design. Is this making everyone a designer? Will design system makers have to herd even more cats? We’ll be joined by special guest Pablo Stanley, the brilliant co-founder of two AI-based design tools, Musho and Lummi. We’ll unpack the potential of AI-based design tools and some risks. Join us for a lively conversation filled with spicy takes about how AI is reshaping the boundaries of design.”
Abdus Salam, Product Designer at Meta, writes at UX Collective: “The future belongs to designers who can master AI, not be mastered by it. Our value lies not just in our technical skills, but in our creativity, our empathy, and our ability to wield these tools in service of crafting experiences that resonate on a profoundly human level.” Also: “while AI can help us reach “good”, achieving “great” still requires human ingenuity and an unwavering commitment to quality.”
Mia Blume from Designing with AI with a controversial take: “In fact, I think Figma AI changes nothing for design. […] Besides the immediately useful feature of smart naming, Figma AI doesn’t alter the existing trajectory. This isn’t a dig on Figma, or its role in the future of tooling. It’s more that as a discipline, we were already “here”—some people just didn’t realize it.” If you’ve been paying attention to the AI-related links in this newsletter, you’ll likely agree with her points.
Bingo: “The real weakness that jeopardizes our field has existed long before the invention of generative AI tools for creatives. If our value (even if it’s only perceived) lies solely in drawing boxes, then we will inevitably become obsolete. And if we remain focused on the wrong things, we will miss the moment in which we could do something about it.” In the end, she suggests three key areas that design leaders can focus on.
In this episode of Dive recorded at Config, Ridd talks to Figma design engineer Vincent van der Meulen about how the new Visual Search feature was born from a mid-project pivot. Don’t miss Vincent’s original pitch video for visual search in Figma.
I did not expect to see Adobe as an example of best practices: “Adobe has seen massive outcry from its customers, when their old T&Cs suggested Adobe *could* train on customer work. This is why I’m baffled Figma enrolls paying customers (if they are non-enterprise) to GenAI training, by default.”
A very timely episode of Dive, where Ridd interviews Jordan Singer live at Config about his journey from the Diagram acquisition to Figma’s 2024 AI release. In the middle, they discuss how Figma’s generative features work and why they needed to create a UI kit. (A funny inception moment — at 45:22, I’m coming into view to take this picture.)
Jay Peters from The Verge spoke to Kris Rasmussen about the issue. “We’re doing a pass over the bespoke design system to ensure that it has sufficient variation and meets our quality standards. That’s the root cause of the issue. But we’re going to take additional precautions before we re-enable [Make Designs] to make sure that the entire feature meets our quality standards and is consistent with our values.”
The next day, Dylan Field posted a thread stating that “the accusations around data training in this tweet are false” and reiterating that Make Designs “uses off-the-shelf LLMs, combined with design systems we commissioned to be used by these models.”
The Make Designs feature has been disabled until the team completes a full QA pass on the underlying design system.
Last Monday, Andy Allen from Not Boring Software asked Figma AI to design a “not boring weather app.” The generated result was almost a copy of Apple’s Weather app on iOS, even when using a different prompt. Figma CTO Kris Rasmussen commented that they’re investigating and clarified that “there was no training as part of this feature or any of our generative features,” so the similarities are a function of the 3rd-party models and commissioned design systems.
Contextually rename and organize all the layers in your file. Figma AI will choose a name by using a layer’s contents, location, and relationship to other selected layers.
Use “Rewrite this…” to generate copy from scratch or tailor your copy’s tone to your intended audience. Use “Shorten” to make any text layer more concise. “Translate to…” can help you preview what your UX copy will look like in another language.
See also Replace text content with AI on using text content from the first element in a series of duplicated elements to populate content in the remaining elements.
Make prototype lets you create interactions and connections between frames in your selection. This is helpful if you want to build a basic prototype flow quickly from your designs. This feature can create simple flows between a selection of top-level frames, add interactions to Back or Next buttons, and link to individual frames from a navigation menu.
Software Engineer Jediah Katz shares 5 of his favorite tips for making the most of the “Make prototype” AI tool: name your layers, properly group layers, select only interactive elements instead of entire screens, review the results, and undo if unhappy.
“Make Designs, which lives in the new Actions panel, allows you to quickly generate UI layouts and component options from text prompts. Just describe what you need, and Figma will provide a first draft to help you explore various design directions and kickstart your process.”
See also Make an image with AI on how to make images to add to your designs and remove the background from any existing image.
Great observation from Nate Baldwin on the new “Make Designs.”
Designer Marco Cornacchia explains how it works. See also his follow-up thread on why the new Asset Search marks the end of the “design graveyard.”
Design Engineer Vincent van der Meulen explains how it was built.
Figma’s approach to AI model training: “All of the generative features we’ve launched to date are powered by third-party, out-of-the-box AI models and were not trained on private Figma files or customer data. We fine-tuned visual and asset search with images of user interfaces from public, free Community files.”
Admins can control AI use and content training with two new settings, which they can turn on or off anytime. By default, content training is enabled for Starter and Professional plans and disabled for Organization and Enterprise plans. The content training setting takes effect on August 15th, 2024.
“We’re introducing Visual Search to help you more easily find what you’re looking for with a single reference. Search for anything from icons to entire design files with a screenshot, a selected frame, or even a simple sketch with the pencil tool, and Figma will pull in similar designs from team files you have access to. And with improved Asset Search, Figma now uses AI to understand the context behind your search queries. You can easily discover assets — even if your search terms don’t match their names.”