“There’s a lot of buzz about AI agents. Robots that do more with less supervision—what could go wrong? We asked our community how this might shake up how we think about UX.”
“WaveMaker AutoCode is an AI-powered WaveMaker plugin crafted for Figma, enabling product teams to jump from design to code in an instant. AutoCode produces a fully-functional app, with pixel-perfect translation of design elements, app navigations, and intended interactions.” (See the official press release for more details.)
Anima has been working on design-to-code tools since before the recent AI craze. A few months ago, they added support for shadcn/ui components, which I tried last week on my current project designed with this library.
Unlike v0, they parse the Figma file and get a lot of details right. I was impressed with how accurately it selected shadcn/ui components, even if layers weren’t explicitly named or instances were detached in the mockup. It becomes obvious that parsing a file is the right approach when different components look the same on the surface. For example, the trigger for opening a dropdown or date picker uses the same button, but they are different Figma components under the hood, and Anima chose their counterparts in code correctly.
Exporting custom colors and typography variables to a Tailwind config is also a nice touch. I ran into a few issues with excessive Tailwind styling and newer shadcn/ui components like the Sidebar not being recognized, but overall, this clearly feels like the right direction.
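For context, this kind of export maps Figma color and typography variables onto the `theme.extend` section of a Tailwind config. A minimal sketch of what such an exported config might look like (the token names and values here are hypothetical, not Anima's actual output):

```javascript
// tailwind.config.js — hypothetical tokens exported from Figma variables
/** @type {import('tailwindcss').Config} */
module.exports = {
  content: ['./src/**/*.{ts,tsx}'],
  theme: {
    extend: {
      colors: {
        // Figma color variables become named Tailwind colors,
        // usable as `bg-brand`, `text-brand-muted`, etc.
        brand: {
          DEFAULT: '#4f46e5',
          muted: '#a5b4fc',
        },
      },
      fontFamily: {
        // Typography variables map to font utilities, e.g. `font-heading`
        heading: ['Inter', 'sans-serif'],
      },
      fontSize: {
        'heading-lg': ['2rem', { lineHeight: '2.5rem' }],
      },
    },
  },
};
```

Keeping tokens in `theme.extend` (rather than `theme`) preserves Tailwind's default palette and type scale alongside the exported design tokens.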
Vercel shares best practices on importing your designs from Figma to v0 and working with shadcn/ui. I was excited about this integration until I realized it simply exports the Figma frame as an image and passes it to v0’s AI vision. Information about Auto Layout, spacing, color tokens, and typography is not preserved from Figma but inferred from the image. That’s fine for rough prototypes, but there is a better way.
Two new AI features: quickly search top Community files to find the assets you need, and increase the resolution and clarity of your images in one click, right in the editor.
The new Lovable and Builder.io integration lets you turn Figma designs into full applications. Lovable is a full-stack AI software engineer and editing environment. It’s designed to let you quickly create and iterate on your projects so you can move from an idea to a real application, deployed live, without wrangling complex tools or environments. AI-Powered Figma to Code is Builder.io’s toolchain that leverages AI models to convert Figma designs into React, Vue, Svelte, Angular, HTML, and other code in real time. It can even reuse your existing components in the output and use any styling library or system you need.
So, by using the integration, you can convert Figma mockups into code with Builder.io and then open them in Lovable, where you can add new functionality or connect it to real data from Supabase. Soon, you’ll be able to update your app in Lovable whenever designs change in Figma. AI will merge the design changes while keeping all your custom code intact. (Unrelated, this combo was most recommended in answers to this question about the best AI tool for turning designs into a website.)
Vincent van der Meulen, Design Engineer at Figma, talks about Figma’s approach of complementing designers rather than replacing them as part of the SaaStr AI Summit panel. They follow four key AI principles: improve existing user behaviors, embrace frequent iteration, maintain systematic quality control, and foster cross-functional collaboration.
Speaking of shadcn/ui, Matt Wierzbicki published a new plugin using Claude 3.5 Sonnet (requires an API key) to convert Figma designs into production-ready shadcn/ui and Tailwind CSS code. It’s tailored to work best with his commercial shadcn/ui kit for Figma, but I’d expect it to work with Luis’ kit as well.
Moty Weiss shares his experience capturing and preserving brand consistency with AI illustrations. The idea of analyzing existing brand illustrations with ChatGPT to create a foundational prompt for Midjourney really stood out to me. The resulting illustrations adhere to the brand style and have a unique voice, looking very different from the AI-generated images flooding the internet. While they still need some work, the new tools are truly empowering: “While Midjourney’s results may still require final touches — such as vector conversion, line refinement, detail enhancement, and final polish from a professional illustrator — it represents a significant step toward independence for designers who struggle with illustration.”
A new plugin from Meng To turns Figma designs into production-level code with the power of Claude AI and GPT-4o. I mentioned it in the last newsletter, and it looks very promising so far. The plugin is free, but you’ll need to bring your own API keys.
Watch the video where Meng explains his Figma to SwiftUI code workflow.
AI is a big help in developing software, but this plugin takes it to another level: “Artifig is an AI-powered Figma plugin that empowers anyone to build their own Figma plugins using just natural language. No coding needed — simply describe what you want, and watch as your idea transforms into a fully functional, real-time plugin.” See examples in a thread from one of the authors.
“An AI assistant that does the boring stuff for you. MATE supports you in your small boring tasks, allowing you to focus on the not boring things. Ask it to rearrange elements, create a color palette, change the stroke for hundreds of items, apply random opacity to selected items, rename variables, and much more.” Watch the demo.
If you’re curious about the new wave of AI-based development tools, I found this review of Cursor quite insightful: “A few months into using Cursor as my daily driver for both personal and work projects, I have some observations to share about whether this is a “need-to-have” tool or just a passing fad, as well as strategies to get the most benefit quickly which may help you if you’d like to trial it. Some of you may have tried Cursor and found it underwhelming, and maybe some of these suggestions might inspire you to give it another try.”
Unblocked is a new image editing plugin powered by AI for generating images and vector graphics, erasing objects and backgrounds, adding generative fills, vectorizing images, and turning vectors into 3D renders.
Paint over any object or person in an image to remove it completely.
“After months of iterative development, including a closed beta and continuous refinement using our eval plugin, we were ready for a broader launch. Looking back, shipping this work was guided by four key principles: 1) AI for existing workflows: We applied AI to streamline tasks that users already perform, like file browsing and copying frames into their current file. 2) Rapid iteration: We continuously shipped updates to our staging environment, using insights from our internal beta to refine features. 3) Systematic quality checks: We developed custom evaluation tools to monitor and improve search result accuracy. 4) Cross-disciplinary teamwork: Our success stemmed from close cooperation across product, content, engineering, and research.”
Misha Frolov provides an overview of how the new AI tools change the workflow.
Co-founders of Sketch shared their stance on AI: “We’re not ready to make a move with AI just yet — for reasons that will become clear. However, we wanted to share the principles that will guide our approach when that time comes.”
I respect their position on using AI to aid designers but never to create designs. To me, Make Design was the least exciting AI feature announced at Config, and I’m glad it was reframed as First Draft during the relaunch a few weeks ago. Their focus on privacy and being local-first is a smart way to differentiate from Figma and offer something unique, even if that required burying Sketch Cloud first.
The AI feature Make Design is back under a new name, First Draft, which I greatly prefer as it sets more accurate expectations. (Curiously, that was the original internal project name.) “We’re also introducing some key updates, like letting you choose from one of four libraries depending on your needs — whether it’s a wireframing library to help you sketch out less opinionated, lo-fi primitives, or higher-fidelity libraries to provide more visual expressions or patterns to explore.”
I believe that wasn’t previously shared: “Our vision is for First Draft to extend beyond our current libraries and allow organizations to incorporate their own custom libraries. In the future, teams will be able to draft ideas using their company’s unique design language without having to sift through hundreds of components by hand.”