Andrew Hogan, Head of Insights at Figma: “With AI and the momentum around ‘just doing things,’ we’re embracing experimentation and building at an eye-watering pace. Still, it’s up to us to steer these tools in the right direction—and if history is any guide, the most valuable innovations may be just around the corner.”
Figma explores five key takeaways from the report, and what they say about the state of design and development:
- Agentic AI is the fastest-growing product category.
- Design and best practices are even more important for AI-powered products than for traditional ones.
- Smaller companies are going all in.
- Designers are less satisfied with the output of AI tools than developers are.
- There are still open questions about how to use AI to make people better at their roles.
If you’re curious about the new gpt-image‑1 model, check out this announcement from OpenAI: “Today, we’re bringing the natively multimodal model that powers this experience in ChatGPT to the API via gpt-image‑1, enabling developers and businesses to easily integrate high-quality, professional-grade image generation directly into their own tools and platforms. The model’s versatility allows it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render text—unlocking countless practical applications across multiple domains.”
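If you want to try it from code, here is a minimal sketch using the official openai Node SDK. The prompt and output file name are placeholders, and note that gpt-image-1 returns base64 image data rather than URLs:

```ts
import OpenAI from "openai";
import { writeFileSync } from "node:fs";

// Assumes OPENAI_API_KEY is set in the environment.
const client = new OpenAI();

// Generate a single image with gpt-image-1; the prompt is a placeholder.
const result = await client.images.generate({
  model: "gpt-image-1",
  prompt: "A flat illustration of a designer collaborating with an AI agent",
  size: "1024x1024",
});

// gpt-image-1 returns base64-encoded image data rather than a URL.
const b64 = result.data?.[0]?.b64_json;
if (b64) writeFileSync("output.png", Buffer.from(b64, "base64"));
```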
The new Edit Image feature lets you change an image with prompts, powered by gpt-image‑1. The Make an Image feature got an AI model picker, so users can choose from gpt-image‑1, Gemini Imagen 3, or Titan V2. Additionally, the AI beta rolled out to all Professional, Organization, and Enterprise plans. If you’re not seeing AI features, check that your Admin has your AI access toggle turned on.
After reading Lenny’s Newsletter for a few years, I recently switched to an annual subscription to take advantage of the incredible value of this bundle. In addition to free annual plans for great productivity tools like Linear, Notion, Perplexity Pro, Superhuman, and Granola, the bundle now also offers the hottest AI tools: Bolt, Lovable, Replit, and v0.
“Superflex helps you write front-end code from Figma, images and prompts while matching your coding style and utilizing your UI components.”
Bold moves from Shopify CEO Tobi Lütke, shared in an internal memo. On general AI usage: “Using AI effectively is now a fundamental expectation of everyone at Shopify. It’s a tool of all trades today, and will only grow in importance.”
On prototyping: “AI must be part of your GSD Prototype phase. The prototype phase of any GSD project should be dominated by AI exploration. Prototypes are meant for learning and creating information. AI dramatically accelerates this process. You can learn to produce something that other team mates can look at, use, and reason about in a fraction of the time it used to take.”
AI skills will be part of performance reviews and will affect future hiring. I highly recommend reading the entire thing.
Karri Saarinen from Linear: “Prompting is essentially like writing a spec, sometimes it’s hard to articulate exactly what you want and ultimately control the outcome. Two people looking for the same thing might get wildly different results just based on how they asked for it, which creates an unprecedented level of dynamism within the product. This shift from deterministic traditional UI to something more unbridled raises a challenge for designers: with no predictable journeys to optimize, how do you create consistent, high-quality experiences?”
Nick Babich explores his process of turning design into code using Lovable and Anima and shares the pros and cons of each tool.
Great post by industry veteran Mike Davidson, offering a few suggestions to those already feeling behind the AI wave: “When it comes down to it, your future in design is the sum of all of your actions that got you here in the first place. The skills you’ve built, the artifacts demonstrated in your portfolio, your helpfulness as a teammate, your reputation as a person, and now more than ever, your curiosity to shed your skin and jump into an undiscovered ocean teeming with new life, hazards, and opportunity. Someone will invent the next CSS, the next Responsive Design, the next sIFR, the next TypeKit, the next IE6 clearfix, and the next Masonry for the AI era. That someone might as well be you.”
Karri Saarinen: “The idea that AI might ruin visual quality feels like a non-issue since there wasn’t much quality to ruin in the first place. […] My general view of AI is that it will just let us do more things, not take away things.”
“This project implements a Model Context Protocol (MCP) integration between Cursor AI and Figma, allowing Cursor to communicate with Figma for reading designs and modifying them programmatically.”
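To make “MCP integration” concrete, here is a heavily simplified sketch of an MCP server exposing a single read-only Figma tool, built on the official TypeScript SDK and the public Figma REST API. The tool name get_figma_file is an illustrative assumption, and this only covers reading; the project’s programmatic editing involves more machinery than shown here:

```ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "figma-mcp-sketch", version: "0.1.0" });

// Illustrative tool: fetch a Figma file's document tree via the public REST API.
server.tool(
  "get_figma_file",
  { fileKey: z.string().describe("Figma file key from the file URL") },
  async ({ fileKey }) => {
    const res = await fetch(`https://api.figma.com/v1/files/${fileKey}`, {
      headers: { "X-Figma-Token": process.env.FIGMA_TOKEN ?? "" },
    });
    const json = await res.json();
    return { content: [{ type: "text", text: JSON.stringify(json.document) }] };
  }
);

// Expose the server over stdio so an MCP client like Cursor can connect to it.
await server.connect(new StdioServerTransport());
```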
Tia Sydorenko argues that our interactions with digital systems “are not just changing; they are shifting in their very essence.” She builds her argument on this insight from Jakob Nielsen: “With the new AI systems, the user no longer tells the computer what to do. Rather, the user tells the computer what outcome they want.”
“Unlike straightforward direct manipulation — such as dragging a file between folders, where actions unfold step by step — AI interactions demand a more fluid, iterative process. Users articulate their goals, but instead of executing every step manually, they collaborate with the system, refining inputs and guiding the AI as it interprets, adjusts, and responds dynamically.”
Visual Electric is now available as a Figma plugin! It’s the first image generator built for designers, so you can ditch stock photography and generate precisely what you need. Requires an account; the free plan includes 20 image generations per month.
“Join Anton Osika (Lovable co-founder), Nad Chishtie (Design @ Lovable) & Steve (Builder.io co-founder) on a livestream where they’ll talk about Builder.io’s new Lovable integration that lets you turn Figma designs into Lovable apps.”
Xinran Ma walks through the creation of an AI automation that instantly categorizes Figma comments and generates a structured summary in Google Docs.
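The workflow is easy to adapt. As a rough sketch of the moving parts, something like this could pull comments from the Figma REST API and ask a model to bucket them; the category names, prompt, and model choice are my assumptions, and the final Google Docs step is stubbed out with a console.log:

```ts
import OpenAI from "openai";

const client = new OpenAI(); // assumes OPENAI_API_KEY is set

// Fetch all comments on a file via Figma's public REST API.
async function fetchComments(fileKey: string): Promise<string[]> {
  const res = await fetch(`https://api.figma.com/v1/files/${fileKey}/comments`, {
    headers: { "X-Figma-Token": process.env.FIGMA_TOKEN ?? "" },
  });
  const json = await res.json();
  return json.comments.map((c: { message: string }) => c.message);
}

// Ask a model to bucket the comments; the categories here are illustrative.
async function categorize(comments: string[]): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [
      {
        role: "user",
        content:
          "Categorize each Figma comment as bug, copy, visual, or question, " +
          "then write a short structured summary:\n\n" + comments.join("\n"),
      },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

const summary = await categorize(await fetchComments("YOUR_FILE_KEY"));
console.log(summary); // in the article, this lands in a Google Doc instead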
Carly Ayres asks the Figma community to weigh in on Andrej Karpathy’s “vibe coding.” “Perhaps the question isn’t whether vibe coding will replace traditional development—it won’t—but rather how it might expand who can build software and how we build it.”
“There’s a lot of buzz about AI agents. Robots that do more with less supervision—what could go wrong? We asked our community how this might shake up how we think about UX.”
“WaveMaker AutoCode is an AI-powered WaveMaker plugin crafted for Figma, enabling product teams to jump from design to code in an instant. AutoCode produces a fully-functional app, with pixel-perfect translation of design elements, app navigations, and intended interactions.” (See the official press release for more details.)
Anima has been working on design-to-code tools since before the recent AI craze. A few months ago, they added support for shadcn/ui components, which I tried last week on my current project designed with this library.
Unlike v0, they parse the Figma file and get a lot of details right. I was impressed with how accurately it selected shadcn/ui components, even if layers weren’t explicitly named or instances were detached in the mockup. It becomes obvious that parsing a file is the right approach when different components look the same on the surface. For example, the trigger for opening a dropdown or date picker uses the same button, but they are different Figma components under the hood, and Anima chose their counterparts in code correctly.
Exporting custom colors and typography variables to a Tailwind config is also a nice touch. I ran into a few issues with excessive Tailwind styling and newer shadcn/ui components like the Sidebar not being recognized, but overall, this clearly feels like the right direction.
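For reference, that export amounts to extending the Tailwind theme with your Figma variables, roughly along these lines; the token names and values below are hypothetical, not Anima’s actual output:

```ts
// tailwind.config.ts — roughly what an export of Figma color and type
// variables can look like; these tokens are made up for illustration.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./src/**/*.{ts,tsx}"],
  theme: {
    extend: {
      colors: {
        // e.g. a Figma variable "brand/primary" mapped to a Tailwind token
        primary: "hsl(222 47% 31%)",
        muted: "hsl(210 40% 96%)",
      },
      fontFamily: {
        // e.g. a Figma text style "Heading" mapped to a font stack
        heading: ["Inter", "sans-serif"],
      },
    },
  },
};

export default config;
```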