Damien Correll, VP Design, Brand & Creative at Figma, gives a behind-the-scenes look at how the brand studio team uses the recent Gemini 2.5 Flash Image (Nano Banana) update to create realistic mocks.
Erik D. Kennedy attempts to answer two questions: Will AI take design jobs (and if so, which ones)? And in light of that, what should designers focus on? Love this advice: “I’d recommend steering your own designs away from the hallmarks of UI-by-AI: Inter, cards displayed in parallel, everything being 8px rounded, etc. The time to know your brand, know your audience, know the problem you’re solving, and lean way in starts now.”
The official Figma MCP server now supports Gemini CLI and OpenAI Codex, with Atlassian support coming soon.
Tommy Geoco shows his workflow for building Lorelight with Figma and Claude Code.
“Starting today, the Figma app in ChatGPT will be able to recommend and create AI-generated FigJam diagrams based on your conversation. Users can also upload files like photos, drawings, and PDFs to guide the output. That currently includes text-based flow charts, sequence diagrams, state diagrams, and Gantt charts, with more to come. […] To use the Figma app, simply mention it in your ChatGPT prompt, i.e., “Figma, make a diagram from this sketch.” ChatGPT can also suggest the Figma app when it’s relevant to the conversation.”
Luke Wroblewski observes how AI coding agents flipped the traditional software development process on its head. Design teams used to stay “ahead” of engineering, but now engineers move from concept to working code 10x faster.
“So scary time to be a designer? No. Awesome time to be a designer. Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.”
Dylan joins the TBPN show to chat about evaluating new AI models, the trajectory of Figma Make, and why human judgment and taste still matter even as AI accelerates execution. They also discuss leadership, his views on open-source models and emerging hardware, and MCPs.
“Rolling out to Enterprise plans over the next few weeks, Organization admins can now enable or disable AI features for individual workspaces. When toggled on, AI functionality will be available in all files within that workspace.”
Dylan Field shows a couple of projects he built in Figma Make with pre-release Sonnet 4.5. He notes that the new model is very good at planning and was able to precisely transform a Figma design into functional code with a single prompt.
Designer Advocate Brett McMillin is joined by members of the Figma AI and Anthropic teams to discuss the integration of Claude Sonnet 4.5 into Figma Make. This new model from Anthropic is praised for its significant improvements in design outputs, reasoning through updates, and overall performance.
Wes Bos from the Syntax podcast and Adam Wathan from Tailwind CSS dig into why every single website AI puts out is purple.
Another demo of using the new MCP server with Claude Code.
Watch Lee Robinson go from design to code with GPT-5-Codex and Agent.
Thomas Lowry, Director of Advocacy at Figma, shares three best practices for designers to give developers—and the AI agents they use—the context they need to go from design to production.
An official catalogue of agentic tools supporting context from the new Figma MCP server.
Kris Rasmussen, CTO of Figma: “Today we’re announcing updates to the Figma MCP server and Code Connect that make it possible to bring Figma design context anywhere you work—whether it’s in your IDE, your AI agent, or your prototypes. These updates make your design context—context about how your design system is structured, how your codebase is written, and how your team builds products—more portable and powerful, helping you move from idea to product with less friction.”
This update includes three major releases: remote access to the Figma MCP server from your IDE, an AI coding agent, or a browser-based model; the ability to bring the underlying code from a Figma Make file into your codebase using the Figma MCP server; and Code Connect’s new in-app mapping experience, which lets you browse components inside Figma, map them to the right code and file, and see which are linked or missing.
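For readers wondering what “remote access from your IDE” looks like in practice, most MCP clients (Cursor, Claude Code, and similar) are pointed at a remote server through a small JSON config. The sketch below is illustrative only: the file name, field names, and endpoint URL follow common MCP client conventions and are assumptions, not details from Figma’s announcement — check your client’s and Figma’s docs for the real values.

```json
{
  "mcpServers": {
    "figma": {
      "type": "http",
      "url": "https://mcp.figma.com/mcp"
    }
  }
}
```

Once a client loads a config like this, the Figma server’s tools (design context, Code Connect mappings, and so on) become available to the agent alongside its other tools — no local server process required.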
Cursor’s Lee Robinson recorded a six-part video series on AI foundations. It’s designed for beginners to learn concepts like tokens, context, and agents. The entire course is free and just 1 hour long.
Peter Yang interviews Meaghan Choi, a design lead for Claude Code, about her Figma-to-code workflow and the top 3 design use cases for Claude Code.
Figma announced early access to the limited alpha of editing designs using text prompts on the canvas. I’m stoked about this release as it will make experimenting, trying new ideas, and exploring alternatives so much faster.
“You can now attach up to 6 reference images when prompting AI to make or edit an image. This can help match brand style, add specific objects, or create similar images based on existing assets.”