“Code Connect UI lets you map design components in your Figma libraries to the corresponding code components in your repository. These mappings enhance the Figma MCP server by giving AI agents direct references to your code, enabling more accurate implementation guidance.”
A summary of everything Figma announced at Schema to help teams design for the AI era. Extended collections are a new way to manage multi-brand design systems: authors can release a simple white-labeled version of their design system that designers across the company can extend with their own themes, publish, and reuse. Slots let you add your own layers within instances and easily specify which instances a slot accepts, allowing for both increased usability and compliance with your design system. The Check designs linter matches raw values in your designs with their corresponding variables. Finally, the team completed a ground-up rewrite of the architecture, yielding massive performance gains.
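To make the linter idea concrete, here is a minimal sketch of what matching raw values to variables could look like. All names and data shapes here are illustrative assumptions, not Figma's actual API:

```typescript
// Hypothetical variable registry: variable name → raw value.
const variables: Record<string, string> = {
  "color/brand/primary": "#0d99ff",
  "spacing/md": "16",
};

// Invert the registry so a raw value can be looked up directly.
function buildIndex(vars: Record<string, string>): Map<string, string[]> {
  const index = new Map<string, string[]>();
  for (const [name, value] of Object.entries(vars)) {
    const list = index.get(value) ?? [];
    list.push(name);
    index.set(value, list);
  }
  return index;
}

// Given a hardcoded value found in a design, suggest matching variables.
function suggestVariable(raw: string, index: Map<string, string[]>): string[] {
  return index.get(raw) ?? [];
}

const index = buildIndex(variables);
console.log(suggestVariable("#0d99ff", index)); // → ["color/brand/primary"]
```

The useful part is the inverted index: a linter can scan every fill, stroke, and spacing value in a file and flag any raw value that has an exact variable match.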
In addition to new design features, Figma has been working hard to bring context from your codebase into your design system. With the new Code Connect UI, users can connect Figma directly to their GitHub repositories and use the new AI suggestions feature to quickly find the right code file to map to Figma components — no coding necessary. The MCP server is out of beta and generally available — now you can add guidelines for how AI models should adhere to your design system. Make kits let you generate React code components and CSS files for your styles and variables, then package those outputs for use in Figma Make. Additionally, Figma announced NPM package imports, native importing and exporting of variables, a simplified authoring experience for collections, and support for more variable modes.
Earlier this year, Grammarly acquired collaborative workspace Coda and email app Superhuman, with Coda's CEO taking the helm at Grammarly and the combined company later renaming itself Superhuman. Smith & Diction developed an interactive identity with motion design at its core, even as the brand architecture of the new company was evolving every day. The new icon system by Helena Zhang and custom pattern generators made in Figma Make show an incredible attention to detail from this team.
Dylan Field on the newest addition to Figma’s product line: “Figma has acquired Weavy, a platform that brings generative AI and professional editing tools into the open canvas. As Figma Weave, the company will help build out image, video, animation, motion design, and VFX media generation and editing capability on the Figma platform.”
In A Match Made in Heaven, Weavy’s early investor, Ben Blumenrose from Designer Fund, shared three key features of their product approach that make for a very powerful tool — being model agnostic, exposing process, and working as an aggregator.
Dave Martin, Security Engineer at Figma, shares his experience building Response Sampling, a system designed to detect potential sensitive data leaks in real time. “By providing ongoing visibility into the data leaving our services, Response Sampling gives our teams the opportunity to investigate and address issues quickly, reducing the risk of exposure and improving our confidence in how data is handled.”
Three new features that deepen customization and control in Figma Buzz: configurable marketing templates using component properties, video trimming directly in Buzz, and easy access to plugins that help with digital asset management, translation, animation, and more.
“With new multi-account support on mobile, quickly move between accounts without logging out or losing progress. Stay signed in, get notifications from all your accounts, access deep links with ease, and collaborate seamlessly on-the-go.”
Christine Vallaure walks readers through her Figma workflow — how she combines everything, thinks through a project, and turns all those features into a working and maintainable file.
I shared some of my thoughts on Liquid Glass in issue #229, so it was refreshing to see how Linear approached the new design language. Couldn’t agree with this more: “The one effect we chose not to reproduce was Liquid Glass’s refraction. Technically, it requires access to pixel-level data that isn’t available to third-party developers. Aesthetically, it also wasn’t the right choice because refraction can make dense professional interfaces harder to read. By relying on precise blurs, masking, and lighting, we maintained a sense of depth without losing clarity.”
Sara Clayton from Dropbox shares some of her recent lessons and observations about experimenting with AI: “the design-to-engineering bridge is improving but not yet seamless, weak systems and shortcuts will be surfaced rather than hidden, true progress depends on internal champions who can push the boundaries, and – above all – critical thinking remains the foundation of good design.”
“Now you can push your Make project directly to a new GitHub repository. Back up your code, track version history, and keep building in your preferred development tools. Push ongoing updates from Make to your GitHub repository whenever you make changes.”
Molly shares examples of when to reach for “inverse” color tokens and why to avoid just going with “white”.
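The core argument is that "inverse" is a semantic token that flips with the theme, while a hardcoded white does not. A minimal sketch, with token names and values invented for illustration:

```typescript
// Semantic color tokens per theme; all names and hex values are
// illustrative assumptions, not taken from the article.
type Tokens = Record<string, string>;

const light: Tokens = {
  "text-primary": "#1a1a1a",
  "text-inverse": "#ffffff",    // text placed on an inverse (dark) surface
  "surface-default": "#ffffff",
  "surface-inverse": "#1a1a1a", // e.g. a dark banner inside a light theme
};

const dark: Tokens = {
  "text-primary": "#f5f5f5",
  "text-inverse": "#1a1a1a",    // the inverse flips with the theme
  "surface-default": "#121212",
  "surface-inverse": "#f5f5f5",
};

// Resolve a semantic token for the active theme.
function resolve(theme: Tokens, token: string): string {
  const value = theme[token];
  if (value === undefined) throw new Error(`Unknown token: ${token}`);
  return value;
}

console.log(resolve(light, "text-inverse")); // "#ffffff"
console.log(resolve(dark, "text-inverse"));  // "#1a1a1a"
```

A component styled with a literal `#ffffff` would stay white in dark mode and lose contrast on a light inverse surface; `text-inverse` resolves correctly in both themes.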
Speaking of shadcn, Vercel launched a free course on the fundamentals of modern UI development with shadcn/ui. I’m happy to see a high-quality introductory resource for teams adopting this stack, as the mental shift from building with homegrown, intertwined components to a composable, reusable, and themeable library can be challenging.
A great resource for front-end engineers from Vercel, authored by shadcn and Hayden Bleasel: “Modern web applications are built on reusable UI components and how we design, build, and share them is important. This specification aims to establish a formal, open standard for building open-source UI components for the modern web.”
As new tools blur the lines between design and engineering, I strongly believe that any designer working on or contributing to a design system will benefit from understanding these concepts.
Erik D. Kennedy attempts to answer two questions: Will AI take design jobs? If so, which ones? And in light of that, what should designers focus on? Love this advice: “I’d recommend steering your own designs away from the hallmarks of UI-by-AI: Inter, cards displayed in parallel, everything being 8px rounded, etc. The time to know your brand, know your audience, know the problem you’re solving, and lean way in starts now.”
“Winners from our first global Make-a-thon offer insights on how to prototype smarter, structure products better, and push Figma Make further.”
“Starting today, the Figma app in ChatGPT will be able to recommend and create AI-generated FigJam diagrams based on your conversation. Users can also upload files like photos, drawings, and PDFs to guide the output. That currently includes text-based flow charts, sequence diagrams, state diagrams, and Gantt charts, with more to come. […] To use the Figma app, simply mention it in your ChatGPT prompt, i.e., “Figma, make a diagram from this sketch.” ChatGPT can also suggest the Figma app when it’s relevant to the conversation.”
Luke Wroblewski observes how AI coding agents flipped the traditional software development process on its head. Design teams used to stay “ahead” of engineering, but now engineers move from concept to working code 10x faster.
“So scary time to be a designer? No. Awesome time to be a designer. Instead of waiting for months, you can start playing with working features and ideas within hours. This allows everyone, whether designer or engineer, an opportunity to learn what works and what doesn’t. At its core rapid iteration improves software and the build, use/test, learn, repeat loop just flipped, it didn’t go away.”
Raluca Budiu from Nielsen Norman Group with a sobering critique of Apple’s new visual language: “iOS 26 brings Liquid Glass controls laid over noisy backgrounds, jittery animated buttons, shrunken and crowded tab bars, collapsing navigation, and ubiquitous search bars. On top of that, it breaks long‑established iOS conventions, getting closer to Android design. Overall, Apple is prioritizing spectacle over usability.”
On transparency: “One of the oldest findings in usability is that anything placed on top of something else becomes harder to see. Yet here we are, in 2025, with Apple proudly obscuring text, icons, and controls by making them transparent and placing them on top of busy backgrounds.”
On animations: “Our eyes are finely tuned to detect motion, which is why animated buttons grab attention instantly. But delight turns into distraction on the tenth, twentieth, or hundredth time. […] It’s like the interface is shouting “look at me” when it should quietly step aside and let the real star — the content — take the spotlight. […] Motion for motion’s sake is not usability. It’s distraction with a side of nausea.”
I was looking forward to this update since WWDC, but it left me increasingly annoyed and disappointed. From hidden actions in Safari to blurred content with jittery transitions in Mail, everyday experiences require more attention and extra steps on my part without giving anything in return. Liquid Glass feels like an ultimate departure from Steve Jobs’ “design is not just what it looks like and feels like, design is how it works” motto.
“Rolling out to Enterprise plans over the next few weeks, Organization admins can now enable or disable AI features for individual workspaces. When toggled on, AI functionality will be available in all files within that workspace.”