Luis on what the “shadcn-ification” debate is actually about — not the visual uniformity, but the organizational misread: “The mistake isn’t in the ingredient. It’s in thinking that having access to good ingredients is the same as knowing how to cook.”
Stakeholders are concluding that the design system infrastructure is done because a great foundation exists. The teams that have spent years practicing design systems understand exactly why that conclusion is dangerous.
TK Kong shares a detailed guide to his workflow with Claude Code and Paper, the design tool built on native HTML/CSS rather than a WebGL canvas. The loop (the agent writes HTML into Paper frames, the designer edits on the canvas, the agent implements the code) is similar to the Figma MCP workflow covered above, but it also works with existing designs.
The Paper Snapshot Chrome plugin — which copies live web UIs directly into Paper as editable layers — is exactly what I wanted while wondering why Figma won’t make “a universal ‘Send to Figma’ browser extension.”
Jakub Krehel’s collection of small interface details that compound into a significantly better experience: text-wrap balance, concentric border radius, contextual icon animations, tabular numbers, interruptible animations, optical vs. geometric alignment, and shadows instead of borders. Each one has a live interactive demo. The kind of small details that separate a polished interface from one that just functions.
These details also exist as an installable skill for Claude Code, Codex, and Cursor. Once installed, your AI coding agent automatically applies these principles when building UI.
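Several of the details in Jakub’s list are one or two lines of CSS. A minimal sketch of three of them — balanced headline wrapping, tabular numbers, and concentric border radius — with hypothetical selectors and values of my own choosing, not from his demos:

```css
/* Balance ragged line breaks in short runs of heading text */
h1, h2 {
  text-wrap: balance;
}

/* Tabular figures keep digits the same width, so counters
   and timers don't jitter as values change */
.stat {
  font-variant-numeric: tabular-nums;
}

/* Concentric radius: the outer corner radius should equal
   the inner radius plus the padding between the two edges */
.card {
  --inner-radius: 8px;
  --pad: 12px;
  padding: var(--pad);
  border-radius: calc(var(--inner-radius) + var(--pad));
}
.card > img {
  border-radius: var(--inner-radius);
}
```

The concentric-radius rule is the easy one to get wrong by eye: reusing the same radius on both the card and its image makes the corners look pinched, because the gap between the curves isn’t constant.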
Yann-Edern Gillet, design engineer at Linear, revisits his “Rosetta Stone” metaphor for design engineering translation through the lens of AI. The central argument: when translation is cheap and instantaneous, the bottleneck shifts from execution to meaning — and the new craft is preserving intent while everything accelerates.
An upcoming interactive exhibit restoring the design moments that shaped software history, from Xerox Alto’s Smalltalk-76 interface to the original “slide to unlock” on iPhone. Each restoration requires reverse-engineering details, motion, and behavior from low-quality assets.
Patrick Morgan makes a clean distinction that vibe-coding discourse keeps blurring: prototype code is for exploration, production code is for endurance. He is building a protected prototyping environment using Claude Code, a place where his team can move fast and then deliberately port the right assets across the boundary into production.
There is a clear parallel with how the design team at Notion works. In the recent episode of How I AI, Brian Lovin showed their collaborative “prototype playground,” where the entire team can create, share, and iterate on functional prototypes.
That also reminded me of how my team worked a decade ago, back when front-end development was a tad simpler. We had a separate “mockups” directory inside the Rails monorepo, where designers prepared static HTML mockups with production-ready CSS and JS. By the time designs were handed off to engineers in a feature branch, all polish and design details were already baked in. The design team must be fairly technical, but there is no going back to handing off Figma files after working this way.
The new Workflow Lab format, showing an end-to-end process, is a smart way to frame the new AI image tools in context. The three new tools (erase object, isolate object, expand image) are genuinely useful for anyone who’s had to leave Figma to do basic cleanup in Photoshop, and Vectorize finally removes a step that’s been a quiet annoyance for years.
A first look at Config 2026 speakers — AI artist Holly Herndon, creator of the world’s first 3D-printed fashion collection Danit Peleg, designer and author Vicki Tan, designer and founder Matthew Ström-Aw, and mathematician and educator Grant Sanderson from 3Blue1Brown. Config returns to San Francisco on June 23–25. Virtual registration is free.
Figma explains how its new MCP server lets Codex generate Figma Design files from live code and, in the other direction, use Figma frames as structured context for agentic code generation.
In the previous issue, I wrote about the interior and UI of Ferrari Luce. Raja Vijayaraman was inspired by a temperature knob and rebuilt it for a touch screen. Really nicely done.
Andrew Hogan: “In the AI era, companies need designers more than ever. In fact, our latest study suggests that AI is actually driving renewed momentum in design hiring. We unpack why that is, what hiring managers are prioritizing, and which skills designers need to get ahead.”
To follow up, watch Understanding Today’s Design Job Market, where Andrew speaks with Daniel Wert, CEO of Wert & Co., about this study and what it reveals about the current moment in the market. Together, they unpack the rebound in design hiring, the surge in demand for senior ICs, and the state of junior hiring. Daniel shares what he’s seeing firsthand from running design leadership searches across industries — and where companies may be thinking too short-term.
From design system documentation and PRDs to user research and feedback, Make can now pull in context from across your product ecosystem. Figma added new featured connectors for Amplitude, Box, Dovetail, Granola, Marvin, and zeroheight. You can also connect Make to any remote MCP server by setting up a custom connector.
Once you’ve installed and authorized a Make connector, just hit @ in your Make file and start typing the connector name to pull external context directly into your prototype.
Joey Banks: “…trying Figma Console MCP has completely opened my eyes into what I can offload. Not because it replaces the enjoyable work that I was doing before, but because it handled some of the longer, more repetitive tasks so quickly, and actually so well. Creating 200+ variables took seconds, and mapping them to color swatch instances so the team could preview values was way easier than I expected.”
Alex Barashkov is disappointed by this release, and I have to agree with some of his points. I spend more time in Cursor than in Figma lately, and returning to a workspace without AI agents is always hard. In the most recent and relevant example, after importing a few screens from code into Figma, I had to manually replace fonts (no “Selection fonts” for bulk edits, so I first had to test a few plugins) and colors (a bit easier, but still cumbersome), and then abstract repetitive elements into components. The whole time, I kept asking myself why I have to waste time on this when bots can do it in minutes.
“Bringing Claude Code workflows directly into Figma lets developers, designers, and even hobbyists capture a real, functioning UI from a browser — in production, staging, or localhost — and convert it into editable frames on the Figma canvas. Code is powerful for converging — running a build, clicking a path, and arriving at one state at a time. The canvas is powerful for diverging — laying out the full experience, seeing the branches, and shaping direction collectively. Going from code to canvas helps teams move fluidly, so work can narrow when it needs to and open up when it’s time to collaborate.”
My guess is it’s based on the html.to.design technology that Figma acquired last year, which is a huge time saver and an essential part of my toolkit. I haven’t tested Claude Code to Figma yet, but the results in the demos look very similar to what I usually get from the plugin, which makes me wonder why they limited it to Claude Code instead of shipping something like a universal “Send to Figma” browser extension.