“Claude Design can read a design system carefully when the prompt is about the system. When the prompt is about a composition that uses the system, it stops respecting the components and just generates lookalikes.” TJ Pitre spent a few hours testing Claude Design against two real design systems and concludes that the tool references your system but doesn’t consume it. Claude would happily generate raw HTML tags with inline style props instead of importing components from your library.
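To make that failure mode concrete, here is a minimal sketch of the difference (the package name @acme/design-system, its Button component, and the variant prop are hypothetical, not from Pitre’s tests): a lookalike rebuilds the control from raw markup and inline styles, while output that actually consumes the system imports the real component.

```tsx
// Hypothetical sketch: "@acme/design-system" and its Button are illustrative names,
// not the systems TJ Pitre tested.
import React from "react";
import { Button } from "@acme/design-system";

// A "lookalike": raw HTML with inline style props. It can render pixel-close to the
// design system, but it bypasses the library, so tokens, variants, and accessibility
// behavior silently drift from the real component.
export function SubmitLookalike() {
  return (
    <button style={{ background: "#4f46e5", color: "#fff", borderRadius: 8, padding: "8px 16px" }}>
      Submit
    </button>
  );
}

// Consuming the system: the generated surface imports the actual component and
// inherits its tokens, states, and future updates.
export function SubmitFromSystem() {
  return <Button variant="primary">Submit</Button>;
}
```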
“The pitch for Claude Design’s workflow is roughly: I have a design system, I want to generate new product surfaces from it, and I want AI to do most of the lift. That workflow exists today. You can pair Figma with an MCP server like our Figma Console MCP, or with Figma’s own MCP server, or with Code Connect, and then point an AI app generator at it. Lovable, v0, Replit, Figma Make, Claude Code working inside your repo. Your Figma file stays the canonical source. Your codebase stays the production surface. AI does the generation in between. That flow is more linear, more honest about where source of truth lives, and it produces output that actually uses your component library, because the AI is operating inside the repo where the components live.”
Google is moving DESIGN.md from a Stitch-specific feature to an open-source format specification: a plain Markdown file that encodes your design rules, colors, typography, and component preferences, and that AI agents can parse and validate against. If the format gains traction across tools, it could become the missing link between design systems and AI generation pipelines.
A useful companion to Google’s announcement above. Meng To shares 15 takeaways from actually using the format: when to Remix vs. Iterate, how to treat DESIGN.md as “reusable project memory,” and why curation is part of the design process. The most actionable takeaway: “Start with DESIGN.md, generate the first design, remix and expand it, create section variations, move into a builder, then assemble the full site.” Don’t miss his video tutorial on turning a DESIGN.md into landing pages, mobile screens, and motion design.
“A state-of-the-art image model that can take on complex visual tasks and produce precise, immediately usable visuals, with sharper editing, richer layouts, and thinking-level intelligence.” OpenAI’s second-generation image model promises a step change in instruction following, precise object placement, dense text rendering, and cross-aspect-ratio generation.
“Designers have always (and will always) answer the question ‘What’s worth making?’ ” Joel Lewenstein, Head of Design at Anthropic, argues that as the cost of software drops, the most important decisions shift from “can we make this?” to “should we?” Design, in his framing, is what narrows the possibility space fast enough to keep up with the speed of delivery. He describes Claude Design as a tool for getting ideas “good enough to move discussion forward,” cutting idea-to-internal-feedback time from days to hours.
Karri Saarinen, CEO of Linear, writes one of the more grounded takes on AI’s current state. Linear’s cloud agent now fixes more than 1,000 issues per month, but Karri is clear that hard problems remain hard and design tools are still challenging to use. On having a design tool operate directly on the production codebase: “A lot of the design work I do is not production design. I am not trying to implement the final version or test every edge case. Most design work is about making decisions, understanding the problem, and finding the fit. That process generates many variations and messy ideas.”
The expertise paradox section is the most useful: “AI often feels most impressive in domains where you know the least.” Expertise makes AI harder to use but also more valuable, because experts know how to steer, constrain, and evaluate the output.
Kris Puckett, Design Manager at Stripe, spent months building Epilogue, a real iOS app with 14,000 lines of Swift, entirely through conversation with Claude. This essay is a specific and honest account of what the designer-building-with-AI experience actually looks like: what broke, what he learned about asking precise questions, and what “vague frustration keeps you stuck, specific confusion gets you answers” means in practice. “I realized the bottleneck was never coding ability. It was articulation. The ability to describe what I wanted clearly enough that something else could build it.”
Figma opens the canvas to agents. The use_figma MCP tool lets Claude Code and Codex generate and modify designs grounded in your actual design system. The key distinction from earlier code-to-design experiments: agents work with what your team has already built, making design system quality a direct input to AI output quality.
Mike Davidson runs the largest design team at Microsoft AI, and shares tips on making it through 2026. The assembly layer of design is being absorbed — button states, data processing, detailed specs. What remains, and what companies are hiring for, is orchestration: running AI and human teams toward a shipping goal. Specific advice on portfolio, job search strategy, and skills worth building.
Worth noting Mike’s scepticism about the data in the above report: “Perhaps the data reflects reality, or perhaps design jobs aren’t accurately tracked by this company, but either way, this is not what I or a lot of my colleagues at other companies are seeing. If anything, most cross-functional teams are more underwater on design than on other functions.”
Luis on what the “shadcn-ification” debate is actually about — not the visual uniformity, but the organizational misread: “The mistake isn’t in the ingredient. It’s in thinking that having access to good ingredients is the same as knowing how to cook.”
Stakeholders are concluding that the design system infrastructure is done because a great foundation exists. The teams that have spent years practicing design systems understand exactly why that conclusion is dangerous.
TK Kong shares a detailed guide to his workflow with Claude Code and Paper, the design tool built on native HTML/CSS rather than a WebGL canvas. The loop (the agent writes HTML into Paper frames, the designer edits on the canvas, the agent implements the code) is similar to the Figma MCP workflow covered above, but it also supports working with existing designs.
The Paper Snapshot Chrome plugin — which copies live web UIs directly into Paper as editable layers — is exactly what I wanted when I wondered why Figma wouldn’t build “a universal ‘Send to Figma’ browser extension.”
Jakub Krehel’s collection of small interface details that compound into a significantly better experience: text-wrap balance, concentric border radius, contextual icon animations, tabular numbers, interruptible animations, optical vs. geometric alignment, and shadows instead of borders. Each one has a live interactive demo. The kind of details that separate a polished interface from one that merely functions.
These details also exist as an installable skill for Claude Code, Codex, and Cursor. Once installed, your AI coding agent automatically applies these principles when building UI.
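Two of those details translate directly into code. A minimal sketch of my own, not taken from Krehel’s demos or the skill: concentric corners keep the inner radius at the outer radius minus the padding, and tabular figures stop live-updating numbers from shifting the layout.

```tsx
// Illustrative sketch of two of the listed details; the component and values are
// hypothetical, not from the article or the installed skill.
import React from "react";

const OUTER_RADIUS = 16; // px
const PADDING = 8; // px

export function StatCard({ label, value }: { label: string; value: number }) {
  return (
    <div style={{ borderRadius: OUTER_RADIUS, padding: PADDING, background: "#f4f4f5" }}>
      <div
        style={{
          // Concentric border radius: inner radius = outer radius minus padding,
          // so the nested corner follows the same curve as its container.
          borderRadius: OUTER_RADIUS - PADDING,
          background: "#fff",
          padding: 12,
        }}
      >
        <span>{label}</span>{" "}
        {/* Tabular numbers: every digit takes the same width, so live-updating
            values (timers, counters, prices) don't make the layout jitter. */}
        <span style={{ fontVariantNumeric: "tabular-nums" }}>{value.toLocaleString()}</span>
      </div>
    </div>
  );
}
```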