A similar concept, but inspired by the Lighthouse web dev tool. This plugin ensures your designs are “polished and stakeholder-ready every time, saving you from potential revisions and boosting your design confidence”. 3rd Product of the Day at Product Hunt on December 26th, 2023.
“From lifelike portraits to objects and animals, craft any image you can imagine in just 5 seconds.” Won 2nd Product of the Day at Product Hunt on December 14th, 2023.
An interesting new plugin — get AI design feedback on either the UX or UI of your mockups. I tried it on an unfinished design, and while some points were somewhat irrelevant, others were spot on. It’s free to play with and worth giving a shot.
A large profile of FigJam AI in Fast Company. “Figma’s AI ambitions are clear. Singer noted that ‘we really do think of AI as playing a central role across the entirety of the platform.’ In theory, that could mean not just better meetings, but more capable coworkers. ‘In a collaborative environment where you’re working with many people on a project,’ Singer says, ‘AI really up-levels everyone.’”
Fascinating side-by-side comparison of Midjourney 5.2 and the newly released version 6. The generative art from Midjourney has always felt more realistic and interesting compared to other services, and now the gap appears even larger.
Midjourney v6 is finally here!!!! 🔥
— Nick St. Pierre (@nickfloats) December 21, 2023
Here are some side-by-sides, --v 5.2 versus --v 6, as well as some new highly detailed prompts and camera angle tests.
These are all unaltered and unedited, straight out of Midjourney.
v6 is a HUGE leap forward
Prompts & examples 👇 pic.twitter.com/uqo6RSqh7y
I wonder what that button on Jordan’s micro keyboard does?
i've never been faster at figma
— jordan singer (@jsngr) December 19, 2023
what does ✨ do? pic.twitter.com/X5gaeAWUsC
Teddy Ni, a co-founder of the Y Combinator-backed Magic Patterns, shares an experiment: “We taught an AI some Figma designs of a documentation site. It can now generate similar sites using my own custom components on ANY topic I want, ALL within Figma. I can also click on any frame and request updates 🤯”. (This reminds me of the feature in Visual Copilot that I wrote about in #136.)
I think I need to go lie down…
— Teddy Ni (@Teddarific) November 27, 2023
We taught an AI some Figma designs of a documentation site. It can now generate similar sites using my own custom components on ANY topic I want, ALL within Figma
I can also click on any frame and request updates 🤯 pic.twitter.com/QcWBXiXxYN
“Discover how you can import all of your existing Jamboard files to fully editable FigJams! From sticky notes to sketches, watch as we demonstrate how to seamlessly transition your Jamboard content into FigJam, where you can harness the full range of FigJam’s interactive capabilities.”
OMG! The Figma window is streamed to GPT‑4 Vision, which then provides feedback on the fly, narrated in the voice of Steve Jobs. Looking for a way to make this a part of our design crits.
Steve Jobs is now critiquing my designs directly in Figma!
— Pietro Schirano (@skirano) November 16, 2023
I've just made one of my biggest dreams come true, thanks to GPT-4 Vision + @elevenlabsio. ✨
My Figma window is streamed to GPT, which then provides feedback on the fly.
Like on these new design for @everartai pic.twitter.com/BPX81MmhxH
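The core loop is simple to sketch: grab a screenshot of the Figma window, send it to a vision model with a critique prompt, and read the reply aloud (the demo pipes it through ElevenLabs for the voice). A minimal, hypothetical version of the request-building step — the model name, prompt, and function are my assumptions, not from the demo:

```python
import base64

# Hypothetical model name and critique prompt; the tweet only names "GPT-4 Vision".
MODEL = "gpt-4-vision-preview"
CRITIQUE_PROMPT = (
    "You are a blunt, visionary product design critic. "
    "Give concise, actionable feedback on this UI screenshot."
)

def build_critique_request(screenshot_png: bytes) -> dict:
    """Build a Chat Completions-style payload that sends a screenshot for critique."""
    image_b64 = base64.b64encode(screenshot_png).decode("ascii")
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": CRITIQUE_PROMPT},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Critique this design."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                    },
                ],
            },
        ],
        "max_tokens": 300,
    }
```

A running version would capture the window on a timer, POST this payload to the chat completions endpoint, and feed the text reply to a text-to-speech voice.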
Jordan Singer came up with another wild AI experiment — sketch or design anything in Figma or FigJam and turn it into a functional prototype with one click of a button. This demo is a great example — it built all the functionality correctly based on just a few text labels!
sketch or design anything and turn it into a functional prototype with the ✨ Build it button inside of @figma and FigJam pic.twitter.com/XFQZjZN1oU
— jordan singer (@jsngr) November 17, 2023
The AI is now integrated into FigJam to “help you instantly visualize ideas and plans, suggest best practices, and, of course, automate tedious tasks, so you can focus on the bigger picture.” What started as an experimental widget Jambot is now a first-class part of the product using GPT‑4. I used it this week to create a structure for a presentation which was a useful 0 to 0.1 progression. You can give it a try at the playground or check out how Zander Whitehurst uses it to create crazy flow charts.
I shared the teaser in issue #132, but last week Builder.io introduced Visual Copilot, “a completely reimagined version of the Builder Figma-to-code plugin that will save developers 50–80% of the time they spend turning Figma designs into clean code.” The major difference between Visual Copilot and previous design-to-code tools is a specialized AI model that was trained to solve this problem. The features include one-click conversion, automatic responsiveness, extensive framework and library support, customizable code structures, and easy integration with the existing codebase.
One of the most exciting parts of this announcement is still in private beta and targeted at teams with well-maintained design systems. This feature in Visual Copilot uses AI to map reusable components in your Figma file to those in your code repository and generates code using your existing components when you have them. This could be genuinely useful to get the first rough version ready in no time.
I’m still waiting for access to GPT‑4 Vision, but examples like this make me so excited about the possible use cases! Imagine using an LLM for heuristic evaluation or for pairing on design and sketching sessions.
Omg I'm blown away! 🤯
— Ammaar Reshi (@ammaar) October 4, 2023
GPT-4V is an incredible product design partner! I gave it a mockup of my site & asked for feedback.
It was able to suggest tweaks to type, layout, content, and more.
What an awesome way to pair on solo projects together or if you're learning the craft! pic.twitter.com/EujmjwG7nA
Alright, so in the last issue, I wrote that “plugins for code generation in Dev Mode using GPT‑4 might provide an even better result.” It didn’t take long for a better example to appear! Ben shows a new feature in Sidekick AI (using GPT‑4) for dropping a link to a frame in a Figma file to improve the code generation and even fix visual bugs. This looks freaking amazing.
implementing production ready ui with figma + gpt-4 vision 🤯 pic.twitter.com/650dAXMSFm
— ben (@benhylak) October 4, 2023
You can now opt out of beta AI features, so your data will not be sent to Figma’s third-party AI vendor. “Figma’s agreement with OpenAI provides that data is not to be used for model training. Data inputted into AI features is sent to OpenAI for processing and generating AI output. Data is temporarily retained in OpenAI’s environment to provide the services, however it is not used for model training.”
This result is based on an exported PNG, so plugins for code generation in Dev Mode using GPT‑4 might provide an even better result.
ChatGPT Vision can take in screenshots from Figma and generate code.
— Mckay Wrigley (@mckaywrigley) September 29, 2023
Building with AI is getting wild. pic.twitter.com/D8yeJW1kGR
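One unglamorous step in any “screenshot in, code out” pipeline: the model usually wraps its answer in a markdown fence, which has to be stripped before the code is usable. A small helper for that (a hypothetical utility of mine, not from the demo):

```python
import re

def extract_code(reply: str) -> str:
    """Pull the first fenced code block out of a model reply.

    Falls back to returning the whole reply if no fence is found.
    """
    match = re.search(r"```[a-zA-Z]*\n(.*?)```", reply, re.DOTALL)
    return match.group(1).strip() if match else reply.strip()
```

For example, a reply like "Here you go:\n\`\`\`html\n&lt;div&gt;...&lt;/div&gt;\n\`\`\`" comes back as just the HTML, ready to drop into a file.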
Dylan Field, founder and CEO of Figma, looks at the relationship between designers, developers, and AI, in conversation with a16z’s David George. In the process, he also demoes Jambot, their new AI widget for FigJam. Love this quote from Dylan: “It [AI] will lower the floor for who’s able to participate in the design process, but also raise the ceiling of what you can actually do.”
Miggi creates a “figception” by using Jambot to come up with ideas for making Figma content. Quite amazing to see how it can be used as a tool for thought (or even programming, FWIW).
Jambot is a free widget from Figma to interact with ChatGPT right in FigJam. Use it to create visual mindmaps, take a multi-threaded approach to brainstorming, or generate ideas with teammates and ChatGPT on the same canvas. Quite amazing that it was born during last month’s Maker Week and is already live!
Amber Bravo sat down with Jambot engineers and a designer to learn what inspired them to make the widget, and why they’re so excited to see ChatGPT go multiplayer. As a power user of Logseq, I loved this bit from Daniel Mejia on where the inspiration came from: “I’ve been a heavy user of these tools called Networked Thought — especially Roam Research and Logseq — which basically allow you to create pages that link between each other, so you can connect, organize, and trace ideas. More recently, I also found this tool called Albus, which adds a visual feel to interacting with AI, and so I thought there should be a way to connect these concepts to create a potentially useful alternative to ChatGPT.”