Logan Kilpatrick, who leads AI Studio and Gemini API at Google DeepMind, revealed something wild when we interviewed him during a recent episode of The Neuron podcast, which I co-host with Corey Noles.
Kilpatrick says you can take screenshots of websites, drop them into AI Studio, type “clone this,” and get a working version in seconds. In fact, that’s how Google’s AI team prototypes its own products. He literally takes a screenshot of AI Studio, spins up a clone, then iterates on it.
This is something you can do, too, right now! That’s because AI Studio is completely free and already available for you to experiment with.
Actionable gold from this episode about secret AI tools
In this Neuron podcast episode, Kilpatrick pulled back the curtain on Google’s entire AI ecosystem and showed us exactly how to use it.
Getting started with AI Studio
(3:31) How AI Studio has evolved: It started as a playground tool, but… “we’ve moved on beyond that… we released this build tab where you can go in and actually have the model build you an entire app.”
(4:42) His personal AI Studio workflow: “Prototyping of AI experiences is how I spend most of my time. It’s like, ah, wouldn’t it be cool if you could do this thing with AI in a product?…I can just put in a really simple prompt and it just like is connected to my Gemini API keys and it’s connected to Gemini and all of our other models by default out of the box…It’s just like baked in.”
(46:03) His go-to first project: “Clone your favorite website. I take a screenshot of AI Studio, put it back in AI Studio, and I say clone this.” (See the API sketch after this list for what that looks like in code.)
(16:59) AI Studio’s sweet spot for your workflow: “We’re not trying to solve every problem… with all the vibe coding stuff we’re doing and with build mode, we’ll get you a working prototype and then get you out into a full-fledged professional developer product.”
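If you want to try Logan’s screenshot-to-clone trick outside of AI Studio’s build tab, here’s a minimal sketch of the same pattern against the Gemini API using the google-genai Python SDK. Treat the model name and file names as placeholders we chose for illustration, and grab an API key from AI Studio first.

```python
# Minimal sketch: send a screenshot plus a "clone this" prompt to the Gemini API.
# Assumptions: google-genai SDK installed (pip install google-genai), an API key
# from AI Studio, a local screenshot.png, and a placeholder model name.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # key from AI Studio

with open("screenshot.png", "rb") as f:  # the UI you want cloned
    screenshot = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder; use whichever Gemini model is current
    contents=[
        types.Part.from_bytes(data=screenshot, mime_type="image/png"),
        "Clone this UI as a single self-contained HTML/CSS/JS page.",
    ],
)

print(response.text)  # paste the returned HTML into a file and open it in a browser
```

In AI Studio’s build tab the same loop happens without any code: paste the screenshot, type “clone this,” and iterate.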
Real-world applications of AI Studio you can try today
(21:43) Text-to-speech applications: “There’s been a huge amount of demand… for very native, human-sounding text-to-speech video generation.”
(22:18) Make your own AI podcast in your voice: “You could make your own AI podcast in your voice… there’s actually not a big gap between being able to do that and what NotebookLM is doing.”
(26:43) AI tutor for learning: “Having an AI tutor that could help me… would have been so valuable and not spend my time getting yelled at on Stack Overflow.”
(30:50) A surprising lawyer adoption stat: “99% of lawyers had tried Gemini, Claude, or ChatGPT.” Why Gemini? “…they just want long context to bring more documents.”
(36:04) Build a food-tracking app that actually works: “Snap a picture and have the model deeply understand… using bounding boxes and size proportional understanding.” (A sketch of this follows the list.)
(42:04) Medical applications with MedGemma: “I talk to customers all the time who are blown away with MedGemma… one of the highest traction query use cases in ChatGPT was this medical use case.”
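To make that food-tracking idea concrete, here’s a hedged sketch of the bounding-box approach Logan describes, again via the google-genai Python SDK. The JSON field names, the portion-estimation prompt, and the model name are our own assumptions rather than a fixed Gemini schema; the prompt asks for box coordinates normalized to a 0–1000 scale, which matches Gemini’s documented convention.

```python
# Hedged sketch: ask Gemini to detect food items with bounding boxes and rough
# portion estimates, then parse the JSON. Field names and model are placeholders.
import json

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("meal.jpg", "rb") as f:  # photo of the plate
    photo = f.read()

prompt = (
    "Detect every food item in this photo. Return a JSON list where each entry "
    'has "label", "estimated_grams", and "box_2d" as [ymin, xmin, ymax, xmax] '
    "normalized to a 0-1000 scale."
)

response = client.models.generate_content(
    model="gemini-2.5-flash",  # placeholder; use whichever Gemini model is current
    contents=[types.Part.from_bytes(data=photo, mime_type="image/jpeg"), prompt],
    config=types.GenerateContentConfig(response_mime_type="application/json"),
)

for item in json.loads(response.text):
    print(item["label"], item["estimated_grams"], item["box_2d"])
```

From there, mapping the labels onto a nutrition database for calorie counts is ordinary application code.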
AI Studio and Gemini’s most underrated features
(47:04) Deep Research: “The model will go visit between tens and hundreds or even thousands of different web pages… it’s a mind-blowing experience.” (P.S.: In case you forgot, Google invented Deep Research.)
(20:54) The Live API: “All the context you need to solve the problem is visible on my screen… the AI system doesn’t need access to other tools.” (It looks like OpenAI is competing on this vector now too.)
Developer and technical insights
(5:45) Google Cloud integration: “AI Studio is built on all the Google Cloud infrastructure… deploy uses Cloud Run behind the scenes.”
(17:31) Large codebase challenge: “There’s lots of context buried in different places… it’s really difficult to orchestrate this together.”
(18:08) On context engineering: “Not based on the user prompt… it’s trying to do optimization on getting the relevant data to actually help answer.” (A toy illustration of the idea follows this list.)
(39:28) Firebase Studio as another solution: “Firebase Studio… you get the batteries included with compute and storage… should be able to get it to do image understanding with Gemini through Firebase, get a database, all that stuff.”
(40:20) Google’s open-source strategy and why Gemma (their small open models) exists: “It’s us meeting developers and customers where they are.”
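To picture what “getting the relevant data” means in practice, here’s a toy illustration of context engineering over a codebase: score files for relevance to the question, then pack only the winners into the prompt. This is a simplified stand-in of our own (keyword overlap instead of embeddings or call graphs), not how Gemini or AI Studio does it internally.

```python
# Toy context-engineering sketch: pick the files most relevant to a question
# before building the prompt, instead of dumping the whole repo at the model.
# Real systems use embeddings, AST/call-graph analysis, etc.; keyword overlap
# keeps this sketch tiny.
from pathlib import Path


def relevant_files(question: str, repo: Path, top_k: int = 3) -> list[Path]:
    terms = set(question.lower().split())

    def score(path: Path) -> int:
        text = path.read_text(errors="ignore").lower()
        return sum(text.count(term) for term in terms)

    candidates = list(repo.rglob("*.py"))
    return sorted(candidates, key=score, reverse=True)[:top_k]


def build_prompt(question: str, repo: Path) -> str:
    snippets = [
        f"# File: {path}\n{path.read_text(errors='ignore')}"
        for path in relevant_files(question, repo)
    ]
    return "\n\n".join(snippets) + f"\n\nQuestion: {question}"


# Example usage (placeholder repo path):
# print(build_prompt("where do we validate API keys?", Path("my_project")))
```

The assembled string is what actually reaches the model’s context window, which is the “optimization” Logan is describing.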
Where all this goes from here
(8:26) On Genie 3 and world models: “Fully functional world models that have temporal consistency where you can make edits to the world… it’s not like there’s a file somewhere being referenced… it is actually on the fly generating the actual ecosystem.”
(13:03) Logan’s AGI reality check: “The models can’t follow basic instructions of chess… they want to make all these illegal moves…I could probably teach a 12-year-old how to play chess in 30 minutes and they’d be able to pick it up.”
(43:35) What’s Logan most bullish about with AI? Vibe coding: “something that was reserved for a small group of people is now able to be done by a much larger group…The amount of software in the world is going to be a million x what it is right now in 10 years.”
Spiciest take
(23:07) “The thing people were most compelled by [in NotebookLM] was the humanness of the scripts… There’s no secret sauce… It’s literally prompting.”
The AI podcast phenomenon that broke the internet? Just good prompt engineering. Which means you could build the next viral AI tool with what you learn in this episode.
The most exciting moment for us?
When Logan admitted he’s using a product he knows will be obsolete soon because the future version is so much better. We can’t wait to try out the new version of AI Studio when it’s ready.
Stay curious!
Editor’s note: This content originally ran in our sister publication, The Neuron. To read more from The Neuron, sign up for its newsletter here.