Google AI Studio: Build and Test Gemini AI Models
Most developers spend hours setting up API clients, managing credentials, and writing integration code before testing their first model interaction. Google AI Studio skips this setup by providing a browser workspace where anyone can prototype with Gemini models, test prompts, and build applications through text descriptions.
This tutorial shows how to use Google AI Studio’s main features: testing prompts through different interfaces, building web applications from natural language descriptions, adjusting model parameters, and deploying to Google Cloud Run.
What is Google AI Studio?
Google AI Studio is a free, browser-based platform for testing, building, and deploying applications with Gemini models.
The platform has two main modes:
- The first lets developers test prompts and experiment with model responses through different interfaces.
- The second generates complete web applications from text descriptions: developers describe what they want to build, and the platform writes the code.
Access requires only a Google account. The free tier includes 5 to 15 requests per minute, depending on which model is selected, with quotas that reset daily. No credit card needed.
Google launched Build mode in October 2024, adding the ability to create applications through conversation. The platform also added Annotation Mode, which lets developers point at parts of the interface and describe changes instead of editing code directly.
These capabilities come together through specific features that handle different development tasks.
Key features of Google AI Studio
Google AI Studio consolidates everything into two main areas: the Playground for testing and experimenting, and Build mode for creating actual applications.
Unified Playground
The Playground works as a single chat interface where all model testing happens. No more picking between different prompt types or switching windows. Just open it and start chatting.
Tabs across the top switch between capabilities: Gemini for text, Images for visuals, Video for clips, and Audio for voice. Click any tab to change what the model generates. System instructions in the right panel define behavior once and apply to every message in the conversation.
The interface remembers conversation history automatically. Ask a follow-up question, and the model references what was said earlier without needing to repeat context.
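Under the hood, that memory is just the accumulated message list: every request carries the full role-tagged history, so the model can reference earlier turns. The sketch below shows what that history looks like in the public `generateContent` REST format (the example text is illustrative):

```python
import json

# Each turn is a role-tagged entry ("user" or "model"); the API sees the
# whole list on every request, which is how follow-ups keep their context.
history = [
    {"role": "user", "parts": [{"text": "What is photosynthesis?"}]},
    {"role": "model", "parts": [{"text": "Plants convert light into chemical energy."}]},
    {"role": "user", "parts": [{"text": "What gas does it release?"}]},
]
payload = {"contents": history}
print(json.dumps(payload, indent=2))
```

The Playground maintains this list for you; it only becomes your responsibility when you move to API code.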
Build mode
Build mode generates complete applications from descriptions. The interface splits into three panels: chat for requests, code view for generated files, and live preview for testing.
Describe an app like “Create a recipe search tool with filters for dietary restrictions” and watch the platform generate React components, API calls, and styling. Make changes by continuing the conversation: “add a dark mode toggle” or “show cooking time for each recipe.”
Annotation Mode skips typing altogether. Click any UI element in the preview, describe the change, and the code updates to match. This works for layout adjustments, color changes, or adding new components.
Model selection
The Gemini tab offers two models. Flash processes requests quickly, making it ideal for simple tasks and high-volume applications. Pro provides deeper analysis and handles multi-step reasoning for complex questions.
Switch to the Images tab to access Nano Banana Pro. This model generates images and handles visual tasks like creating product mockups or marketing graphics.
Parameters in the right panel control how models respond. Temperature adjusts creativity: set it to 0 for factual consistency or 2 for varied, creative outputs. Output length determines how long responses can be. Top P stays at default (0.95) for most tasks.
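These right-panel settings correspond to fields in the API's generation config. A sketch of the mapping, using the REST API's camelCase field names (the example values mirror the defaults described above; this is an illustration, not exported code):

```python
# The Playground's right-panel sliders map onto generationConfig fields
# in the generateContent REST payload.
generation_config = {
    "temperature": 1.0,       # 0 = factual consistency, 2 = most varied
    "maxOutputTokens": 1024,  # caps how long responses can be
    "topP": 0.95,             # nucleus-sampling cutoff, default for most tasks
}
payload = {
    "contents": [{"parts": [{"text": "Explain what APIs are."}]}],
    "generationConfig": generation_config,
}
print(payload["generationConfig"])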
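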
AI Chips
AI Chips connect Google services directly into Build mode applications. Select Nano Banana to add image generation, Google Search to ground responses in current data, or Maps to include location features. These integrate without separate API configuration or authentication setup.
Code export and deployment
The “Download App” button converts Playground experiments into Python, JavaScript, or other languages. Generated code includes API calls and parameters ready to paste into projects.
Build mode apps deploy to Cloud Run with one click. The platform handles hosting, creates a live URL, and manages scaling automatically. No Docker containers or server configuration required.
With the platform’s main capabilities clear, setting it up takes just a few minutes.
Setting up Google AI Studio and getting your API key
Google AI Studio works entirely in the browser with just a Google account. No downloads, payment info, or setup required.
Step 1: Sign in to Google AI Studio
Go to the Google AI Studio website and sign in with a Google account. New users see a Terms of Service prompt for Generative AI. Accept this to access the platform.
The interface shows a left sidebar with Home, Playground, Build, Dashboard, and Documentation. The homepage displays platform capabilities, recent projects, and quick access cards.
Step 2: Test the Playground
Click “Playground” in the sidebar to open the chat interface. This is where all testing happens.
Type a simple prompt like “Explain how photosynthesis works in two sentences” at the bottom. Hit Run (or press Cmd/Ctrl + Enter).
The model processes the request and shows a response within seconds. If it works, the platform is ready to use. No API key needed for testing in the browser.
Step 3: Get an API key (optional)
An API key is only needed for using Gemini models in custom code or applications. Skip this step for now if you’re just exploring the Playground.
Look at the bottom of the left sidebar for “Get API key.” Click it to open the API Keys management page.
Click “Create API key” to generate a new key. The platform automatically creates a default Google Cloud project if this is the first key. A dialog box appears showing the new key.
Copy the key immediately. Store it in a password manager or environment variable. This key authenticates all API requests, so treat it like a password. Don’t commit it to GitHub or paste it in public forums.
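Once stored in an environment variable, the key can be used from code without any SDK. The sketch below is a minimal illustration (not AI Studio's exported code): it reads the key from `GEMINI_API_KEY` and calls the public `generateContent` REST endpoint. The model name `gemini-2.0-flash` and the helper names are assumptions; use any model ID shown in the Playground's selector.

```python
import json
import os
import urllib.request

# Public generateContent REST endpoint; {model} is filled in per request.
API_URL = "https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent"

def build_request(prompt, model="gemini-2.0-flash"):
    """Return the endpoint URL and JSON payload for one prompt."""
    url = API_URL.format(model=model)
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, payload

def generate(prompt, model="gemini-2.0-flash"):
    """Send the request, reading the key from the environment (never hardcode it)."""
    key = os.environ.get("GEMINI_API_KEY")
    if key is None:
        raise RuntimeError("Set GEMINI_API_KEY before calling the API.")
    url, payload = build_request(prompt, model)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "x-goog-api-key": key},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The reply text sits under candidates[0].content.parts[0].text.
    return body["candidates"][0]["content"]["parts"][0]["text"]

url, payload = build_request("Explain how photosynthesis works in two sentences.")
print(url)
```

Because the key comes from the environment, the same script works locally and in CI without the key ever appearing in source control.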
The free tier activates automatically with the first API key. Limits vary by model but typically range from 5 to 15 requests per minute with daily caps. Check usage anytime from Dashboard > Usage and Billing.
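Scripts that call the API in a loop can hit those per-minute limits quickly. A client-side throttle is one way to stay under them; the sketch below is illustrative (the class and the 10-requests-per-minute figure are assumptions, not part of any Google SDK):

```python
import time

# Illustrative client-side throttle for a per-minute request quota.
# The limit value is an assumption; check your model's actual quota
# under Dashboard > Usage and Billing.
class RateLimiter:
    def __init__(self, max_per_minute):
        self.max_per_minute = max_per_minute
        self.calls = []  # timestamps of requests made in the last minute

    def wait(self):
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.max_per_minute:
            # Sleep until the oldest request ages out of the window.
            time.sleep(60 - (now - self.calls[0]))
        self.calls.append(time.monotonic())

limiter = RateLimiter(max_per_minute=10)
limiter.wait()  # call once before each API request
```

This keeps free-tier scripts from triggering 429 errors instead of handling them after the fact.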
Now let’s explore what this Playground can actually do.
Creating and testing prompts in Google AI Studio
The Playground is where everything starts. This single interface handles all prompt testing, from quick questions to complex multi-turn conversations. Let’s walk through how to use it effectively.
Start with basic prompts in Google AI Studio
Open Playground from the sidebar. The interface loads with a chat window in the center and the Gemini tab selected at the top.
Look at the input box at the bottom. Type a straightforward prompt to see how the model responds:
Write a product description for wireless noise-canceling headphones. Include key features and benefits in 3-4 sentences.
Hit Run (or Cmd/Ctrl + Enter). The model processes this and returns a response within a few seconds.
The response shows up in the chat window. Notice how the model structures the answer and what details it includes. This gives a baseline for how it interprets instructions.
Now try being more specific with the same request:
Write three product descriptions for wireless noise-canceling headphones:
1. Professional tone for business travelers
2. Casual tone for students
3. Technical tone for audiophiles
Keep each under 50 words.
Run this and compare the three versions. The model adapts tone and focus based on the audience specified.
Set system instructions for consistent behavior
System instructions tell the model how to behave throughout the entire conversation. This saves repeating the same guidelines in every prompt.
Look for the settings icon in the right panel. Click it to expand the system instructions field.
Type instructions that define the model’s behavior:
Keep all responses under 100 words. Use a friendly, conversational tone. Avoid technical jargon unless specifically asked. When listing items, use numbered lists.
These instructions now apply to every message. Test this by asking several different questions:
What is machine learning?
How do electric cars work?
Explain photosynthesis.
Notice how each response stays under 100 words, maintains a friendly tone, and follows the formatting rules. Change the system instructions anytime to adjust behavior for different tasks.
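When you move from the Playground to API code, the same system instructions travel with every request as a separate field. The sketch below shows how they map onto the public `generateContent` REST payload (the `systemInstruction` field name follows that API; the helper function is illustrative):

```python
import json

# Illustrative helper: pairs a prompt with system instructions the way
# the generateContent REST API expects them. systemInstruction is a
# top-level field, kept separate from the conversation contents.
def with_system_instruction(prompt, instruction):
    return {
        "systemInstruction": {"parts": [{"text": instruction}]},
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
    }

payload = with_system_instruction(
    "What is machine learning?",
    "Keep all responses under 100 words. Use a friendly, conversational tone.",
)
print(json.dumps(payload, indent=2))
```

Because the instruction rides alongside every request rather than inside the prompt, changing behavior for a different task means editing one field, not rewriting every prompt.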
Switch models in Google AI Studio for different tasks
Different models excel at different things. The model selector at the top right shows which one is currently active.
Click the dropdown to see available options under the Gemini tab:
- Gemini 3 Flash: Fast responses, good for simple queries
- Gemini 3 Pro: Better at complex reasoning and nuanced questions
Try the same prompt with different models. Start with Flash selected:
Analyze the pros and cons of remote work from both employee and employer perspectives. Consider productivity, costs, culture, and work-life balance.
Note the response structure and depth. Now switch to Gemini Pro and run the exact same prompt. The Pro version typically provides more nuanced analysis and considers additional factors.
For image generation, click the Images tab at the top. The model selector changes to show image-specific options like Nano Banana Pro and Imagen 4.
Type an image prompt:
Create a modern product photo of wireless headphones on a clean white surface with soft lighting. Professional studio quality.
Select Nano Banana Pro and run it. The model generates an image based on the description.
Adjust parameters in Google AI Studio to control outputs
Parameters in the right panel control how the model generates responses. Temperature is the most important one to understand.
The temperature ranges from 0 to 2. Lower values produce factual, consistent responses. Higher values create more varied, creative outputs.
Test this with the same prompt at different temperatures. Set temperature to 0.2 and run:
Explain what APIs are and how they work.
The response will be straightforward and factual.
Now change temperature to 1.5 and run the same prompt again. The explanation becomes more creative, possibly using analogies.
The difference shows when to use each setting. Low temperature works for technical documentation or data extraction. High temperature fits brainstorming or creative writing.
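Gemini doesn't expose its sampling internals, but the standard mechanism behind temperature can be illustrated offline: it rescales candidate-token scores before sampling. The toy sketch below is illustrative only, not Gemini's implementation:

```python
import math

# Temperature rescales token scores before softmax sampling. Low values
# sharpen the distribution toward the top token; high values flatten it,
# giving lower-ranked tokens a real chance of being picked.
def apply_temperature(logits, temperature):
    scaled = [score / temperature for score in logits]
    peak = max(scaled)                    # subtract max for numeric stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]                  # scores for three candidate tokens
cold = apply_temperature(logits, 0.2)     # like the 0.2 setting above
hot = apply_temperature(logits, 1.5)      # like the 1.5 setting above
print(round(cold[0], 3))                  # top token dominates
print(round(hot[0], 3))                   # probability spreads out
```

At 0.2 the top token takes nearly all the probability mass, which is why low-temperature responses are so repeatable; at 1.5 the mass spreads across alternatives, which is where the varied phrasing comes from.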
With these prompt testing techniques down, Build mode takes things further by turning prompts into complete applications.
Building your first app with Google AI Studio’s Build mode
Build mode generates complete web applications from descriptions. Instead of just testing prompts, this creates actual deployable apps with a working interface and backend logic.
Step 1: Access Build mode in Google AI Studio
Click “Build” in the left sidebar. The interface changes to show three panels: chat on the left for describing the app, code in the middle, and a live preview on the right.
This is where apps get built through conversation. No need to write React components or configure APIs manually.
Step 2: Describe the app to build
Type what the app should do in the chat input. Be specific about functionality and features.
Try this example:
Create a simple recipe search app. Users can enter ingredients they have, and the app suggests recipes using those ingredients. Show recipe name, cooking time, and difficulty level for each result.
Hit Run. Watch the code panel populate with files as the model generates the application structure.
Step 3: Test and refine in Google AI Studio
The preview panel shows the app running. Try typing ingredients like “chicken, rice, garlic” to see recipe suggestions appear.
Notice something to improve? Continue the conversation:
Add a filter for dietary restrictions like vegetarian, vegan, and gluten-free.
The model updates the code, and the preview refreshes automatically. Keep refining through chat until the app works as intended.
Step 4: Deploy or export the code
Once the app is ready, two options are available. Click “Deploy App” for instant hosting with a live URL. Or click “Download App” to download the project and continue development locally.
Cloud Run deployment handles everything. The app goes live with automatic scaling and a shareable link within minutes.
Build mode turns ideas into working prototypes faster than traditional development. The generated code is clean and ready for production use or further customization.
Google AI Studio vs ChatGPT vs Claude
Google AI Studio is often compared with tools like ChatGPT and Claude. While all three work with large language models, they are built for different types of tasks.
The table below shows how the platforms differ in practice:
| Feature | Google AI Studio | ChatGPT | Claude |
|---|---|---|---|
| Primary focus | App development and prototyping | General-purpose assistant | Long-form writing and coding |
| Build apps | Yes, with Build mode | No | Limited through Artifacts |
| Free tier | Yes, 5 to 15 RPM | Yes, limited GPT-4 access | Yes, limited usage |
| Image generation | Yes, Nano Banana Pro | Yes, DALL-E | No |
| Video generation | Yes, Veo 3.1 | No | No |
| Code export | Yes, Python, JavaScript, more | Copy and paste only | Copy and paste only |
| Cloud deployment | One-click to Cloud Run | No | No |
| Context window | Up to 1M tokens | 128K tokens | 200K tokens |
| Best for | Building AI apps | Daily questions and tasks | Long documents and analysis |
| Google integration | Native Search, Maps, Drive | None | None |
| Voice interaction | Yes, Live mode | Yes, Advanced Voice | Yes, beta |
ChatGPT focuses on conversation, quick questions, and image generation. Claude is strongest when working with long documents or extended reasoning. Both are useful assistants, but neither is designed around building or deploying applications.
Google AI Studio is different. The platform is built around turning prompts into working software. Build mode generates full applications, exports production-ready code, and deploys directly to Cloud Run. Tight integration with Google services also makes features like search grounding, maps, and image generation available without extra setup.
For prompt testing or general chat, all three tools work well. For moving from an idea to a running application, Google AI Studio is designed specifically for that workflow.
Conclusion
Google AI Studio keeps experimentation, development, and deployment in one browser interface. No need for separate tools for prompt testing, code, or hosting.
This tutorial covered the practical steps needed to go from idea to running application:
- The structure of Google AI Studio, available Gemini models, and free tier usage limits
- Using the Playground to test prompts, run multi-turn conversations, and compare outputs
- Setting system instructions and adjusting temperature, output length, and Top P parameters
- Switching between Gemini Flash, Gemini Pro, and media models for text, image, video, and audio generation
- Building complete web applications through Build mode and editing layouts with Annotation Mode
- Exporting code as Python or JavaScript and deploying apps to Cloud Run with one click
To continue learning how generative AI fits into the broader Google Cloud ecosystem, take Codecademy’s free course on Generative AI on GCP: Harnessing Generative AI with Vertex AI to learn about Vertex AI, model APIs, AI Studio, and responsible AI development in more depth.
Frequently asked questions
1. Is Google AI Studio the same as Gemini?
No. Gemini is the model family, while Google AI Studio is the web platform used to test, build, and deploy applications powered by Gemini models.
2. Is Google AI Studio free to use?
Yes. Google AI Studio is free with a Google account and includes a free tier with usage limits that typically range from 5 to 15 requests per minute, depending on the model.
3. What models are in Google AI Studio?
Google AI Studio includes Gemini Flash and Gemini Pro for text, along with models for image generation, video generation, and audio, depending on the active Playground tab.
4. Who should use Google AI Studio?
Google AI Studio is best suited for developers, product teams, and technical learners who want to prototype, test, and deploy AI applications using Gemini models without complex setup.
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.