Android with AI Solution
Supercharge your Android apps with Generative AI
Learn how to build Android apps faster with Google
In this learning pathway, you'll discover how to build more engaging Android applications with less effort, using Google technologies. Over the following sections, you'll be building and enhancing a hypothetical meal preparation app — a stand-in for the type of app that you might be working on today as an Android developer.
You'll learn how to use Gemini in Android Studio to learn and develop faster, use Firebase to build your app's storage layers and sign-in, use Gemini to build state-of-the-art generative AI features into your application, and use tools like Firebase Remote Config, Google Analytics, and Crashlytics to support your app in production.
Supercharge Your Android Development with Gemini in Android Studio
It's easier than ever to build Android applications with the help of Gemini in Android Studio, your AI-powered coding companion.
Integrating AI directly into the IDE you use daily, Gemini in Android Studio is designed to make building high-quality Android apps faster and easier by assisting you throughout your entire software development lifecycle. This means you can learn new concepts faster, prototype with ease, and spend more time focusing on the parts of your application that matter.
As you begin work on your Android app, see how Gemini in Android Studio can supercharge your development journey.
Learning Android made easier with AI assistance
If you're new to Android or specific Android development areas, Gemini in Android Studio can be an invaluable learning tool.
- Get instant answers to your questions: You can ask Gemini questions about fundamental Android concepts, specific APIs, or best practices directly within Android Studio's chat window. For example, you can ask "What is dark theme?" or "What's the best way to get location on Android?".
- Receive code examples and guidance: Gemini can generate code snippets and provide guidance on implementing various features, such as adding camera support or creating a Room database. You can even ask for code in Kotlin or specifically for Jetpack Compose.
- Understand errors and find solutions: When you encounter build or sync errors, you can ask Gemini for an explanation and suggestions on how to resolve them. Gemini can also help analyze crash reports from App Quality Insights, providing summaries and recommending next steps.

Enhanced benefits for teams with Gemini in Studio for businesses
The individual version of Gemini in Android Studio is no-cost while in preview.
However, for development in large team environments with more demanding privacy and management requirements, Gemini in Studio for businesses offers additional valuable benefits, including enhanced privacy, security, and code customization features, and is available for use with your Google Cloud credits.
Together with Gemini Code Assist, these tools empower teams to leverage the power of AI with confidence, addressing crucial privacy, security, and management needs.
Firebase building blocks for your app
Common features in application development, such as cloud storage, user authentication, and crash reporting, are necessary components as you develop and operate any app.
Firebase simplifies the process of Android app development by providing these essential building blocks, eliminating the need for you to implement your own backend.
Cloud Firestore
For example, if you're building a recipe preparation app, you need to persist recipes, meal plans, and ingredient lists beyond the device (in case the user switches phones, for instance). You can persist this data in Cloud Firestore.
Cloud Firestore is a scalable NoSQL cloud database offered by Firebase and Google Cloud. It enables real-time data synchronization across client apps through real-time listeners, and it has offline support for mobile and web, ensuring responsive app performance regardless of network availability. It seamlessly integrates with other Firebase and Google Cloud products, including Cloud Functions.
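As a quick illustration, here's a minimal sketch of persisting a recipe with the Cloud Firestore Kotlin SDK. The `recipes` collection and the field names are illustrative assumptions, not part of this pathway's sample app:

```kotlin
import android.util.Log
import com.google.firebase.Firebase
import com.google.firebase.firestore.firestore

// Minimal sketch: persist a recipe document in Cloud Firestore.
// The "recipes" collection and field names are illustrative assumptions.
val db = Firebase.firestore
val recipe = hashMapOf(
    "title" to "Shakshuka",
    "cuisine" to "Mediterranean",
    "ingredients" to listOf("eggs", "tomatoes", "peppers")
)
db.collection("recipes")
    .add(recipe)
    .addOnSuccessListener { ref -> Log.d("RecipeRepo", "Saved recipe ${ref.id}") }
    .addOnFailureListener { e -> Log.w("RecipeRepo", "Error saving recipe", e) }
```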

Authentication
User authentication is essential to let users who switch devices access their data – and to make sure others cannot!
Firebase Authentication is a powerful tool that simplifies the process of adding user authentication to Android apps. It provides backend services, and an SDK with ready-made UI libraries that support various authentication methods, including email/password login, phone number authentication, and integration with popular federated identity providers like Google, Facebook, and Twitter.
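For instance, here's a minimal sketch of email/password sign-in with the Firebase Authentication Kotlin SDK; the `email` and `password` values are assumed to come from your sign-in UI:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.auth.auth

// Minimal sketch: sign an existing user in with email and password.
// email and password are assumed to come from your sign-in form.
val auth = Firebase.auth
auth.signInWithEmailAndPassword(email, password)
    .addOnCompleteListener { task ->
        if (task.isSuccessful) {
            val user = auth.currentUser // proceed with the signed-in user
        } else {
            // Inspect task.exception and show an error to the user
        }
    }
```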
Crash reporting
Monitoring errors and crashes is essential to ensuring your apps are stable and successful – a crashing app will frustrate your users and get uninstalled!
Firebase Crashlytics is a real-time crash reporter that helps you track, prioritize, and fix stability issues that erode your app quality. It saves you troubleshooting time by intelligently grouping crashes and highlighting the circumstances that lead up to them.
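As a small illustration, a minimal sketch of recording a handled exception with Crashlytics might look like this; the `loadMealPlan()` call and the custom key are hypothetical:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.crashlytics.crashlytics

// Minimal sketch: record a handled (non-fatal) exception with a custom key
// so it appears in the Crashlytics dashboard alongside your crashes.
val crashlytics = Firebase.crashlytics
try {
    loadMealPlan() // hypothetical call that may fail
} catch (e: Exception) {
    crashlytics.setCustomKey("screen", "meal_plan")
    crashlytics.recordException(e)
}
```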
Both Cloud Firestore and Firebase Authentication offer generous no-cost tiers; however, if your app requires more quota or advanced features from these services, you'll need to be on a paid plan. But don't worry – you can use your Cloud credits to cover those costs! And Crashlytics is free of charge no matter how much you use it.
To learn about the other solutions provided by Firebase, visit the Firebase website.
Generative AI on Android
Integrating generative AI within our meal preparation Android application can be achieved in various ways. Here is a quick overview of each option:
Gemini Nano on Android
Gemini Nano is the model in the Gemini family that is optimized to run on-device. It is integrated directly into the Android OS via AICore. You can use it to deliver generative AI experiences without the need for a network connection or sending data to the cloud.
On-device AI is a great option for use cases where low latency, low cost, and privacy safeguards are your primary concerns. For example, in a meal prep app, Gemini Nano could be used to suggest meal ideas based on different cuisines and the user's meal history.
You can learn more about Gemini Nano’s technical architecture in the Android documentation.
To experiment with Gemini Nano in your own application, review the Gemini Nano on-device with the experimental Google AI Edge SDK step below.

Imagen & Gemini Pro and Flash: Google GenAI cloud models
Generative AI models that are optimized to run on the cloud are generally more capable than on-device AI models.
As an Android developer, you can use Vertex AI in Firebase to quickly implement generative AI capabilities in your Android app using Gemini Pro and Flash models for text generation tasks and Imagen for image generation tasks.
Gemini Pro and Flash
The Gemini Pro and Flash family of AI models are multimodal and can handle a wide range of tasks. They take image, audio, and video input and generate text output that can be formatted as JSON, XML, and CSV. And the newest Gemini models can even generate multimodal output, like audio and images!
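To make the multimodal part concrete, here's a minimal sketch of sending an image together with text using Vertex AI in Firebase (covered in a later step). The model name, the coroutine `scope`, and `bitmap` (assumed to be an `android.graphics.Bitmap` of a dish photo) are illustrative assumptions:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.vertexai.vertexAI
import com.google.firebase.vertexai.type.content

// Minimal sketch: a multimodal prompt mixing an image and text.
val model = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
scope.launch {
    val response = model.generateContent(
        content {
            image(bitmap) // a Bitmap of the dish, from your app
            text("List the ingredients you can see in this dish.")
        }
    )
    print(response.text)
}
```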
For example, in a meal prep app, you can use a Gemini model to create a shopping list with ingredients for a specific type of cuisine.
And you can use your Google Cloud credits to cover the costs of these calls to the Gemini models!
To learn how to use cloud-hosted Gemini models in your app, review the Gemini via Vertex AI in Firebase step below.

Imagen 3
Imagen 3 is Google's latest image generation model. And you can access it via Vertex AI in Firebase, making adding image generation capabilities to your Android app quick and seamless.
For example, in a meal prep app, you can use the Imagen 3 model to generate recipe illustrations.
The cost can be covered by your Google Cloud credits.
To learn how to use Imagen 3 in your app, read the Imagen 3 for image generation step below.
Backend integration
You can also add generative AI capabilities through backend integration:
- Genkit is an open-source framework that simplifies the development, deployment, and monitoring of AI-powered applications.
- For more advanced MLOps needs, Google Cloud's Vertex AI offers fully managed services as well as a rich offering of models via the Vertex AI Model Garden. You can use your Google Cloud credits to cover any costs for these services, too.
On-device custom solutions
If you want to run AI inference on-device beyond Gemini Nano, you can also experiment with LiteRT and MediaPipe:
- LiteRT (formerly TFLite) is Google's high-performance runtime for on-device AI, designed to efficiently execute machine learning models directly on devices.
- MediaPipe is an open-source framework that enables developers to build machine learning pipelines for processing multimedia data, such as video and audio, in real-time.
To learn more about the Android GenAI offerings, visit the AI section of the Android documentation.
Gemini Nano on-device experimental access
The Google AI Edge SDK enables Android app developers to integrate and experiment with Gemini Nano's on-device GenAI capabilities to enhance their applications.
Here's how to get started:
- Join the aicore-experimental Google group
- Opt in to the Android AICore testing program
After you complete these steps, the AICore app name on the Play Store (under Manage apps & device) should change from "Android AICore" to "Android AICore (Beta)".

- Follow these steps to make sure that the APKs and binaries are properly downloaded on your device.
- Then update the Gradle configuration of your app by adding the following dependency:
implementation("com.google.ai.edge.aicore:aicore:0.0.1-exp01")
And make sure that you set the minimum SDK target to 31.
implementation("com.google.ai.edge.aicore:aicore:0.0.1-exp01")
Next, you can configure the model to control its responses. This involves providing the context and optionally setting the following parameters:
- Temperature: controls the level of randomness. Higher values will result in greater diversity in the output.
- Top K: specifies the number of highest-ranking tokens to be considered for output generation.
- Candidate Count: sets the maximum number of responses to be returned.
- Max Output Tokens: sets the maximum length of the response.
```kotlin
val generationConfig = generationConfig {
    context = ApplicationProvider.getApplicationContext()
    temperature = 0.2f
    topK = 16
    maxOutputTokens = 256
}
```
Create an optional `downloadCallback` function. This callback function is used for model downloading. It also returns messages that can be used for debugging purposes.
Generate the `GenerativeModel` object using the generation and optional download configs that you previously created.
```kotlin
val downloadConfig = DownloadConfig(downloadCallback)
val generativeModel = GenerativeModel(
    generationConfig = generationConfig,
    downloadConfig = downloadConfig // optional
)
```
Finally, launch the inference by passing your prompt to the model. Ensure that `GenerativeModel.generateContent()` is called within an appropriate coroutine scope, as it is a suspend function. In the context of an example meal prep application, Gemini Nano can provide meal inspiration by suggesting various cuisine types and meals that are different from the meal history.
```kotlin
scope.launch {
    val input = "Suggest different types of cuisines and easy to cook dishes that are not $recentMealList"
    val response = generativeModel.generateContent(input)
    print(response.text)
}
```
The Gemini Nano model has a maximum input token limit of 12,000. To learn more about Gemini Nano experimental access, go to the Gemini Nano section of the Android documentation.
Gemini via Vertex AI in Firebase
Leveraging Vertex AI in Firebase allows you to build genAI-powered features using Gemini cloud models, all with the seamless deployment and management of the Firebase ecosystem.
Get started by experimenting with prompts in Vertex AI Studio. It's an interactive interface for prompt design and prototyping. You can upload files to test prompts with text and images and save a prompt to revisit it later.

When you're ready to call the Gemini API from your app, set up Firebase and the SDK by following the instructions in the Vertex AI in Firebase getting started guide.

Then, add the Gradle dependency to your project:

```kotlin
dependencies {
    ...
    // Import the BoM for the Firebase platform
    implementation(platform("com.google.firebase:firebase-bom:33.10.0"))

    // Add the dependency for the Vertex AI in Firebase library
    // When using the BoM, you don't specify versions in Firebase library dependencies
    implementation("com.google.firebase:firebase-vertexai")
}
```

You can now call the Gemini API from your Kotlin code. First initialize the Vertex AI service and create a `GenerativeModel` instance:

```kotlin
val generativeModel = Firebase.vertexAI.generativeModel(
    "gemini-2.0-flash",
    generationConfig = generationConfig {
        responseMimeType = "application/json"
        responseSchema = jsonSchema // a Schema you define describing the expected JSON output
    }
)
```

In the case of an example recipe app, Gemini 2.0 Flash can create a shopping list of ingredients for cooking a meal in a specific cuisine style. You can even ask the model to generate a JSON string that can be easily parsed in the app for rendering into the UI. To generate the list, just call `generateContent()` with a text prompt:

```kotlin
scope.launch {
    val response = generativeModel.generateContent(
        "Create a shopping list with $cuisineStyle ingredients"
    )
}
```

Review the Android developer guide to learn more about it.
Imagen 3 for image generation
Imagen 3 is accessible through Vertex AI in Firebase so that you can seamlessly integrate image generation into your Android apps. As Google's most advanced image generation model, Imagen 3 produces high-quality images with remarkable detail, minimal artifacts, and realistic lighting effects, setting a new standard in image generation.
For example, Imagen 3 could allow your users to generate their own profile avatars or create assets to illustrate existing screen flows. For an example meal prep app, you can use Imagen 3 to generate images for the recipe screen.
Image generated by Imagen 3 with the prompt: A cartoon style illustration of a top overview of a kitchen countertop with beautiful ingredients for a Mediterranean meal.

The integration of Imagen 3 is similar to accessing a Gemini model via Vertex AI in Firebase.

Start by adding the Gradle dependencies to your Android project:

```kotlin
dependencies {
    implementation(platform("com.google.firebase:firebase-bom:33.10.0"))
    implementation("com.google.firebase:firebase-vertexai")
}
```

Then, in your Kotlin code, create an `ImagenModel` instance by passing the model name and, optionally, a model configuration:

```kotlin
val imageModel = Firebase.vertexAI.imagenModel(
    modelName = "imagen-3.0-generate-001",
    generationConfig = ImagenGenerationConfig(
        imageFormat = ImagenImageFormat.jpeg(compressionQuality = 75),
        addWatermark = true,
        numberOfImages = 1,
        aspectRatio = ImagenAspectRatio.SQUARE_1x1
    )
)
```

Finally, generate the image by calling `generateImages()` with a text prompt:

```kotlin
val imageResponse = imageModel.generateImages(
    prompt = "A cartoon style illustration of a top overview of a kitchen countertop " +
        "with beautiful ingredients for a $cuisineStyle meal."
)
```

Retrieve the generated image from the `imageResponse` and display it as a bitmap:

```kotlin
val image = imageResponse.images.first()
val uiImage = image.asBitmap()
```
You can read more about using Imagen 3 in the Android developers blog and in the Android Developer documentation.
Get ready for production with Firebase
Once you've implemented your genAI features in your app, here are some critical next steps before you deploy your app into production:
- Implement Firebase App Check with Play Integrity to prevent API abuse.
- Use Firebase Remote Config for server-controlled configuration to dynamically update the AI model name and version (see the sketch after this list).
- Build feedback mechanisms with Google Analytics to evaluate the impact and gather user input on AI responses.
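For the Remote Config item above, a minimal sketch might look like the following; the `gemini_model_name` key and its default value are illustrative assumptions:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.remoteconfig.remoteConfig
import com.google.firebase.vertexai.vertexAI

// Minimal sketch: read the Gemini model name from Remote Config so it can
// be changed server-side without shipping an app update.
// The "gemini_model_name" key and its default are illustrative assumptions.
val remoteConfig = Firebase.remoteConfig
remoteConfig.setDefaultsAsync(mapOf("gemini_model_name" to "gemini-2.0-flash"))
remoteConfig.fetchAndActivate().addOnCompleteListener {
    val modelName = remoteConfig.getString("gemini_model_name")
    val model = Firebase.vertexAI.generativeModel(modelName)
}
```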