Discover how you can quickly set up a personal AI assistant on your home network using a Raspberry Pi.
The past few years have seen a rapid rise in artificial intelligence and large language models (LLMs). LLMs have been instrumental in the proliferation of ‘modern AI’ applications and are significantly better at text generation and responding to human queries than natural language processing (NLP) based home assistants like Amazon Alexa.
LLMs are mostly consumed as web applications, like most other software as a service (SaaS) offerings, and they come with the same risks as any SaaS application, the biggest being data privacy and sovereignty. When using SaaS applications, we usually have no control over the data centres where our data is processed, how it is stored, or whether it may be breached. There is a long history of SaaS applications getting breached and user data ending up in darknet pastebins, waiting to be sold to the highest bidder. The same is true of LLMs.
For example, the recent DeepSeek breach may have exposed over a million sensitive records. Similarly, there was a major OpenAI breach in July 2024. In March 2024, a data breach in a Google Gemini system allegedly leaked personal information of approximately 5.7 million Gemini users. Even Anthropic was a victim of a third-party data breach in January 2024. This shows that LLM systems have major vulnerabilities when it comes to protecting user data. Moreover, when we use LLMs we usually share personal information with them. For example, we may share reports of a recent medical examination, querying the LLM about causes or probable treatments. Or we may share our bank passbooks and salary slips and ask an LLM to help with financial management.

As you can see, the data we share with an LLM is often extremely sensitive, and sharing it is risky because LLM services, like any SaaS, are prone to breaches. Hence, it is very important to have personal, private deployments of LLMs. Relying on SaaS services for such sensitive data is not recommended, especially when the probability of a breach is high.
Thankfully, you can now build a personal LLM system attached to the home network for daily use instead of relying entirely on SaaS applications. With advances in model quantization, it is now possible to run quantized models on low-end hardware, and setting one up on the local network is incredibly easy. All we need is a single-board computer like the Raspberry Pi with around 8GB of RAM. Open source projects like Ollama make it possible to download the model files and host them locally.
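As a quick illustration of how simple this is, here is what downloading and querying a model looks like once Ollama is installed (installation is covered below). The model tag here is only an example; the default tags on Ollama’s registry are typically already 4-bit quantized builds suited to low-end hardware.

# Download a quantized model from the Ollama registry (example tag)
ollama pull llama3:8b
# Chat with it interactively from the terminal
ollama run llama3:8b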

Raspberry Pi runs a custom Debian-based operating system, which can be installed on an SD card with the Raspberry Pi Imager application. To set it up from scratch, insert a blank microSD card (16GB to 32GB is sufficient) into the computer, and open the Raspberry Pi Imager application. Then select the device and opt for ‘Raspberry Pi OS Lite’. This OS has no GUI overhead and is faster than the standard Raspberry Pi OS.
In the customisation option, select the following:
- Enable the ‘Set hostname’ field. The default value is ‘raspberrypi.local’ and need not be changed.
- Enable SSH with password authentication.
- Set the username and password. The default username is ‘pi’, while the default password is ‘raspberry’. It is recommended not to use the default password.
- Set up the Wi-Fi credentials if you are using home Wi-Fi. Otherwise, you can simply connect the device to the router via an Ethernet cable.
You can now start flashing. When flashing is complete, eject the SD card, insert it into the Raspberry Pi, and power the board on. To connect it via Ethernet, plug it into a LAN port of the router.
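If you prefer the command line to the Imager GUI, the card can also be flashed with dd. This is only a sketch: the image filename and device name below are placeholders you must replace with your own, and writing to the wrong device will destroy its data.

# Find the device name of the SD card (e.g., /dev/sdb)
lsblk
# Write the downloaded (and extracted) OS image to the card; double-check the device name first
sudo dd if=raspios-lite.img of=/dev/sdX bs=4M status=progress conv=fsync

Note that flashing with dd skips the Imager’s customisation step, so the hostname, SSH and Wi-Fi settings would then need to be configured separately.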
Connect the PC to the home Wi-Fi and SSH into the Raspberry Pi. To do this, open the command prompt and enter ‘ssh [email protected]’ (more generally, ‘ssh <username>@<hostname>’). It will prompt you for the password.
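For example, assuming the default hostname and the username ‘pi’ set during flashing, the login looks like this; the ping simply confirms the Pi is reachable on the network first.

# Confirm the Pi is reachable via its mDNS hostname (on Windows, use ping -n 3)
ping -c 3 raspberrypi.local
# Log in over SSH; accept the host key when prompted on first connection
ssh [email protected]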
To install Ollama and set it up as a home assistant, run the following command:
curl -fsSL https://github.com/AdityaMitra5102/LocalLLM/raw/refs/heads/main/oneclickinstall.sh | sh
This script automatically installs Ollama and a web UI for it, and adds environment variables that allow devices on your network to access it. It also downloads the open source models DeepSeek R1, Llama 3, and Gemma, and runs them privately so that there is no risk of your data leaving your home network.
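Once the script finishes, a quick sanity check can be run over the same SSH session. This assumes the script registers Ollama as a systemd service, as the official Ollama installer does; the model name is one of those the script downloads.

# Check that the Ollama service is up
systemctl status ollama
# List the models that were downloaded
ollama list
# Ask a model a question directly from the terminal
ollama run llama3 "Say hello from my Raspberry Pi"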

You can access it at http://raspberrypi.local (more generally, http://<hostname>) from any device connected to your home network.
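Besides the web UI, Ollama’s REST API should also be reachable from the LAN, assuming the script exposes it on Ollama’s default port 11434 (typically by setting OLLAMA_HOST=0.0.0.0). A quick test from any machine on the network:

# Send a one-off prompt to the Ollama API and get a single JSON response
curl http://raspberrypi.local:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'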
Locally hosting an LLM comes with its pros and cons. The pros include privacy and data sovereignty, as users retain complete autonomy over their data and how it is processed. The main con is limited processing power, which results in slower responses and an inability to run bigger models. For most purposes, however, the pros outweigh the cons, and a locally hosted LLM serves well.