
Turn Your Raspberry Pi into an Off-Grid AI Hub: Ollama, Open WebUI, & IIAB Menu Integration


Introduction

Creating an off-grid AI hub with a Raspberry Pi is more accessible than ever, thanks to software that lets you run large language models entirely locally. This article walks through the step-by-step process of installing Ollama, setting up Open WebUI, and integrating both with the Internet in a Box (IIAB) menu to create a self-contained AI setup.

Equipment and Initial Setup

For this project, a Raspberry Pi 4 with 4GB of RAM was used, together with a 1TB USB hard drive and a wired internet connection. Raspberry Pi OS 64-bit Lite was installed, along with a full installation of Internet in a Box, excluding the Internet Archive module. Excluding this module is crucial, because its menu entry will be reused for the Ollama integration.

Connecting to Your Raspberry Pi

To begin, connect to your Raspberry Pi over SSH (for example, with PuTTY on Windows) and log in. First, ensure that curl is installed:

sudo apt-get install curl

Next, run the Ollama installer. Depending on your internet connection, this may take some time. When it finishes, you might see a warning that no GPU was detected; this is expected behavior on a Raspberry Pi.
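The exact command isn't shown above; at the time of writing, Ollama's documented Linux install method is a script fetched from ollama.com. A sketch of that approach, downloading the script first so you can inspect it before running it:

```shell
# Fetch the official Ollama install script for Linux
curl -fsSL https://ollama.com/install.sh -o install-ollama.sh

# Review the script before executing it (good practice for any
# curl-to-shell installer)
less install-ollama.sh

# Run the installer
sh install-ollama.sh
```

After installation, `ollama --version` should print a version string.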

Installing Docker

Following the installation of Ollama, Docker is required to set up Open WebUI easily. Install Docker with the following command:

curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh

After installing Docker, add your user profile to the Docker User Group with this command:

sudo usermod -aG docker $USER

Remember to log out and back in for the changes to take effect. You can verify your Docker installation by using:

docker --version

Configuring Open WebUI

Edit the Ollama systemd service file so that Open WebUI (running in Docker) can reach Ollama over the network:

sudo nano /etc/systemd/system/ollama.service

Add an Environment line in the [Service] section so Ollama listens on all interfaces rather than only on localhost, then save and exit. Restart the service with these commands:

sudo systemctl daemon-reload
sudo systemctl restart ollama.service
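The exact edit isn't reproduced in this article; the change typically needed is an Environment entry in the [Service] section, because by default Ollama binds only to 127.0.0.1, which a Docker container cannot reach. A minimal sketch of the relevant part of ollama.service:

```ini
[Service]
# Make Ollama listen on all interfaces so the Open WebUI container
# can reach it. Note: 0.0.0.0 also exposes Ollama to your whole LAN.
Environment="OLLAMA_HOST=0.0.0.0"
```

Leave the existing lines (ExecStart, User, and so on) in place; only add the Environment entry.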

Next, create a directory for the Open WebUI compose file and enter the following commands to set it up:

mkdir ~/openwebui
cd ~/openwebui
nano docker-compose.yml

In the docker-compose.yml file, define the Open WebUI service with settings that work well on a Raspberry Pi 4.
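The article doesn't reproduce the exact file; the sketch below follows the common Open WebUI compose layout. The image name and the OLLAMA_BASE_URL variable come from the Open WebUI project, while the host port 4244 is chosen here to match the example URL used later in this article; adjust to taste:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "4244:8080"              # host port 4244 -> container port 8080
    environment:
      # Point the UI at the Ollama service running on the Pi itself
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
    extra_hosts:
      # Lets the container resolve the host machine
      - "host.docker.internal:host-gateway"
    volumes:
      - open-webui:/app/backend/data   # persist chats, users, documents
    restart: unless-stopped

volumes:
  open-webui:
```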

Starting the Docker Container

Once your Compose file is saved, start the container:

docker compose up -d

If you installed the older standalone docker-compose binary, the command is docker-compose up -d instead.

After the container starts, open a web browser and go to your Raspberry Pi's IP address followed by the port you mapped (for example, http://192.168.1.2:4244). On first access, create an administrator account.

Downloading Language Models

To enable the language model capabilities of Ollama, browse its library of available models. Look for models around 1 billion (1B) parameters, since the Raspberry Pi's 4GB of RAM limits what it can run. Copy the model's name and tag, paste it into the model settings in Open WebUI, and the download will start.
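If you prefer the command line over the web interface, models can also be pulled directly with Ollama. The tag below is one roughly-1B-parameter example from the Ollama library; check the library for current names:

```shell
# Pull a small model suited to 4GB of RAM (example tag; verify in
# the Ollama library before pulling)
ollama pull llama3.2:1b

# List the installed models
ollama list

# Try a quick prompt from the terminal
ollama run llama3.2:1b "Say hello in one sentence."
```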

Using Retrieval Augmented Generation (RAG)

In the Open WebUI admin panel, under the Documents section, you can add documents for RAG. Transfer the files to a FAT32-formatted USB drive, mount it on your Raspberry Pi, and copy the documents into the appropriate directory. Scanning the documents from the admin panel then makes them available to the models.
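One way to get the files across might look like the following. The device name /dev/sda1, the container name open-webui, and the in-container path are all assumptions based on typical defaults; check yours with lsblk and docker ps first:

```shell
# Identify the USB drive's device name (assumed /dev/sda1 below)
lsblk

# Mount the drive
sudo mkdir -p /mnt/usb
sudo mount /dev/sda1 /mnt/usb

# Copy a document into the container's document directory
# (container name and path assume Open WebUI's default layout)
docker cp /mnt/usb/notes.pdf open-webui:/app/backend/data/docs/

# Unmount when done
sudo umount /mnt/usb
```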

Custom Model Creation

An impressive feature of this setup is the ability to create custom models based on your own data. From the Open WebUI homepage, open the Workspace to create a new model, attach documents to it, or define a system prompt.

Integrating with IIAB Menu

To make the AI easy to reach, the custom application can take over the Internet Archive's entry in the IIAB menu. Log into the IIAB admin portal, edit the menu, drag the Internet Archive item into place as a new custom icon, and update the item to point at the AI application.

Conclusion

Setting up a Raspberry Pi as an off-grid AI hub with Ollama, Open WebUI, and IIAB menu integration opens up numerous possibilities for personal projects, from offline chat with local language models to document-grounded answers over your own files.

We encourage you to explore and leverage this technology in your projects. Happy experimenting!


Keywords

Raspberry Pi, Off-Grid AI, Ollama, Open WebUI, Internet in a Box, Docker, Retrieval Augmented Generation, Language Models, Custom Models


FAQ

What is Ollama?
Ollama is a prepackaged application that lets users run large language models locally, keeping all data on the device.

What hardware is required for this setup?
A Raspberry Pi 4 (4GB RAM), a USB hard drive, and a wired internet connection are recommended for good performance.

Can I use larger models?
The Raspberry Pi 4 with 4GB of RAM handles models up to about 1B parameters well, but slightly larger models are worth experimenting with, especially on the 8GB variant.

How does Retrieval Augmented Generation (RAG) work?
RAG allows you to add documents that the AI can use as sources when generating responses, providing more accurate and relevant information.

What if I want to revert to using the Internet Archive module?
You can easily do that by changing the port configuration in your setup. Just make sure to access the desired module using the appropriate IP address and port.
