Version: 0.0.2

AI

AudioMuse

I only really became aware of AudioMuse with Navidrome's 0.60 release, which enabled plugins, smart playlist creation, and more. It's something I'd been looking for for a while: a way to analyze my music library and get cool recommendations using AI. It also prompted me to set up Ollama, since using Gemini got me rate limited pretty quickly. So far the UI makes it really easy to run tasks, and with the GPU image on my NVIDIA card it's very fast.

Ollama

This turned out to be a really cool setup, and I'm still messing around with it to decide whether I want to keep it. I wanted a way to get better usage out of my NVIDIA 3060, since it mostly sits idle unless the Windows KVM I have set up is using it. So I gave Ollama a spin, and it turns out to pair really nicely with other tools. For now I've followed Gemini's recommendations in selecting which models to download, and I ended up with these:

- llama3.1:latest -> for general information
- qwen2.5-coder:14b -> for coding
- deepseek-r1:14b -> for general information also
- gemma3:12b -> for general information also (testing this out as my default)
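For reference, pulling these models is one command each (this assumes the `ollama` CLI is reachable, e.g. by `docker exec`-ing into the container):

```shell
# Pull each model from the Ollama registry; the 12b-14b models are several
# GB each, so make sure the volume backing /root/.ollama has room.
ollama pull llama3.1:latest
ollama pull qwen2.5-coder:14b
ollama pull deepseek-r1:14b
ollama pull gemma3:12b

# Confirm the pulls succeeded
ollama list
```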

Setting up my NVIDIA GPU for use in containers was a whole hassle that I'll document on the Docker page, but so far the models are pretty fast and seem pretty decent. One thing that has been annoying is that the models are trained on data from 2024 or earlier.
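As a sketch of the container side (service names, ports, and volume names here are my assumptions, not necessarily my exact config), a Compose service that hands the NVIDIA GPU to Ollama needs a `deploy.resources` device reservation, which only works once the NVIDIA Container Toolkit is installed on the host:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"             # Ollama's default API port
    volumes:
      - ollama-data:/root/.ollama # where pulled models are stored
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all          # or 1, to pin a single GPU
              capabilities: [gpu]

volumes:
  ollama-data:
```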

Open-WebUI

Getting Ollama set up wouldn't have been nearly as easy or configurable without Open WebUI. It's essentially a frontend for Ollama: it handles interacting with the models and lets me configure the entire experience, so it all feels like using ChatGPT or Gemini. One thing that has been amazing is how well documented everything is, from user authentication via headers, to enabling models to execute web searches, to small tweaks that make the experience really clean.

It's possible to run Ollama and Open WebUI in a single container, but it felt better to keep them separate. Downloading models through Open WebUI was really simple: just search for the tag and hit fetch. To make the experience better, I went into the admin panel, modified the downloaded models, and added a system prompt I got from Gemini to make responses nicer. I also added searxng as a web search tool to facilitate an even better experience.
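For the web-search piece, Open WebUI can be pointed at searxng through environment variables. The variable names below are from the release I was using and may differ in newer versions, so treat this as a hedged sketch rather than the canonical config:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
      - ENABLE_RAG_WEB_SEARCH=true
      - RAG_WEB_SEARCH_ENGINE=searxng
      - SEARXNG_QUERY_URL=http://searxng:8080/search?q=<query>
      - RAG_WEB_SEARCH_RESULT_COUNT=3  # keep results small so searches don't flood the context
```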

note

Be sure to set up searxng with a limited result count, or search results can overwhelm the model's context.

SearXNG

To enhance the model experience with Ollama, I wanted a search service to pair with Open WebUI, and I decided on searxng since it's free. It was surprisingly easy to set up; a quick settings tweak or two and it was good to go. I did have to add `- json` to the formats list in searxng's search settings, and then it worked perfectly with Open WebUI.
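Concretely, the tweak lives in searxng's `settings.yml`: Open WebUI queries the JSON API, which searxng disables by default, so `json` has to be added to the allowed output formats:

```yaml
# settings.yml (searxng)
search:
  formats:
    - html
    - json  # required for Open WebUI's API queries
```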