
Tech Guidance for Non-Technical Founders

SundAI: Access AI Chat Locally


Did you know you can run certain Large Language Models (LLMs) on your own hardware?

The two main benefits of this are:

  1. The data you share in your AI chat sessions stays private
  2. You can give the LLM access to your private documents, and they won't leave your network

It's quite simple to set up:

  1. Download Ollama
  2. Install it, then choose a model to download. Llama 3 is a good choice to start with.
  3. Run the model and start chatting with it (there's a short example below).
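
Once it's installed, Ollama also runs a local server you can script against. Here's a minimal sketch of a single chat turn, assuming you've installed the ollama Python package (pip install ollama) and pulled the llama3 model:

    import ollama  # pip install ollama; talks to the local Ollama server

    # Send one chat message to the locally running llama3 model.
    response = ollama.chat(
        model="llama3",
        messages=[{"role": "user", "content": "Explain what an LLM is in one sentence."}],
    )
    print(response["message"]["content"])

Everything here talks to localhost, so nothing you type ever leaves your machine.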

The Ollama docs are easy to follow, so there's no need to repeat them here.

Unless you have access to a very expensive GPU, you'll notice it's slower than ChatGPT, but it's absolutely usable.

With a bit more work, you can configure a local LLM to access your own private documents, which means you can chat with your notes, company docs, whatever.
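
Under the hood, this is usually retrieval-augmented generation: turn your documents into vectors, find the ones most relevant to a question, and paste them into the prompt. Here's a rough sketch of the idea, assuming the ollama Python package and a local embedding model (ollama pull nomic-embed-text); the documents are hypothetical stand-ins for your own files:

    import ollama

    # Hypothetical stand-ins for your own notes or company docs.
    docs = [
        "Our refund policy allows returns within 30 days of purchase.",
        "Support hours are 9am to 5pm GMT, Monday to Friday.",
    ]

    def embed(text):
        # Convert text to a vector using a local embedding model.
        return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

    def cosine(a, b):
        # Similarity between two vectors; higher means more related.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = sum(x * x for x in a) ** 0.5
        norm_b = sum(y * y for y in b) ** 0.5
        return dot / (norm_a * norm_b)

    doc_vectors = [embed(d) for d in docs]

    question = "When can customers get a refund?"
    q_vec = embed(question)

    # Find the most relevant document and paste it into the prompt.
    best = max(range(len(docs)), key=lambda i: cosine(q_vec, doc_vectors[i]))
    reply = ollama.chat(
        model="llama3",
        messages=[{
            "role": "user",
            "content": f"Using only this context:\n{docs[best]}\n\nAnswer this question: {question}",
        }],
    )
    print(reply["message"]["content"])

Tools like GPT4All wrap this same plumbing behind a friendlier interface.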

Doing this with Ollama is a little involved, but I did try a similar product called GPT4All, which makes it easier. Unfortunately, I found the software quite clunky and slow. It does work, though, and it's under active development, so you may like to try it.

The default way to access Ollama is via the command line, so you may prefer a graphical interface: BoltAI if you have a Mac, or Backyard on Windows.

Using LLMs is definitely one way to improve your productivity, but if you're like me, you'll want to incorporate your private documents without handing them to a remote corporation.

I'm excited to see how the local LLM space evolves!
