Set up a Local AI like ChatGPT on your own machine!
Tagged: AI model, artificial intelligence, Installation, Linux, Ollama, Software setup, WebUI
This topic has 7 replies, 1 voice, and was last updated 2 months, 2 weeks ago by thumbtak.
September 24, 2024 at 3:55 pm #7623
thumbtak (Moderator)

Install on Linux
$ sudo apt install python3.11-venv
$ python3.11 -m venv myenv
$ source myenv/bin/activate
$ pip install open-webui
$ pip install async_timeout
$ sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
$ sudo apt install snapd
$ sudo snap install ollama
$ ollama pull llama3.1:latest
$ open-webui serve
Run on Linux (after the session above has been closed):
$ cd ~/myenv
$ source bin/activate
$ open-webui serve
Allow others on your network to use this:
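A minimal sketch of one way to do this, assuming open-webui's serve command supports --host and --port options (verify with open-webui serve --help) and that your firewall is ufw:
$ open-webui serve --host 0.0.0.0 --port 8080   # bind to all interfaces, not just localhost
$ sudo ufw allow 8080/tcp                       # open the port if ufw is active
Other machines on your LAN can then browse to http://<your-ip>:8080.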
Script that can be run each time you want to start the Web UI:
#!/bin/sh
# Navigate to the home directory, or the directory where your virtual environment is
cd ~/myenv || exit  # If this fails, exit the script
# Activate the virtual environment
if [ -f "bin/activate" ]; then
    . bin/activate  # Use '.' (dot) instead of 'source' in sh
else
    echo "Virtual environment not found. Exiting."
    exit 1
fi
# Notify the user before starting, since 'open-webui serve' blocks until it is stopped
echo "Starting, open at http://localhost:8080"
# Run open-webui
open-webui serve
Save this as a .sh file and run it with the
$ sh file.sh
command.

More Information:
https://github.com/open-webui/open-webui
September 25, 2024 at 3:45 pm #7624
thumbtak (Moderator)

Upgrade Open-WebUI:
Start Environment:
$ python3.11 -m venv myenv
$ source myenv/bin/activate
Update Pip:
$ pip install --upgrade pip
Update Open WebUI:
$ pip install --upgrade open-webui
Restart the Server:
$ open-webui serve
Note: Make sure it is not already running.
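A minimal sketch that rolls the steps above into one script, assuming the environment lives at ~/myenv as in the install post:
#!/bin/sh
# Activate the existing environment created during install
. ~/myenv/bin/activate || exit 1
# Upgrade pip itself, then Open WebUI
pip install --upgrade pip
pip install --upgrade open-webui
# Start the server again (make sure an old instance is not still running)
open-webui serve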
October 8, 2024 at 4:52 pm #7634
thumbtak (Moderator)

Retraining vs RAG vs Context: Your Local Data on LLMs!
November 26, 2024 at 11:11 am #7662
thumbtak (Moderator)

The commands were updated, as the old ones didn’t work.
November 26, 2024 at 11:55 am #7663
thumbtak (Moderator)

If you want to run Ollama in a terminal window, use the following command.
$ ollama run <model>
Example:
$ ollama run mistral
$ ollama run mistral:7b-instruct-v0.2-fp16
$ ollama run mistral-large
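Current Ollama releases also provide list and rm subcommands, which help when juggling models:
$ ollama list        # show the models already downloaded
$ ollama rm mistral  # delete a model you no longer need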
December 30, 2024 at 10:54 pm #7689
thumbtak (Moderator)

If you get the error message below when running the following command …
$ python3.11 -m venv myenv
$ python3.11 -m venv myenv
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.

    apt install python3.11-venv

You may need to use sudo with that command. After installing the python3-venv
package, recreate your virtual environment.

Failing command: ['/home/user/myenv/bin/python3.11', '-Im', 'ensurepip', '--upgrade', '--default-pip']
Do the following …
$ sudo add-apt-repository ppa:deadsnakes/ppa
$ sudo apt update
$ sudo apt upgrade
$ sudo apt install -f
$ sudo apt autoremove
$ sudo apt install python3.11
After this, you may have to delete your environment, or rename it to keep a backup. Your data should be preserved, minus the WebUI settings and all the chats, so keep that in mind. After you delete (or rename) the environment folder, go back and reinstall the WebUI.
Note: Some of the install steps will fail because your models are already set up. You can remove them to start fresh, or leave them as they are. This is your choice.
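A minimal sketch of that recovery, assuming your environment lives at ~/myenv as in the install post:
$ mv ~/myenv ~/myenv.bak          # keep the old environment as a backup
$ python3.11 -m venv ~/myenv      # recreate it with the freshly installed Python
$ source ~/myenv/bin/activate
$ pip install open-webui async_timeout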
January 29, 2025 at 11:02 pm #7699
thumbtak (Moderator)

How to install DeepSeek R1 on your computer and run it locally
$ sudo apt install python3.11-venv
$ python3.11 -m venv myenv
$ source myenv/bin/activate
$ pip install open-webui
$ pip install async_timeout
$ sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
$ ollama run deepseek-r1
$ open-webui serve
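The plain deepseek-r1 tag pulls Ollama’s default size for that model. If your hardware is limited, smaller distilled variants are listed in the Ollama library; assuming the 7b tag is still published, for example:
$ ollama run deepseek-r1:7b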
June 17, 2025 at 7:45 pm #8078
thumbtak (Moderator)

How to Get Shell GPT (sgpt) Working with Local Ollama Models on Kali Linux
Hey everyone!
I wanted to share a comprehensive guide on how I got sgpt (Shell GPT) to work with local Ollama models on my Kali Linux setup. This is super useful for privacy, control, and avoiding those recurring cloud API costs. It was a bit of a journey, but here are all the steps that finally got it working for me!
Goal
To use sgpt from the command line to interact with Large Language Models (LLMs) hosted locally via Ollama.
I. Initial Setup: Kali Linux, Ollama, and shell-gpt Installation
- Ensure Kali Linux is installed and updated: This is your base operating system. (Assumed you have this ready!)
- Install Ollama: This command downloads and installs the Ollama server, which is essential for running LLMs locally.
curl -fsSL https://ollama.com/install.sh | sh
- Download your desired LLM(s) using Ollama: This pulls the model files onto your system. I used mistral as an example, but you can choose others from Ollama’s library.
ollama pull mistral # Or 'ollama pull llama3', 'ollama pull codellama', etc.
- Verify Ollama is running (a quick API-level check also appears after this list):
systemctl status ollama
(Expected output will show Active: active (running))
- Install shell-gpt with LiteLLM support: LiteLLM is crucial because it enables sgpt to communicate with various LLM providers, including Ollama’s local API.
sudo apt update
sudo apt install pipx
pipx ensurepath
pipx install "shell-gpt[litellm]"
- Initial sgpt run (and dummy API key entry): When you first run sgpt after installation, it might prompt for an OpenAI API key. I entered a dummy string (e.g., dummykey123, potato) and pressed Enter to bypass this. This allows sgpt to create its initial configuration file.
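As a quick API-level check (see the verification step above), Ollama's REST API listens on port 11434, and its /api/tags endpoint returns the models you have downloaded:
curl http://localhost:11434/api/tags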
II. Critical Configuration Adjustments for Ollama Integration
This was the trickiest part! It involves precisely editing sgpt’s configuration file (~/.config/shell_gpt/.sgptrc) to direct it to Ollama.
- Open the shell-gpt configuration file using sudo nano: Using sudo nano is vital to ensure you have the necessary permissions to save changes, especially if the file was previously owned by root.
sudo nano ~/.config/shell_gpt/.sgptrc
(Enter your password when prompted.)
(Enter your password when prompted.) - Make these precise edits in the
nano
editor:DEFAULT_MODEL
: Set this to your local Ollama model.
DEFAULT_MODEL=ollama/mistral # Make sure 'mistral' matches your downloaded model
OPENAI_USE_FUNCTIONS
: Disable OpenAI-specific function calls as Ollama doesn’t use them in the same way.
OPENAI_USE_FUNCTIONS=false
USE_LITELLM
: Enable LiteLLM as the backend. This is essential for Ollama compatibility.
USE_LITELLM=true
LITELLM_API_BASE
: Explicitly tell LiteLLM where your local Ollama server is running (default ishttp://localhost:11434
).
LITELLM_API_BASE=http://localhost:11434
OPENAI_API_KEY
: Keep your dummy key here.sgpt
still expects a value in this field, but LiteLLM will ignore it when routing to Ollama.
OPENAI_API_KEY=dummykey123 # Or whatever dummy key you used
- CRITICAL: REMOVE / DELETE THIS LINE: This was the most persistent and problematic line that kept causing the OpenAI
AuthenticationError
. Ensure it is COMPLETELY GONE from the file. Do not just comment it out for now; delete it entirely if it’s present.
# API_BASE_URL=default
(If you found this line, delete it. If it was never there, great!)
- Save the file in nano:
  - Press Ctrl+O (Write Out)
  - Press Enter to confirm the filename
  - Press Ctrl+X (Exit)
- Verify the changes (crucial for troubleshooting): Immediately after saving, I used cat to confirm the file content from the terminal. This helped ensure the edits actually stuck.
cat ~/.config/shell_gpt/.sgptrc
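For reference, a consolidated sketch of what the finished file looked like for me (the model name and dummy key are placeholders; sgpt may keep other defaults of its own in this file, which you can leave alone):
DEFAULT_MODEL=ollama/mistral
OPENAI_USE_FUNCTIONS=false
USE_LITELLM=true
LITELLM_API_BASE=http://localhost:11434
OPENAI_API_KEY=dummykey123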
III. Resolving the PermissionError
The final hurdle was a PermissionError when sgpt tried to run. This happened because sudo nano had set root as the owner of the config file, preventing my regular user from writing to it.
- Change the ownership of the config file and its directory back to your user (USER in my case):
sudo chown USER:USER /home/USER/.config/shell_gpt/.sgptrc
sudo chown USER:USER /home/USER/.config/shell_gpt/
(Replace USER with your actual Kali username if it’s different.)
with your actual Kali username if it’s different). - Verify ownership (optional):
ls -l /home/USER/.config/shell_gpt/.sgptrc
ls -ld /home/USER/.config/shell_gpt/
(Expected output for owner and group:yourusername yourusername
)
IV. Final Test
- Run sgpt with a query:
sgpt "Tell me a joke about cybersecurity."
This should now work, with sgpt communicating successfully with your local Ollama model!
Hopefully, this detailed guide helps others facing similar issues! It’s rewarding to get a powerful local AI assistant working right in your Kali terminal.