
Set up a Local AI like ChatGPT on your own machine!


  • #7623
    thumbtak
    Moderator

    Install on Linux

    $ sudo apt install python3.11-venv   # Python 3.11 virtual-environment support
    
    $ python3.11 -m venv myenv           # create a virtual environment named myenv
    
    $ source myenv/bin/activate          # activate it
    
    $ pip install open-webui             # install Open WebUI inside the environment
    
    $ pip install async_timeout          # dependency Open WebUI needs
    
    $ sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
    $ sudo apt install snapd             # snap package manager, used to install Ollama
    
    $ sudo snap install ollama           # install the Ollama model server
    
    $ ollama pull llama3.1:latest        # download the Llama 3.1 model
    
    $ open-webui serve                   # start the web UI

    http://localhost:8080
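
    If the page does not load, you can first check that the Ollama API is reachable (a quick sanity check; 11434 is Ollama’s default port, and this endpoint lists the models you have pulled):

    $ curl http://localhost:11434/api/tags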

    Run on Linux (after the terminal session from the install above has been closed):

    $ cd ~/myenv
    
    $ source bin/activate
    
    $ open-webui serve

    http://localhost:8080

    Allow others on your network to use it by sharing this address (replace your-ip with this machine’s LAN IP):

    http://your-ip:8080
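
    You can find that IP with the following command (on Debian-based systems it prints the machine’s addresses):

    $ hostname -I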

    Script that can be run each time you want to start the Web UI:

    #!/bin/sh
    
    # Navigate to the directory that contains your virtual environment
    cd ~/myenv || exit 1 # if this fails, exit the script
    
    # Activate the virtual environment
    if [ -f "bin/activate" ]; then
        . bin/activate # use '.' (dot) instead of 'source' in sh
    else
        echo "Virtual environment not found. Exiting."
        exit 1
    fi
    
    # Tell the user where the UI will be, then start the server
    # (the echo comes first because open-webui serve blocks until it exits)
    echo "Activated, open at http://localhost:8080"
    open-webui serve

    Save it as an .sh file and run it with the $ sh file.sh command.
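
    Alternatively, make the script executable once and start it directly (assuming you saved it as start-webui.sh; the filename is just an example):

    $ chmod +x start-webui.sh
    
    $ ./start-webui.sh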

    More Information:

    https://github.com/open-webui/open-webui

    https://ollama.com/

    #7624
    thumbtak
    Moderator

    Upgrade Open-WebUI:

    Start Environment:

    $ python3.11 -m venv myenv
    
    $ source myenv/bin/activate

    Update Pip:
    $ pip install --upgrade pip

    Update Open WebUI:
    $ pip install --upgrade open-webui

    Restart the Server:
    $ open-webui serve

    Note: Make sure the server is not already running before you restart it.
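
    To confirm which version you ended up on after the upgrade, run this inside the activated environment:

    $ pip show open-webui | grep Version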

    #7634
    thumbtak
    Moderator

    Retraining vs RAG vs Context: Your Local Data on LLMs!

    #7662
    thumbtak
    Moderator

    The commands above were updated, as the old ones didn’t work.

    #7663
    thumbtak
    Moderator

    If you want to run Ollama in a terminal window, use the following command, replacing model with the name of a model you have pulled.

    $ ollama run model

    Example:
    $ ollama run mistral

    $ ollama run mistral:7b-instruct-v0.2-fp16

    $ ollama run mistral-large
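
    To see which models you already have downloaded, or to delete one, Ollama also provides list and rm:

    $ ollama list        # show downloaded models and their sizes
    
    $ ollama rm mistral  # remove a model you no longer need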

    #7689
    thumbtak
    Moderator

    If you get the error message below when running the following command …
    $ python3.11 -m venv myenv

    $ python3.11 -m venv myenv
    The virtual environment was not created successfully because ensurepip is not
    available. On Debian/Ubuntu systems, you need to install the python3-venv
    package using the following command.
    
    apt install python3.11-venv
    
    You may need to use sudo with that command. After installing the python3-venv
    package, recreate your virtual environment.
    
    Failing command: ['/home/user/myenv/bin/python3.11', '-Im', 'ensurepip', '--upgrade', '--default-pip']

    Do the following …

    $ sudo add-apt-repository ppa:deadsnakes/ppa
    $ sudo apt update
    $ sudo apt upgrade
    $ sudo apt install -f
    $ sudo apt autoremove
    $ sudo apt install python3.11

    After this, you may have to delete your virtual environment, or rename it to keep it as a backup. Your data should be saved, minus the WebUI settings and all the chats, so keep this in mind. After you delete (or rename) the environment folder, go back and reinstall the WebUI as described above.

    Note: Some of the install steps will fail because your models are already set up. You can remove them to start fresh, or leave them as they are. This is your choice.
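
    A minimal sketch of the rename-and-reinstall steps described above (assuming your environment lives at ~/myenv):

    $ mv ~/myenv ~/myenv.bak          # keep the old environment as a backup
    
    $ python3.11 -m venv myenv        # recreate it
    
    $ source myenv/bin/activate
    
    $ pip install open-webui async_timeout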

    #7699
    thumbtak
    Moderator

    How to install DeepSeek R1 on your computer and run it locally

    $ sudo apt install python3.11-venv
    
    $ python3.11 -m venv myenv
    
    $ source myenv/bin/activate
    
    $ pip install open-webui
    
    $ pip install async_timeout
    
    $ sudo docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
    
    $ ollama run deepseek-r1
    
    $ open-webui serve
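
    To sanity-check the model outside the web UI, ollama run also accepts a one-shot prompt as an argument:

    $ ollama run deepseek-r1 "Say hello"
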
    #8078
    thumbtak
    Moderator

    How to Get Shell GPT (sgpt) Working with Local Ollama Models on Kali Linux


    Hey everyone!

    I wanted to share a comprehensive guide on how I got sgpt (Shell GPT) to work with local Ollama models on my Kali Linux setup. This is super useful for privacy, control, and avoiding those recurring cloud API costs. It was a bit of a journey, but here are all the steps that finally got it working for me!


    Goal

    To use sgpt from the command line to interact with Large Language Models (LLMs) hosted locally via Ollama.


    I. Initial Setup: Kali Linux, Ollama, and shell-gpt Installation

    1. Ensure Kali Linux is installed and updated:
      • This is your base operating system.
      • (Assumed you have this ready!)
    2. Install Ollama: This command downloads and installs the Ollama server, which is essential for running LLMs locally.
      curl -fsSL https://ollama.com/install.sh | sh
    3. Download your desired LLM(s) using Ollama: This pulls the model files onto your system. I used mistral as an example, but you can choose others from Ollama’s library.
      ollama pull mistral # Or 'ollama pull llama3', 'ollama pull codellama', etc.

      • Verify Ollama is running:
        systemctl status ollama
        (Expected output will show Active: active (running))
    4. Install shell-gpt with LiteLLM support: LiteLLM is crucial because it enables sgpt to communicate with various LLM providers, including Ollama’s local API.
      sudo apt update
      sudo apt install pipx
      pipx ensurepath
      pipx install "shell-gpt[litellm]"

      • Initial sgpt run (and dummy API key entry): When you first run sgpt after installation, it might prompt for an OpenAI API key. I entered a dummy string (e.g., dummykey123, potato) and pressed Enter to bypass this. This allows sgpt to create its initial configuration file.
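
    Before moving on, you can confirm the install worked (you may need to open a new terminal after pipx ensurepath so the PATH update takes effect):

      sgpt --version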

    II. Critical Configuration Adjustments for Ollama Integration

    This was the trickiest part! It involves precisely editing sgpt’s configuration file (~/.config/shell_gpt/.sgptrc) to direct it to Ollama.

    1. Open the shell-gpt configuration file using sudo nano: Using sudo nano is vital to ensure you have the necessary permissions to save changes, especially if the file was previously owned by root.
      sudo nano ~/.config/shell_gpt/.sgptrc
      (Enter your password when prompted.)
    2. Make these precise edits in the nano editor:
      • DEFAULT_MODEL: Set this to your local Ollama model.
        DEFAULT_MODEL=ollama/mistral # Make sure 'mistral' matches your downloaded model
      • OPENAI_USE_FUNCTIONS: Disable OpenAI-specific function calls as Ollama doesn’t use them in the same way.
        OPENAI_USE_FUNCTIONS=false
      • USE_LITELLM: Enable LiteLLM as the backend. This is essential for Ollama compatibility.
        USE_LITELLM=true
      • LITELLM_API_BASE: Explicitly tell LiteLLM where your local Ollama server is running (default is http://localhost:11434).
        LITELLM_API_BASE=http://localhost:11434
      • OPENAI_API_KEY: Keep your dummy key here. sgpt still expects a value in this field, but LiteLLM will ignore it when routing to Ollama.
        OPENAI_API_KEY=dummykey123 # Or whatever dummy key you used
      • CRITICAL: REMOVE / DELETE THIS LINE: This was the most persistent and problematic line that kept causing the OpenAI AuthenticationError. Ensure it is COMPLETELY GONE from the file. Do not just comment it out; delete it entirely if it’s present.
        # API_BASE_URL=default
        (If you found this line, delete it. If it was never there, great!)
    3. Save the file in nano:
      • Press Ctrl+O (Write Out)
      • Press Enter to confirm the filename
      • Press Ctrl+X (Exit)
    4. Verify the changes (Crucial for troubleshooting): Immediately after saving, I used cat to confirm the file content from the terminal. This helped ensure the edits actually stuck.
      cat ~/.config/shell_gpt/.sgptrc
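
    For reference, after all the edits the relevant lines of my .sgptrc looked like this (the other defaults sgpt generated can stay as they are):

      DEFAULT_MODEL=ollama/mistral
      OPENAI_USE_FUNCTIONS=false
      USE_LITELLM=true
      LITELLM_API_BASE=http://localhost:11434
      OPENAI_API_KEY=dummykey123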

    III. Resolving the PermissionError

    The final hurdle was a PermissionError when sgpt tried to run. This happened because sudo nano had set root as the owner of the config file, preventing my regular user from writing to it.

    1. Change the ownership of the config file and its directory back to your user (USER in my case):
      sudo chown USER:USER /home/USER/.config/shell_gpt/.sgptrc
      sudo chown USER:USER /home/USER/.config/shell_gpt/
      (Replace USER with your actual Kali username if it’s different).
    2. Verify ownership (optional):
      ls -l /home/USER/.config/shell_gpt/.sgptrc
      ls -ld /home/USER/.config/shell_gpt/
      (Expected output for owner and group: yourusername yourusername)

    IV. Final Test

    1. Run sgpt with a query:
      sgpt "Tell me a joke about cybersecurity."
      This should now work, with sgpt communicating successfully with your local Ollama model!
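
    Once it works, sgpt’s shell mode is also worth a try; with -s it suggests a shell command for your prompt and asks before running it:

      sgpt -s "show the five largest files in this directory"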

    Hopefully, this detailed guide helps others facing similar issues! It’s rewarding to get a powerful local AI assistant working right in your Kali terminal.


