ETC5513: Collaborative and Reproducible Practices
Workshop 11
Workshop Goals
By the end of this workshop, you’ll:
- Have Docker running on your own machine
- Run RStudio Server in a container
- Launch your own local chatbot using ollama and open-webui
- Understand how to use Docker Compose to manage complex environments
Part 1: Set up Docker
1. Install Docker Desktop
- Go to https://www.docker.com/products/docker-desktop
- Download Docker Desktop for your operating system (Mac, Windows, or Linux)
- Follow the installation instructions
- Launch Docker Desktop
To test your setup, run this in the terminal:
docker run hello-world
If you see a “Hello from Docker!” message, you’re good to go!
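If you would like a few more quick checks that everything is in place, the optional commands below confirm that both the Docker engine and the Compose plugin bundled with Docker Desktop are available:

# Check the Docker and Compose versions that were installed
docker --version
docker compose version
# List running containers (the list will probably be empty for now)
docker ps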
Part 2: Run RStudio Server with Docker Compose
You’ll use a Docker Compose file to spin up a full RStudio Server container.
- Create a new folder for this project and add this file as docker-compose.yml:
services:
  rstudio:
    image: rocker/rstudio
    ports:
      - "8787:8787"
    environment:
      - PASSWORD=rstudio
    volumes:
      - .:/home/rstudio
    container_name: rstudio-docker
- From the terminal in that folder, run:
docker compose up -d
- Open your browser and go to:
http://localhost:8787
- Username: rstudio
- Password: rstudio
🧪 Try making a small script or loading a package!
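If the login page does not load, you can check on the container from the same folder; this is an optional sketch using standard Compose commands:

# Show the status of the services defined in docker-compose.yml
docker compose ps
# Follow the logs of the rstudio service (press Ctrl+C to stop following)
docker compose logs -f rstudio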
To stop the container:
docker compose down
Part 3: Run a Local Chatbot with Open WebUI + Ollama (Bundled)
In this section, we will run a local chatbot using ollama and open-webui.
1. Create a new folder and, inside it, add this docker-compose.yml:
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:ollama
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
volumes:
  open-webui-data:
This setup includes Ollama already inside the container and will download models as needed.
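Chats and settings are saved in the named volume open-webui-data, so they persist after the container is stopped. If you are curious where Docker keeps that data, the optional sketch below inspects the volume; the exact volume name is prefixed with your project folder name, so the <project> placeholder is only an illustration:

# List Docker volumes and look for one ending in open-webui-data
docker volume ls
# Inspect it to see where the data lives on disk (replace <project> with your folder name)
docker volume inspect <project>_open-webui-data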
2. Run the chatbot:
From the terminal in that folder:
docker compose up -d
Once it finishes setting up, visit:
http://localhost:3000
Set up a local account and select the llama3.2 model. This will take some time to download!
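If you prefer the command line, you can also pull models without the web interface. This optional sketch assumes the bundled image exposes the ollama CLI inside the container; run it from the folder containing your docker-compose.yml:

# Pull a model by running ollama inside the open-webui service container
docker compose exec open-webui ollama pull llama3.2
# List the models that have been downloaded so far
docker compose exec open-webui ollama list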
3. Try it out!
- Ask the model some questions.
- Open multiple tabs to simulate multiple sessions.
- Try using larger models like llava or codellama if you’re curious (and have RAM to spare); see the sketch below for a way to keep an eye on memory use.
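Larger models can use a lot of memory. One simple way to watch this while you chat is Docker’s built-in resource monitor:

# Live CPU and memory usage for running containers (press Ctrl+C to exit)
docker stats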
To stop the server:
docker compose down
Note: This bundled version is great for local experimentation and includes both the LLM and a clean interface. It is entirely possible to run ollama separately from open-webui:
You do not need to do this step in the workshop.
1. Install Ollama
- Go to https://ollama.com and install for your OS
2. Run ollama in the background:
ollama serve
Then pull a model like:
ollama run llama2
Keep this running!
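To confirm the Ollama server really is running before you connect Open WebUI to it, you can query it directly; this optional check assumes Ollama is listening on its default port 11434:

# Ask the Ollama API which models are available locally
curl http://localhost:11434/api/tags
# Or use the CLI to list pulled models
ollama list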
3. Set up open-webui with Docker Compose
Create another folder and add this docker-compose.yml:
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
    environment:
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
volumes:
  open-webui-data:
Then run:
docker compose up -d
Visit: http://localhost:3000
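As before, you can confirm from the same folder that the container is up and watch its logs for any problems connecting to the Ollama server running on your host; this is an optional sketch:

# Confirm the open-webui service is running
docker compose ps
# Follow the logs and look for connection errors to Ollama
docker compose logs -f open-webui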