How to Run DeepSeek R1 Locally: Step-by-Step Process
You’ve probably heard the buzz about DeepSeek R1. It’s an open-source AI model being compared to top-tier proprietary models like OpenAI’s o1. More than that, it’s a reasoning model, meaning it uses a chain-of-thought process to logically analyze a problem and its own answers before arriving at a final response. This approach helps the model generate more accurate responses to complex questions that require serious reasoning skills. And because it’s open source, for the first time you can install a reasoning AI model on your PC and run it offline. No need to worry about privacy.
In this guide, I’ll show you how to set up DeepSeek R1 locally, even if this is your first time and you’re new to running AI models. The steps are the same for Mac, Windows, or Linux.

Models You Can Install and Prerequisites
DeepSeek R1 is available in different sizes. While running the largest 671B-parameter model isn’t feasible for most machines, smaller, distilled versions can be installed locally on your PC. Note that running AI models locally is resource-intensive, requiring storage space, RAM, and GPU power. Each model has specific hardware requirements, and here’s a quick overview:
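Model | Parameters | Download Size | Suggested RAM (approx.)
deepseek-r1:1.5b | 1.5B | ~1.1 GB | 8 GB
deepseek-r1:7b | 7B | ~4.7 GB | 16 GB
deepseek-r1:8b | 8B | ~4.9 GB | 16 GB
deepseek-r1:14b | 14B | ~9 GB | 32 GB
deepseek-r1:32b | 32B | ~20 GB | 32-64 GB
deepseek-r1:70b | 70B | ~43 GB | 64 GB or more

(Download sizes are taken from Ollama’s deepseek-r1 model listings; the RAM column is a rough rule of thumb, and a dedicated GPU helps significantly with the larger models.)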
The more RAM you have, the better. In fact, we recommend adding more RAM if possible to get better results.

Pro Tip: Just starting out and confused about which R1 model to install? We recommend trying the smallest 1.5B-parameter model (the first one in the table above). It’s lightweight and easy to test.
How to Install DeepSeek R1 Locally
There are different ways to install and run the DeepSeek models locally on your computer. We will share a few easy ones here.
Pro Tip: We recommend the Ollama and Chatbox methods if you are just starting out and want an easy way to install the DeepSeek R1 model, or any AI model for that matter.

Method 1: Installing R1 Using Ollama and Chatbox
This is the easiest way to get started, even for beginners.
Step 1: Install Ollama
- Go to the Ollama website and download the installer for your operating system (Mac, Windows, or Linux). Run the installer and follow the on-screen instructions.
- Once installed, open Terminal to confirm it’s working. Copy-paste the command below.
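ollama --version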

You should see a version number appear, which means Ollama is ready to use.
Step 2: Download and Run the DeepSeek R1 Model

1. Run the following command in Terminal, replacing [model size] with the size of the model you want to install: ollama run deepseek-r1:[model size]. For example, for the 1.5B parameter model, run: ollama run deepseek-r1:1.5b
2. Wait for the model to download. You’ll see progress in the terminal.
3. Once downloaded, the model will start running. You can interact with it directly from the Terminal. Moving forward, you can use the same Terminal command to chat with the DeepSeek R1 AI model.
Now, we will show how to install Chatbox for a user-friendly interface.
Step 3: Install Chatbox
1. Download Chatbox from its official website. Install and open the app. You’ll see a simple, user-friendly interface.
2. In Chatbox, go to Settings by clicking on the cog icon in the sidebar.
3. Set the Model Provider to Ollama.
4. Set the API host to:
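http://127.0.0.1:11434 (this is Ollama’s default local address)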
5. Select the DeepSeek R1 model (e.g., deepseek-r1:1.5b) from the dropdown menu.
6. Hit Save and start chatting.
Method 2: Using Ollama and Docker
This method is great if you want to run the model in a Docker container.
Step 1: Install Docker
1. Go to the Docker website and download Docker Desktop for your OS. Install Docker by following the on-screen instructions.
2. Open the app and sign in with your Docker account.
3. Type the command below in the Terminal to confirm Docker is running.
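docker --version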
You should see a version number appear, which means Docker is installed.
Step 2: Pull the Open WebUI Image
1. Open your terminal and type:
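docker pull ghcr.io/open-webui/open-webui:main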
2. This will download the necessary files for the interface.
Step 3: Run the Docker Container and Open WebUI
1. Start the Docker container with persistent data storage and mapped ports by running:
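docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

This is the standard command from Open WebUI’s documentation: the -v flag keeps your chat data across restarts, and --add-host lets the container reach Ollama running on your machine.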
2. Wait a few seconds for the container to start.
3. Open your browser and go to:
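http://localhost:3000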
4. Create an account as prompted, and you’ll be redirected to the main interface. At this point, there will be no models available for selection.
Step 4: Set Up Ollama and Integrate DeepSeek R1
1. Visit the Ollama website and download/install it.
2. In the terminal, download the desired DeepSeek R1 model by typing:
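ollama run deepseek-r1:8b

Swap 8b for whichever model size you want to use.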
3. Refresh the Open WebUI page in your browser. You’ll see the downloaded DeepSeek R1 model (e.g., deepseek-r1:8b) in the model list.
4. Select the model and start chatting.
Method 3: Using LM Studio
This method works great if you don’t want to use the Terminal to interact with DeepSeek locally. However, LM Studio currently only supports the 7B (Qwen) and 8B (Llama) distilled models. So if you want to install the 1.5B model, or larger models like the 32B, this method will not work for you.
1. Download LM Studio from its official website. Install and launch the application.
2. Click on the search icon in the sidebar and search for the DeepSeek R1 model (e.g., deepseek-r1-distill-llama-8b).
3. Click Download and wait for the process to complete.
4. Once downloaded, click on the search bar at the top of the LM Studio homepage.
5. Select the downloaded model and load it.
6. That’s it. Type your prompt in the text box and hit Enter. The model will generate a response.
Final Thoughts
Running DeepSeek R1 locally offers privacy, cost savings, and the flexibility to customize your AI setup.
If you’re new to this, start with Ollama and Chatbox for a simple setup. Docker is ideal for users familiar with containerization, while LM Studio works best for those avoiding terminal commands. Try a smaller model like the 8B or 1.5B to get started, and scale up as you go.
Ravi Teja KNTS
Tech writer with over 4 years of experience at TechWiser, where he has authored more than 700 articles on AI, Google apps, Chrome OS, Discord, and Android. His journey started with a passion for discussing technology and helping others in online forums, which naturally grew into a career in tech journalism. Ravi’s writing focuses on simplifying technology, making it accessible and jargon-free for readers. When he’s not breaking down the latest tech, he’s often immersed in a classic film – a true cinephile at heart.