How to Run Deepseek R1 Locally
What is Deepseek R1?
Deepseek R1 is an open-source large language model (LLM) built for tasks like conversational AI, code generation, and natural language understanding. It can be used for:
Text generation – writing articles, summaries, and more.
Code assistance – generating and debugging code.
Natural language understanding – analyzing and interpreting human input.
Question-answering – providing context-aware responses.
By running Deepseek R1 locally, you eliminate dependency on cloud services and gain full control over your workflow.
Now, let’s explore how to set up Deepseek R1 on your machine using Ollama.
How to set up Deepseek R1 locally using Ollama
Before we begin with the installation process, let’s first understand what Ollama is and why it is the preferred choice for running Deepseek R1 locally.
What is Ollama?
Ollama is a lightweight tool designed to simplify the process of running AI models locally. It offers:
Quick setup – It requires minimal installation steps. You can have your AI model up and running in no time.
Optimized resource usage – It efficiently manages memory, ensuring smooth performance.
Local inference – No internet connection is needed after setup.
Now that we understand what Ollama is, let’s explore why it’s beneficial to use it to run Deepseek R1.
Why use Ollama?
Running Deepseek R1 with Ollama provides several advantages:
Privacy – Your data stays on your device.
Performance – Faster response times without cloud delays.
Customization – Ability to tweak model behavior for specific tasks.
With these benefits in mind, let’s move on to the installation process.
How to install Ollama
Follow these steps to install Ollama on your system:
macOS
Open Terminal and run:
brew install ollama
If Homebrew isn’t installed, visit brew.sh and follow the setup instructions.
Windows & Linux
Download Ollama from the official website (ollama.com).
Follow the installation guide for your operating system.
Alternatively, Linux users can install it via Terminal:
curl -fsSL https://ollama.com/install.sh | sh
With Ollama successfully installed, let’s move on to using Deepseek R1 on your local machine.
Running Deepseek R1 on Ollama
Once Ollama is installed, follow these steps to set up Deepseek R1 locally:
Step 1: Download the Deepseek R1 model
To begin using Deepseek R1, you first need to download the model. Run the following command in the terminal:
ollama pull deepseek-r1
For a smaller download, specify a size tag (here, the 1.5-billion-parameter variant):
ollama pull deepseek-r1:1.5b

After downloading the model, you’re ready to start using it.
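To confirm the download, `ollama list` prints a table of the models on your machine. The helper below parses that table into a list of tags; this is a minimal sketch, and the sample output is illustrative only (your IDs, sizes, and dates will differ):

```python
def downloaded_models(list_output):
    """Parse the table printed by `ollama list` into model tags.

    Assumes the first whitespace-separated column of each data row is
    the model name (e.g. "deepseek-r1:1.5b"); the header row is skipped.
    """
    rows = list_output.strip().splitlines()
    return [row.split()[0] for row in rows[1:] if row.strip()]

# Illustrative `ollama list` output (your IDs and sizes will differ):
sample = (
    "NAME              ID            SIZE    MODIFIED\n"
    "deepseek-r1:1.5b  a1b2c3d4e5f6  1.1 GB  2 minutes ago\n"
)
print(downloaded_models(sample))  # ['deepseek-r1:1.5b']
```

In practice you would feed it the real output, for example `subprocess.run(["ollama", "list"], capture_output=True, text=True).stdout`.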
Step 2: Start the model
Now that you have the model downloaded, you need to start the Ollama server to run Deepseek R1. By default, it listens on localhost port 11434. Use the following command:
ollama serve
Then, in a separate terminal window, run Deepseek R1:
ollama run deepseek-r1
To use a specific version:
ollama run deepseek-r1:1.5b
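Ollama’s server answers a plain HTTP GET at its root, so you can check that it is reachable before sending prompts. This is a small sketch using only the Python standard library; adjust the URL if you changed the host or port:

```python
import urllib.error
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

def server_is_up(url=OLLAMA_URL):
    """Return True if an HTTP server answers at `url` with status 200."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(server_is_up())  # True once `ollama serve` is running
```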
Step 3: Interact with Deepseek R1
With the model running, you can now interact with it in the terminal. Try entering queries like the following:
ollama run deepseek-r1 "What is a class in C++?"

Experimenting with different prompts will help you understand the model’s strengths and how it can best serve your needs.
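Beyond the interactive terminal, the running server exposes an HTTP API that you can call from scripts, which is handy for automating prompts. The sketch below targets Ollama’s /api/generate endpoint with streaming disabled so a single JSON reply comes back; `ask` is a hypothetical helper name, and it assumes the server is up and the model has been pulled:

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint.

    "stream": False requests one complete JSON reply instead of a
    stream of partial chunks.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt, host="http://localhost:11434"):
    """Send `prompt` to a locally running Ollama server; return the reply text."""
    body = json.dumps(build_generate_request(model, prompt)).encode()
    req = urllib.request.Request(
        host + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires `ollama serve` running and the model pulled:
# print(ask("deepseek-r1", "What is a class in C++?"))
```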
Conclusion
Running Deepseek R1 locally using Ollama offers a powerful and private AI solution. By following this guide, you can install, configure, and interact with Deepseek R1 seamlessly on your local machine. Whether for text generation, coding, or knowledge retrieval, Deepseek R1 provides an efficient AI experience without relying on cloud-based services.
If you want to deepen your understanding of AI models and their applications, check out Codecademy’s Generative AI for Everyone course to expand your knowledge and enhance your AI development skills.
The Codecademy Team, composed of experienced educators and tech experts, is dedicated to making tech skills accessible to all. We empower learners worldwide with expert-reviewed content that develops and enhances the technical skills needed to advance and succeed in their careers.