Get started with GPU acceleration for ML in WSL

Machine learning (ML) is becoming a key part of many development workflows. Whether you're a data scientist, an ML engineer, or just starting your learning journey with ML, the Windows Subsystem for Linux (WSL) offers a great environment to run the most common and popular GPU-accelerated ML tools.

There are several ways to set up these tools. For example, NVIDIA CUDA in WSL, TensorFlow-DirectML, and PyTorch-DirectML each offer a different way to use your GPU for ML with WSL. To learn more about the reasons for choosing one versus another, see GPU-accelerated ML training.

This guide will show how to set up:

  • NVIDIA CUDA, if you have an NVIDIA graphics card, and run a sample ML framework container
  • TensorFlow-DirectML and PyTorch-DirectML on your AMD, Intel, or NVIDIA graphics card


Setting up NVIDIA CUDA with Docker

  1. Download and install the latest driver for your NVIDIA GPU

  2. Install Docker Desktop, or install the Docker Engine directly in WSL by running the following command:

    curl https://get.docker.com | sh
  3. If you installed the Docker Engine directly, install the NVIDIA Container Toolkit by following the steps below.

    Set up the stable repository for the NVIDIA Container Toolkit by running the following commands:

    distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

    Install the NVIDIA runtime packages and dependencies by running the commands:

    sudo apt-get update
    sudo apt-get install -y nvidia-docker2
  4. Run a machine learning framework container and sample.

    To run a machine learning framework container and start using your GPU with this NVIDIA NGC TensorFlow container, enter the command:

    docker run --gpus all -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 nvcr.io/nvidia/tensorflow:20.03-tf2-py3

    TensorFlow with CUDA running inside a Docker container

    You can run a pre-trained model sample that is built into this container by running the commands:

    cd nvidia-examples/cnn/
    python resnet.py --batch_size=64

    TensorFlow sample model training within Docker container
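Before kicking off training, it can help to confirm that TensorFlow inside the container actually sees your GPU. This check isn't part of the original guide; it's a minimal sketch that assumes TensorFlow is importable (as it is inside the NGC container) and degrades to an empty device list elsewhere:

```python
# Sketch: list the GPU devices TensorFlow can see. Inside the NGC container
# this should print at least one GPU; the guard keeps the snippet runnable
# in environments where TensorFlow isn't installed.
try:
    import tensorflow as tf
    gpus = tf.config.list_physical_devices("GPU")  # TF 2.1+ API
except ImportError:
    gpus = []  # TensorFlow not installed outside the container
print(gpus)
```

If the list is empty inside the container, re-check the driver install from step 1 and that the container was started with `--gpus all`.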

Additional ways to set up and use NVIDIA CUDA can be found in the NVIDIA CUDA on WSL User Guide.

Setting up TensorFlow-DirectML or PyTorch-DirectML

  1. Download and install the latest driver from your GPU vendor's website: AMD, Intel, or NVIDIA.

  2. Set up a Python environment.

    We recommend setting up a virtual Python environment. There are many tools you can use to set up a virtual Python environment; for these instructions, we'll use Anaconda's Miniconda.

    conda create --name directml python=3.7 -y
    conda activate directml
  3. Install the machine learning framework backed by DirectML of your choice.

    For TensorFlow with DirectML:

    pip install tensorflow-directml

    For PyTorch with DirectML:

    sudo apt install libblas3 libomp5 liblapack3
    pip install pytorch-directml
  4. Run a quick addition sample in an interactive Python session for TensorFlow-DirectML or PyTorch-DirectML to make sure everything is working.

If you have questions or run into issues, visit the DirectML repo on GitHub.

Additional Resources