Windows 10 with WSL 2 – Part 1: How to get started with GPU-accelerated models
In this article, I will explain how to enable GPU-accelerated models on Windows 10 with the Windows Subsystem for Linux 2 (WSL 2).
The article will be published in three parts: 1) What do I need to know before using GPU-accelerated models on my laptop; 2) The benefits of using WSL 2; and 3) In-depth install guide for WSL 2 in case you run into problems.
I mostly work with data science and with translating university papers into experimental algorithms. The groundwork is often tested on Linux only, because it is meant to run solely on Linux servers.
During the past five years, I have spent a significant amount of time debugging “Mac is not Linux” bugs (a list of common issues comes in part 2) and building Dockerized workarounds to get around them (and then fixing those Docker setups).
I think we should bring Linux back as the kernel we develop against locally. WSL 2 makes this possible.
What do I need to know before using GPU-accelerated models on my laptop? (Part 1)
The most important thing is that CUDA drivers are currently only available for Windows build 201xx, which you can only get by registering for the Microsoft Windows Insider Program and subscribing to the Dev Channel updates.
Read the end-user agreement carefully, as there are some important details about data collection for debugging, rolling back to stable Windows 10, and warranty liabilities.
First, follow these four simple steps:
1. Register with the Microsoft Windows Insider Program and update your machine to the 201xx build.
2. Run PowerShell as an administrator, then run `wsl --install`. If everything went as it should, you can now install any Linux distro you want from the Microsoft Store.
3. Install the Nvidia display driver with CUDA-on-WSL support on the Windows side (available through Nvidia's developer program), then install the CUDA toolkit inside your WSL 2 distro.
4. To test that everything works, build and run the BlackScholes sample that ships with the CUDA toolkit inside WSL 2: go to /usr/local/cuda/samples/4_Finance/BlackScholes, run make (you probably need to run it yourself), and then run ./BlackScholes
Now that you have verified that your hardware is capable of running GPU-accelerated models, there are some things you should know about TensorFlow. I have seen model training pipelines deployed to production on expensive GPU machines that did not actually use GPU acceleration, but fell back on the CPU instead.
When TensorFlow fails to initialize a GPU, it always falls back gracefully on the CPU, and some models appear to work just fine, only very slowly.
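To fail fast instead of accidentally training on the CPU, you can wrap the device check in a small guard at the top of a training script. A minimal sketch, where the require_gpu helper is my own naming, not a TensorFlow API:

```python
def require_gpu(physical_gpus):
    """Raise instead of letting a training job silently run on the CPU."""
    if not physical_gpus:
        raise RuntimeError(
            "No GPU visible to TensorFlow; check your CUDA/cuDNN setup "
            "instead of falling back on the CPU."
        )
    return len(physical_gpus)

# Usage in a training script (assumes TensorFlow is installed):
# import tensorflow as tf
# require_gpu(tf.config.experimental.list_physical_devices('GPU'))
```

Raising an exception this early turns the silent slowdown into a loud, cheap-to-debug failure.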
Take note of the 4 + 2 version numbers and use the right version of CUDA
There are 4 + 2 version numbers you should pay attention to. The first is the TensorFlow version in the requirements.txt of the model you are going to use. Then you need to make sure you are using the correct Python version for it.
Next, you need to check that you have the right CUDA version installed (CUDA is Nvidia's platform and driver stack for general-purpose GPU computing).
This should be quite easy, as CUDA drivers are highly backwards-compatible and bugs are rare. The final piece is cuDNN. cuDNN provides the GPU-accelerated primitives that most neural networks rely on, and this is where many people run into problems.
You should always test your setup with:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
And you should make sure the reported count is at least 1: if it prints `Num GPUs Available: 0`, you have fallen back on the graceful CPU mode.
Luckily, the TensorFlow project publishes a nice list of tried and true version combinations, including the +2 for building from source: the required GCC and Bazel versions. Building from source can enable some extra CPU acceleration, which you might not even need.
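To make the bookkeeping concrete, the 4 + 2 numbers can be sketched as a lookup table. The rows below are examples I have taken from TensorFlow's published tested-configurations list; double-check them against the current official table before relying on them:

```python
# TensorFlow version -> the versions it was tested against.
# Example rows; verify against TensorFlow's official tested-configurations table.
TESTED_COMBOS = {
    "2.4.0": {"python": "3.6-3.8", "cuda": "11.0", "cudnn": "8.0",
              "gcc": "7.3.1", "bazel": "3.1.0"},
    "2.3.0": {"python": "3.5-3.8", "cuda": "10.1", "cudnn": "7.6",
              "gcc": "7.3.1", "bazel": "3.1.0"},
    "1.15.0": {"python": "2.7, 3.3-3.7", "cuda": "10.0", "cudnn": "7.4",
               "gcc": "7.3.1", "bazel": "0.26.1"},
}

def required_cuda(tf_version):
    """Look up the CUDA version tested against a given TensorFlow release."""
    combo = TESTED_COMBOS.get(tf_version)
    if combo is None:
        raise KeyError(f"No tested combination recorded for TensorFlow {tf_version}")
    return combo["cuda"]
```

Pinning all six numbers in one place like this makes it obvious which pieces have to move together when you bump the TensorFlow version.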
All this might seem a bit too cumbersome, and that is exactly what it is. An alternative is to use Anaconda3:
wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh
sh ./Anaconda3-2020.02-Linux-x86_64.sh
I tend to avoid adding extra stacks to my development setup but, in this case, Conda makes things nice and easy. With Conda, you should be all right so long as you remember to build your Conda environments with the correct Python version for your TensorFlow version. Python 3.7 is often a safe bet for all current versions of TensorFlow.
In case you run into problems with cuDNN, there is one magic trick that seems to help a lot of people: enabling the GPU memory allow-growth setting.
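A minimal sketch of the allow-growth trick, assuming it refers to TensorFlow's GPU memory growth setting; note that the environment variable only takes effect if it is set before TensorFlow is imported:

```python
import os

# Must be set before `import tensorflow` runs, or it has no effect.
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# The equivalent in code, after importing TensorFlow (assumes a GPU is present):
# import tensorflow as tf
# for gpu in tf.config.experimental.list_physical_devices('GPU'):
#     tf.config.experimental.set_memory_growth(gpu, True)
```

Without allow-growth, TensorFlow grabs nearly all GPU memory up front, which is a common cause of cuDNN initialization failures.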
You should now be good to go. Happy GPU-accelerated development times!