Translated by ChatGPT.
I'm about to attend a summer school on deep learning applications in microscopic image processing.
Deep learning is a branch of machine learning in which most data can be represented as tensors (a vector is a first-order tensor) and most algorithms can be broken down into operations on those tensors, such as transformations represented by matrices (second-order tensors). This need for efficient tensor computation gave rise to libraries like PyTorch and TensorFlow, alongside others such as Keras and Caffe. In earlier versions, PyTorch and TensorFlow differed significantly, but over time they've converged in features, edged out most of the other competitors, and started to look increasingly similar. "They've become what they once hated." lol
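To make the tensor idea concrete, here's a tiny sketch in PyTorch (jumping ahead to after the installation below; the names are purely illustrative):
import torch

v = torch.tensor([1.0, 2.0, 3.0])   # a vector, i.e. a first-order tensor, shape (3,)
W = torch.rand(2, 3)                 # a matrix, i.e. a second-order tensor, shape (2, 3)
y = W @ v                            # a linear transformation as a matrix-vector product
print(v.ndim, W.ndim, y.shape)       # 1 2 torch.Size([2])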
I'm not sure which machine learning framework will be used in the course, so I've decided to install both PyTorch and TensorFlow (shrug). This is also a continuation of my previous Python tutorial.
Here’s my software and hardware setup: an Intel CPU with integrated graphics (can't run Civilization VI). Even though my system isn't officially supported on Linux, I’ve managed to get Fedora running smoothly after a few kernel updates. I'm using Python version 3.9.6, pip as my package manager, and VS Code as my editor.
Creating a Virtual Environment
I explained why to create a virtual environment in the previous post, so let's jump right in. I created two virtual environments, naming them torch and tf. In the command-line snippets, I've added a prompt (env)[me@mycomputer]$ to show which environment I'm in; remove this prompt when copying the commands.
[me@mycomputer]$ mkvirtualenv torch
(torch)[me@mycomputer]$
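As a reminder from the previous post (standard virtualenvwrapper usage, nothing new here), you can leave and re-enter an environment at any time:
(torch)[me@mycomputer]$ deactivate
[me@mycomputer]$ workon torch
(torch)[me@mycomputer]$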
Installation
On the PyTorch website, find the installation command for your setup: https://pytorch.org/get-started/locally/. In my case, I chose Stable > Linux > Pip > Python > CPU. Copy the generated command and paste it into the terminal:
(torch)[me@mycomputer]$ pip3 install torch==1.9.1+cpu torchvision==0.10.1+cpu torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
Wait for the installation messages to confirm success.
Verification and Deactivation
Verify the installation using the official guide:
import torch
x = torch.rand(5, 3)
print(x)
# tensor([[0.3799, 0.4494, 0.4296],
# [0.5800, 0.0180, 0.3110],
# [0.9847, 0.0125, 0.2648],
# [0.0296, 0.3142, 0.9266],
# [0.3192, 0.9645, 0.5545]])
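Since this is the CPU-only build, a quick extra sanity check (not part of the official guide) is to confirm the version string and that no GPU is visible:
import torch
print(torch.__version__)          # 1.9.1+cpu
print(torch.cuda.is_available())  # False, as expected for the CPU-only build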
Next, deactivate the environment to prepare for installing TensorFlow:
(torch)[me@mycomputer]$ deactivate
[me@mycomputer]$
What About TensorFlow?
Initially, I planned to cover both PyTorch and TensorFlow in this post, but TensorFlow proved a bit more challenging.
Direct Installation
Go to the TensorFlow website and click the "Install" button in the top navigation. Then, on the left sidebar, find the package/pip section; the installation command is just one line:
pip install --upgrade tensorflow
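For completeness: I ran this inside a fresh tf environment, created the same way as torch above (a sketch, assuming virtualenvwrapper as before):
[me@mycomputer]$ mkvirtualenv tf
(tf)[me@mycomputer]$ pip install --upgrade tensorflow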
However, this command didn’t work as expected. The installation completed without errors, but running a test produced numerous error messages.
The error log included Could not load dynamic library 'libcudart.so.11.0', suggesting that this command had installed the GPU build by mistake.
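Side note: if you only want to silence messages like this rather than switch builds (they're warnings, not fatal errors, when no GPU is present), one commonly suggested workaround, assuming a POSIX shell, is to raise TensorFlow's C++ log level before importing it:
(tf)[me@mycomputer]$ export TF_CPP_MIN_LOG_LEVEL=2   # hide INFO and WARNING messages
(tf)[me@mycomputer]$ python -c "import tensorflow as tf; print(tf.__version__)"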
Installing the CPU Version
I found the CPU version download link in the "Package Location" section of the page: https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow_cpu-2.6.0-cp39-cp39-manylinux2010_x86_64.whl. I deleted the virtual environment, created it again, and reinstalled TensorFlow:
(tf)[me@mycomputer]$ deactivate
[me@mycomputer]$ rmvirtualenv tf
[me@mycomputer]$ mkvirtualenv tf
(tf)[me@mycomputer]$ pip install --upgrade pip
(tf)[me@mycomputer]$ pip install https://storage.googleapis.com/tensorflow/linux/cpu/tensorflow_cpu-2.6.0-cp39-cp39-manylinux2010_x86_64.whl
After testing with the official command, I still got warnings:
(tf) [shixing@yoga-laptop ~]$ python -c "import tensorflow as tf;print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
20XX-XX-XX XX:XX:XX.XXXXXX: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
tf.Tensor(-1338.4773, shape=(), dtype=float32)
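Apart from that warning, the install itself works. To double-check that this really is the CPU build, you can also ask TensorFlow which GPUs it sees (the same oneDNN notice aside, the list should come back empty):
(tf)[me@mycomputer]$ python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
[]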
This Stack Overflow answer mentions that to avoid this warning, TensorFlow needs to be compiled from source. The official guide to building from source provides more details, but this is quite a hassle, so I’ll cover it in another post (to be continued).
See also: