{ "cells": [ { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "'''\n", "PyTorch and NumPy will be the default frameworks which we will use for the\n", "project and the exercises.\n", "\n", "If you do not have any experience with them, we advise you to familiarize\n", "yourself ahead of the project. In this notebook we provide high level\n", "examples of how PyTorch tensors and NumPy arrays work.\n", "\n", "For further tutorials, we refer you to:\n", "https://pytorch.org/tutorials/\n", "https://numpy.org/doc/stable/\n", "'''\n", "import torch\n", "import numpy as np" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a 1D tensor of dimension 3, filled with ones. The standard type of\n", "# the tensor (and in turn the elements stored in it) is torch.float or\n", "# torch.float32. This can be checked by inspecting x.dtype.\n", "x = torch.ones(3)\n", "\n", "# Create a 1D tensor of dimension 3, filled with random numbers drawn from\n", "# the standard normal distribution, N(0, 1).\n", "rnd = torch.randn(3)\n", "print(x, x.dtype)\n", "print(rnd)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Create a NumPy array from a Python list. The type of the array is inferred\n", "# from the elements provided in its construction, int64 in this case.\n", "np_vector = np.array([1, 2, 3])\n", "\n", "# Create a PyTorch tensor from a NumPy array and explicitly convert it to a\n", "# float type (otherwise, the tensor takes the type of the array).\n", "y = torch.from_numpy(np_vector).float()" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# requires_grad is a tensor property which indicates whether to automatically\n", "# record the operations on the corresponding tensor. This is important e.g. for\n", "# backpropagation and for obtaining derivatives w.r.t. particular variables.\n", "\n", "# When creating new tensors (and they do not result from the computation of\n", "# other tensors), requires_grad is typically False. Here we explicitly enable it\n", "# (which will allow us to compute gradients for these tensors later).\n", "\n", "x.requires_grad_(True)\n", "y.requires_grad_(True)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Multiplication, summation and most of the standard arithmetic operations are\n", "# usually performed element-wise.\n", "z1 = x * rnd\n", "z2 = z1 + torch.log(y) + x\n", "\n", "# z1 and z2 have requires_grad enabled because x and (or) y have it." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "print(z1, z2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Compute the sum of all elements in the tensor.\n", "sum_z2 = torch.sum(z2)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# PyTorch (and TensorFlow as well) are organized around building (and representing)\n", "# computations (such as neural networks) in the form of computation graphs. Running\n", "# the backward() method (typically applied on a single scalar, e.g. representing a\n", "# loss function) makes PyTorch compute the gradients w.r.t. the graph leaves.\n", "\n", "sum_z2.backward()\n", "\n", "# The computed gradients can be accessed in the grad attribute. Future calls to\n", "# backward() will accumulate (add) gradients into it. If we don't want this to\n", "# happen, we can zero them beforehand.\n", "print(x.grad, y.grad)\n", "\n", "# Running backward() twice without proper care between the runs can sometimes be\n", "# problematic." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Return a (deep) copy of the tensor. The copy has the same size and data type as\n", "# the original but lives in its own memory space.\n", "tmp = x.clone()\n", "\n", "# norm() computes the norm of the tensor. The default is set to the Frobenius norm\n", "# by the PyTorch API, but for 1D tensors, it is equivalent to the 2-norm.\n", "while tmp.norm() < 10:\n", " print(tmp)\n", " tmp *= 1.1 # again, element-wise" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 2 }