From 4c1c0bc8b07350dceae38de19323cb60bdb702b1 Mon Sep 17 00:00:00 2001
From: Vadim Pushtaev <pushtaev.vm@gmail.com>
Date: Wed, 1 Mar 2023 21:10:48 +0200
Subject: [PATCH] intro-to-pytorch: part 3, fix documentation links

---
 .../Part 3 - Training Neural Networks (Exercises).ipynb       | 4 ++--
 .../Part 3 - Training Neural Networks (Solution).ipynb        | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb b/intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb
index 3e837eac5c..cdd6aa53ab 100644
--- a/intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb	
+++ b/intro-to-pytorch/Part 3 - Training Neural Networks (Exercises).ipynb	
@@ -64,7 +64,7 @@
     "\n",
     "Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.\n",
     "\n",
-    "Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),\n",
+    "Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss),\n",
     "\n",
     "> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.\n",
     ">\n",
@@ -153,7 +153,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).\n",
+    "In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).\n",
     "\n",
     ">**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss. Note that for `nn.LogSoftmax` and `F.log_softmax` you'll need to set the `dim` keyword argument appropriately. `dim=0` calculates softmax across the rows, so each column sums to 1, while `dim=1` calculates across the columns so each row sums to 1. Think about what you want the output to be and choose `dim` appropriately."
    ]
diff --git a/intro-to-pytorch/Part 3 - Training Neural Networks (Solution).ipynb b/intro-to-pytorch/Part 3 - Training Neural Networks (Solution).ipynb
index 20f6525171..a3dd00cbc1 100644
--- a/intro-to-pytorch/Part 3 - Training Neural Networks (Solution).ipynb	
+++ b/intro-to-pytorch/Part 3 - Training Neural Networks (Solution).ipynb	
@@ -64,7 +64,7 @@
     "\n",
     "Let's start by seeing how we calculate the loss with PyTorch. Through the `nn` module, PyTorch provides losses such as the cross-entropy loss (`nn.CrossEntropyLoss`). You'll usually see the loss assigned to `criterion`. As noted in the last part, with a classification problem such as MNIST, we're using the softmax function to predict class probabilities. With a softmax output, you want to use cross-entropy as the loss. To actually calculate the loss, you first define the criterion then pass in the output of your network and the correct labels.\n",
     "\n",
-    "Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss),\n",
+    "Something really important to note here. Looking at [the documentation for `nn.CrossEntropyLoss`](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss),\n",
     "\n",
     "> This criterion combines `nn.LogSoftmax()` and `nn.NLLLoss()` in one single class.\n",
     ">\n",
@@ -150,7 +150,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.LogSoftmax)). Then you can get the actual probabilites by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).\n",
+    "In my experience it's more convenient to build the model with a log-softmax output using `nn.LogSoftmax` or `F.log_softmax` ([documentation](https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html#torch.nn.LogSoftmax)). Then you can get the actual probabilities by taking the exponential `torch.exp(output)`. With a log-softmax output, you want to use the negative log likelihood loss, `nn.NLLLoss` ([documentation](https://pytorch.org/docs/stable/nn.html#torch.nn.NLLLoss)).\n",
     "\n",
     ">**Exercise:** Build a model that returns the log-softmax as the output and calculate the loss using the negative log likelihood loss."
    ]
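
For reference, a minimal sketch of the log-softmax plus `nn.NLLLoss` pattern that the relinked documentation describes, with `dim=1` so each row of the output sums to 1 after exponentiation. The layer sizes and the dummy batch are illustrative assumptions, not values taken from the notebooks.

```python
import torch
from torch import nn

# Hypothetical model: log-softmax over the class dimension (dim=1),
# so each row of the output holds log-probabilities over the 10 classes.
model = nn.Sequential(
    nn.Linear(784, 128),   # illustrative sizes for flattened 28x28 MNIST images
    nn.ReLU(),
    nn.Linear(128, 10),
    nn.LogSoftmax(dim=1),
)

criterion = nn.NLLLoss()             # expects log-probabilities as input

images = torch.randn(64, 784)        # dummy batch standing in for the MNIST loader
labels = torch.randint(0, 10, (64,))

logps = model(images)                # shape (64, 10), log-probabilities
loss = criterion(logps, labels)
probabilities = torch.exp(logps)     # recover the actual probabilities
```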