-
Reproducibility - Some of the statements in the time range [03:49:25] - [03:50:45] seem to imply that the seed only affects the first random call:

import torch

torch.manual_seed(42)
tensor_run1_A = torch.rand(2, 3)
tensor_run1_B = torch.rand(2, 3)

torch.manual_seed(42)
tensor_run2_A = torch.rand(2, 3)
tensor_run2_B = torch.rand(2, 3)

print(tensor_run1_A, tensor_run2_A, tensor_run1_A == tensor_run2_A, sep='\n')
print(tensor_run1_B, tensor_run2_B, tensor_run1_B == tensor_run2_B, sep='\n')

For this example, it is necessary to call torch.manual_seed(42) only once before each run: both the A and B tensors match across runs, so the seed fixes the entire sequence of subsequent random calls, not just the first one.
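(A side note, my addition: the == comparison prints an elementwise boolean mask; for a single pass/fail check, torch.equal returns one boolean instead.)

print(torch.equal(tensor_run1_A, tensor_run2_A))  # True -- a single bool, not a mask
print(torch.equal(tensor_run1_B, tensor_run2_B))  # True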
-
Hello Daniel, it's Andrew from South Africa. In 03_pytorch_computer_vision, chapter 3.3 (Creating a training loop and training a model on batches of data), line 17 of the code, "y_pred = model_0(X)", is giving me an error. Here is the error description: "mat1 and mat2 shapes cannot be multiplied (32x784 and 748x10)"
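The error message itself points at a likely cause (an assumption, since the model definition isn't shown): the first Linear layer was probably created with in_features=748 instead of 784 (28 * 28 pixels per image). A hypothetical reconstruction:

import torch.nn as nn

# model_0's actual definition is not shown above -- this is a sketch of the likely fix
model_0 = nn.Sequential(
    nn.Flatten(),                                # (32, 1, 28, 28) -> (32, 784)
    nn.Linear(in_features=784, out_features=10)  # 748 here would reproduce the error
)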
-
I followed this video and my code is exactly the same, but when I try this chunk of code:

torch.manual_seed(42)

# An epoch is one loop through the data
epochs = 1

### Training loop ###
# Step 0: loop through the data
for epoch in range(epochs):
    # Step 1: forward pass
    y_pred = model_0(X_train)
    # Step 2: calculate the loss
    loss = loss_fn(y_pred, y_train)
    # Step 3: optimizer zero grad
    optimizer.zero_grad()
    # Step 4: perform backpropagation on the loss with respect to the parameters of the model
    loss.backward()
    # Step 5: step the optimizer (performs gradient descent)
    optimizer.step()

I get this error: RuntimeError: Boolean value of Tensor with more than one value is ambiguous. I am a new learner; kindly help me find the mistake and how to fix it.
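A common cause of this particular error (an assumption here, since the loss function setup isn't shown) is assigning the loss class instead of an instance; calling the class with two tensors then trips an internal truth-value check:

import torch.nn as nn

loss_fn = nn.L1Loss    # missing parentheses: loss_fn is the class itself, so
# loss_fn(y_pred, y_train) constructs L1Loss(size_average=y_pred, reduce=y_train),
# which raises "Boolean value of Tensor with more than one value is ambiguous"

loss_fn = nn.L1Loss()  # correct: instantiate first, then call with (y_pred, y_train)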
-
At 06:33:32 you state that if we get to a gradient value of zero, the loss function will also be zero. Are you sure it will be zero, rather than just being at a minimum? f'(c) == 0 does not generally imply that f(c) == 0.
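A minimal counterexample (my addition):

import torch

x = torch.tensor([1.0], requires_grad=True)
loss = (x - 1) ** 2 + 5  # the minimum is at x = 1, but the minimum value is 5
loss.backward()
print(x.grad)  # tensor([0.]) -- gradient is zero at the minimum
print(loss)    # tensor([5.], grad_fn=...) -- yet the loss is not zero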
-
In 03_pytorch_computer_vision_video.ipynb, "# Create a sinlge conv2d layer" has the word "single" misspelled.
-
The 'Accuracy' metric from torchmetrics now requires task and num_classes as compulsory arguments, i.e. Accuracy(task='multiclass', num_classes=num_classes). This was discussed at around 13:54:12.
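For example (with num_classes=10 as a placeholder value):

from torchmetrics import Accuracy

# In current torchmetrics versions (0.11+), the task argument is required
accuracy_fn = Accuracy(task="multiclass", num_classes=10)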
-
From the video 3:41:25 - NumPy arrays and Torch tensors can share memory. It looks like this:

Test 1: using array[0] = 9 to change the array updates both the array and the tensor.

import numpy as np
import torch

array = np.arange(1.0, 9.0)
tensor = torch.from_numpy(array)
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")

array[0] = 9

# Array location unchanged
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
Test 2: using array = array + 1 to change the array creates a new array and does not update the tensor.

array = np.arange(1.0, 9.0)
tensor = torch.from_numpy(array)
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")

array = array + 1

# Array location changed
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
Test 3: using array += 1 to change the array updates both the array and the tensor.

array = np.arange(1.0, 9.0)
tensor = torch.from_numpy(array)
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")

array += 1

# Array location unchanged
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
Going the other way, we can check that a NumPy array created from a tensor on the CPU shares memory with it:

# Tensor on the CPU stays linked with the NumPy array created from it
tensor = torch.randint(low=1, high=9, size=(1, 9), device=torch.device("cpu"))
array = tensor.numpy()
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")

tensor[0, 0] = 0

print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
Trying to create a NumPy array from a Torch tensor on the GPU (in my case, MPS) fails:

# Tensor on the GPU cannot be used to create a NumPy array
tensor = torch.randint(low=1, high=9, size=(1, 9), device=torch.device("mps"))
array = tensor.numpy()  # raises TypeError: can't convert mps:0 device type tensor to numpy
# The lines below never run
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
tensor[0, 0] = 0
print(f"torch.Tensor: {tensor}")
print(f"torch.Tensor location: {hex(id(tensor))}")
print(f"numpy.ndarray: {array}")
print(f"NumPy array location: {hex(id(array))}")
-
Hello hello!
The first ~25 hours of the course are published as a single YouTube video here: https://youtu.be/Z_ikDlimN6A
Such a long video means there will likely be a few errors.
This discussion collects notes on them.
If you'd like to add one, feel free to leave a comment with a timestamp (such as "3:48:19") and what you think the correction should be (text explanations and code examples are welcome).
Corrections
- torch.mm != torch.matmul at [02:40:00] - I say these two functions are aliases of one another, but this is not the case: torch.mm does not perform broadcasting. See more here: torch.mm != torch.matmul #391, thank you @AntonOfTheWoods. A short demonstration follows after this list.
- Tensors and NumPy - If you make a tensor out of an array and then change the array, the tensor will also change. I think that in your case (video time 3:38:50) the value of the tensor did not change because you changed the data type (from float64 to float32); that was the operation that created a new tensor in memory. From Rogelio Garcia.
- requires_grad=False by default - 5:09:14 requires_grad is False by default, not True, in the documentation. From Ameera Ali.
- Accumulating gradients - 6:29:12 I think it is accumulating the gradients because SGD is actually mini-batch gradient descent in PyTorch, and the weights are going to be updated after the entire batch instead of after each training sample. From Ameera Ali. See the gradient-accumulation sketch after this list.
- RuntimeError: DataLoader worker - 22:25:56 If you are running this video on your local machine and getting issues with multiple workers, try removing num_workers from train_dataloader_custom and test_dataloader_custom. See more here: Video 142. Turning custom datasets into DataLoaders: "RuntimeError: DataLoader worker" #107.
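A quick demonstration of the torch.mm vs torch.matmul broadcasting difference (my sketch):

import torch

a = torch.rand(2, 3, 4)  # batched: 2 matrices of shape (3, 4)
b = torch.rand(4, 5)

print(torch.matmul(a, b).shape)  # torch.Size([2, 3, 5]) -- matmul broadcasts over the batch dim
torch.mm(a, b)                   # RuntimeError -- torch.mm only accepts 2-D matrices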
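And a minimal sketch of gradient accumulation: calling backward() twice without zeroing adds the new gradients onto the old ones, which is why optimizer.zero_grad() is needed each step:

import torch

x = torch.tensor([2.0], requires_grad=True)

(x ** 2).backward()
print(x.grad)  # tensor([4.])

(x ** 2).backward()  # without zeroing, the new gradient is added to the old one
print(x.grad)  # tensor([8.])

x.grad.zero_()  # what optimizer.zero_grad() does for every parameter
print(x.grad)  # tensor([0.])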