
Commit cc4b720: update
1 parent: a82fa09


70 files changed: +2687 -2449 lines

README.md (+29 -19)

@@ -13,7 +13,7 @@ Learn Deep Learning with PyTorch
 The book already explains in detail how to set up a Python environment with Anaconda and how to install PyTorch. If you use your own computer and it has an Nvidia GPU, you can happily enter the world of deep learning; if you have no Nvidia GPU, you will need a cloud computing platform to support the deep learning journey. [How to set up an aws computing platform](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/aws.md)
 
 
-**The course contents below differ from the book's table of contents because the material is being updated continuously; once everything is updated it will be rolled into the second edition of the book**
+**The course contents below differ from the book's table of contents because the material is being updated for the second edition, which is coming soon!**
 ## Course contents
 ### part1: Deep learning fundamentals
 - Chapter 2: PyTorch basics
@@ -28,6 +28,14 @@ Learn Deep Learning with PyTorch
   - [Multi-layer neural networks, Sequential and Module](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/nn-sequential-module.ipynb)
   - [Deep neural networks](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/deep-nn.ipynb)
   - [Parameter initialization methods](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/param_initialize.ipynb)
+
+  - Optimization algorithms
+    - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/sgd.ipynb)
+    - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/momentum.ipynb)
+    - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adagrad.ipynb)
+    - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/rmsprop.ipynb)
+    - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adadelta.ipynb)
+    - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adam.ipynb)
 
 - Chapter 4: Convolutional neural networks
   - [Convolution modules in PyTorch](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/basic_conv.ipynb)
@@ -42,42 +50,44 @@ Learn Deep Learning with PyTorch
   - [Learning rate decay](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/lr-decay.ipynb)
 
 - Chapter 5: Recurrent neural networks
-  - LSTM and GRU
-  - Time series analysis with RNNs
-  - Image classification with RNNs
-  - Word Embedding and N-Gram models
-  - Part-of-speech tagging with Seq-LSTM
+  - [Recurrent network modules: LSTM and GRU](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/pytorch-rnn.ipynb)
+  - [Image classification with RNNs](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/rnn-for-image.ipynb)
+  - Time series analysis with RNNs
+  - Applications in natural language processing:
+    - Word Embedding
+    - N-Gram models
+    - Part-of-speech tagging with Seq-LSTM
 
 - Chapter 6: Generative adversarial networks
   - Autoencoders
   - Variational autoencoders
   - Introduction to generative adversarial networks
   - Deep convolutional GANs (DCGANs)
 
-- Chapter 7: Advanced PyTorch
-  - [tensorboard visualization](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/tensorboard.ipynb)
-  - Optimization algorithms
-    - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/sgd.ipynb)
-    - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/momentum.ipynb)
-    - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adagrad.ipynb)
-    - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/rmsprop.ipynb)
-    - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adadelta.ipynb)
-    - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adam.ipynb)
-  - [Introduction to flexible data loading](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/data-io.ipynb)
+- Chapter 7: Deep reinforcement learning
+  - Introduction to deep reinforcement learning
+  - Policy gradient
+  - Actor-critic gradient
+  - Deep Q-networks
+
+- Chapter 8: Advanced PyTorch
+  - [tensorboard visualization](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter8_PyTorch-Advances/tensorboard.ipynb)
+
+  - [Introduction to flexible data loading](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter8_PyTorch-Advances/data-io.ipynb)
   - Introduction to autograd.function
   - Data parallelism and multiple GPUs
   - Distributed applications with PyTorch
   - Converting models to Caffe2 with ONNX
   - Writing C extensions for PyTorch
 
 ### part2: Applications of deep learning
-- Chapter 8: Computer vision
-  - [Fine-tuning: transfer learning by fine-tuning](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter7_Computer-Vision/fine-tune.ipynb)
+- Chapter 9: Computer vision
+  - [Fine-tuning: transfer learning by fine-tuning](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter9_Computer-Vision/fine-tune.ipynb)
   - Semantic segmentation: pixel-level classification with FCN
   - Neural Transfer: style transfer with convolutional networks
   - Deep Dream: exploring the world as convolutional networks see it
 
-- Chapter 9: Natural language processing
+- Chapter 10: Natural language processing
   - Text generation with char-rnn
   - Image Caption: generating image captions
   - Machine translation with seq2seq

chapter2_PyTorch-Basics/Tensor-and-Variable.ipynb (+2 -57)
@@ -11,8 +11,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Using PyTorch like NumPy\n",
-    "PyTorch is officially described as a library of tensors and dynamically built networks with strong GPU acceleration. Its main building block is the tensor, so we can use PyTorch like NumPy: many PyTorch operations are similar to NumPy's, but because they can run on the GPU they are many times faster than NumPy."
+    "## Using PyTorch like NumPy"
    ]
   },
   {
@@ -58,27 +57,13 @@
    "pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Both conversion methods above map the NumPy ndarray's data type directly to the corresponding PyTorch Tensor data type; for Tensor data types, see the [documentation](http://pytorch.org/docs/0.3.0/tensors.html)"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
    "\n"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We can also convert a PyTorch tensor to a NumPy ndarray with the method below"
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 4,
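The two conversion directions discussed in this hunk can be sketched as follows; a minimal standalone sketch (variable names follow the notebook, and the dtype behavior described in the removed note is assumed to hold in current PyTorch):

```python
import numpy as np
import torch

numpy_tensor = np.random.randn(10, 20)  # a float64 ndarray

# two ways from ndarray to tensor, as in the notebook
pytorch_tensor1 = torch.Tensor(numpy_tensor)      # copies into the default float32 tensor type
pytorch_tensor2 = torch.from_numpy(numpy_tensor)  # shares memory, keeps the ndarray's float64 dtype

print(pytorch_tensor1.dtype)  # torch.float32
print(pytorch_tensor2.dtype)  # torch.float64
```

Note the design difference: `torch.Tensor(...)` always copies into the default tensor type, while `torch.from_numpy(...)` preserves the dtype and shares the underlying buffer with the ndarray.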
@@ -94,29 +79,13 @@
    "numpy_array = pytorch_tensor1.cpu().numpy()"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Note that a Tensor on the GPU cannot be converted directly to a NumPy ndarray; you must first move it to the CPU with `.cpu()`"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
    "\n"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Accelerating PyTorch Tensors with the GPU\n",
-    "\n",
-    "We can put a Tensor on the GPU in either of the following two ways"
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": null,
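The reverse direction, covered by the removed note above, requires the tensor to be on the CPU before calling `.numpy()`; a small sketch (the variable name is illustrative):

```python
import torch

pytorch_tensor1 = torch.randn(10, 20)

# .cpu() is a no-op for a CPU tensor and moves a GPU tensor to the CPU,
# so this line is safe wherever the tensor lives
numpy_array = pytorch_tensor1.cpu().numpy()

print(type(numpy_array))  # <class 'numpy.ndarray'>
```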
@@ -134,15 +103,6 @@
    "gpu_tensor = torch.randn(10, 20).cuda(1) # put the tensor on the second GPU"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "The first way of putting a tensor on the GPU converts its data type to the type you define, while the second way puts the tensor on the GPU directly and keeps its type unchanged\n",
-    "\n",
-    "We recommend fixing the data type when defining a tensor, and then using the second way to put it on the GPU"
-   ]
-  },
   {
    "cell_type": "markdown",
    "metadata": {},
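The two GPU-placement routes compared in the removed note can be sketched as below; this is a hedged sketch that guards on CUDA availability, since the notebook's `.cuda(1)` call assumes a machine with at least two GPUs:

```python
import torch

x = torch.randn(10, 20)  # dtype fixed up front (float32), as the notebook recommends

if torch.cuda.is_available():
    # first way: converts to whatever tensor type you name
    gpu_tensor1 = x.type(torch.cuda.FloatTensor)
    # second way: moves to the GPU keeping the dtype unchanged (the recommended route)
    gpu_tensor2 = x.cuda()
    back = gpu_tensor2.cpu().numpy()  # must return to the CPU before .numpy()
else:
    print("no CUDA device available; tensor stays on the CPU")
```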
@@ -715,8 +675,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Variable\n",
-    "The tensor is a fine component of PyTorch, but it is not nearly enough for building neural networks; we need tensors that can build a computation graph, and that is the Variable. A Variable wraps a tensor and supports the same operations, but every Variable has three attributes: the tensor itself in `.data`, the corresponding gradient in `.grad`, and how the Variable was produced in `.grad_fn`"
+    "## Variable"
    ]
   },
   {
@@ -782,13 +741,6 @@
    "print(z.grad_fn)"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Above we printed the tensor value in z, and `grad_fn` tells us that it was produced by a Sum operation"
-   ]
-  },
   {
    "cell_type": "code",
    "execution_count": 31,
@@ -836,13 +788,6 @@
    "print(x.grad)\n",
    "print(y.grad)"
    ]
-  },
-  {
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "Through `.grad` we obtained the gradients of x and y, using PyTorch's automatic differentiation, which is very convenient; the next section covers autograd in detail."
-   ]
   }
  ],
  "metadata": {
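The `.data` / `.grad` / `.grad_fn` workflow that these hunks trim from the notebook can be condensed into one sketch; the values here are illustrative, and `Variable` is the book's 0.3-era API (in PyTorch >= 0.4 plain tensors carry `requires_grad`, and `Variable` simply returns a tensor):

```python
import torch
from torch.autograd import Variable  # the book's API; merged into Tensor since PyTorch 0.4

x = Variable(torch.ones(2, 2), requires_grad=True)
y = Variable(torch.ones(2, 2) * 3, requires_grad=True)
z = torch.sum(x + y)

print(z.data)     # the wrapped tensor value: 16
print(z.grad_fn)  # records how z was produced (a sum)

z.backward()      # autograd populates .grad
print(x.grad)     # dz/dx: a 2x2 tensor of ones
print(y.grad)     # dz/dy: a 2x2 tensor of ones
```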
