
Commit da8ce81

Merge pull request #2016 from Saransh-cpp/linear-regression
A new linear regression tutorial
2 parents 8d948e8 + 25eea17 commit da8ce81

11 files changed: +414, -20 lines


docs/Project.toml

Lines changed: 4 additions & 0 deletions
@@ -1,12 +1,16 @@
 [deps]
 BSON = "fbb218c0-5317-5bc6-957e-2ee96dd4b1f0"
 ChainRulesCore = "d360d2e6-b24c-11e9-a2a3-2a2ae2dbcce4"
+DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
 Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
 Functors = "d9f16b24-f501-4c13-a1f2-28368ffc5196"
+MLDatasets = "eb30cadb-4394-5ae3-aed4-317e484a6458"
 MLUtils = "f1d291b0-491e-4a28-83b9-f70985020b54"
 NNlib = "872c559c-99b0-510c-b3b7-b6c96a88d5cd"
 OneHotArrays = "0b1bfda6-eb8a-41d2-88d8-f5af5cad476f"
 Optimisers = "3bd65402-5787-11e9-1adc-39752487f4e2"
+Plots = "91a5bcdd-55d7-5caf-9e0b-520d859cae80"
+Statistics = "10745b16-79ce-11e8-11f9-7d13ad32a3b2"
 Zygote = "e88e6eb3-aa80-5325-afca-941959d7151f"

 [compat]
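
With the tutorial's dependencies added, the docs environment can be resolved locally. A minimal sketch, assuming a checkout of Flux.jl with the repository root as the working directory:

```julia
using Pkg
Pkg.activate("docs")   # the [deps] above live in docs/Project.toml
Pkg.instantiate()      # fetches DataFrames, MLDatasets, Plots, Statistics, ...
```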

docs/make.jl

Lines changed: 6 additions & 5 deletions
@@ -1,10 +1,10 @@
-using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Statistics
+using Documenter, Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Plots, MLDatasets, Statistics, DataFrames


 DocMeta.setdocmeta!(Flux, :DocTestSetup, :(using Flux); recursive = true)

 makedocs(
-    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base, Statistics],
+    modules = [Flux, NNlib, Functors, MLUtils, BSON, Optimisers, OneHotArrays, Zygote, ChainRulesCore, Base, Plots, MLDatasets, Statistics, DataFrames],
     doctest = false,
     sitename = "Flux",
     # strict = [:cross_references,],
@@ -41,11 +41,12 @@ makedocs(
         "Flat vs. Nested 📚" => "destructure.md",
         "Functors.jl 📚 (`fmap`, ...)" => "models/functors.md",
     ],
+    "Tutorials" => [
+        "Linear Regression" => "tutorials/linear_regression.md",
+        "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
+    ],
     "Performance Tips" => "performance.md",
     "Flux's Ecosystem" => "ecosystem.md",
-    "Tutorials" => [ # TODO, maybe
-        "Custom Layers" => "models/advanced.md", # TODO move freezing to Training
-    ],
 ],
 format = Documenter.HTML(
     sidebar_sitename = false,
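
With that environment instantiated, the updated page tree can be built locally; a sketch of the usual Documenter workflow (the output path is an assumption):

```julia
# Run from the repository root, inside the docs environment (see above).
include("docs/make.jl")   # Documenter writes HTML to docs/build/ by default
```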

docs/src/gpu.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ true

 Support for array operations on other hardware backends, like GPUs, is provided by external packages like [CUDA](https://github.com/JuliaGPU/CUDA.jl). Flux is agnostic to array types, so we simply need to move model weights and data to the GPU and Flux will handle it.

-For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](models/basics.md) on an NVIDIA GPU.
+For example, we can use `CUDA.CuArray` (with the `cu` converter) to run our [basic example](@ref man-basics) on an NVIDIA GPU.

 (Note that you need to have CUDA available to use CUDA.CuArray – please see the [CUDA.jl](https://github.com/JuliaGPU/CUDA.jl) instructions for more details.)
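
Moving the weights and data of the basic example onto the device might look like this; a minimal sketch, assuming a functional CUDA setup:

```julia
using Flux, CUDA

W = cu(rand(2, 5))      # weights live on the GPU
b = cu(rand(2))

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)

x, y = cu(rand(5)), cu(rand(2))   # data must be on the GPU too
loss(x, y)                        # computed on the device
```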

docs/src/index.md

Lines changed: 2 additions & 2 deletions
@@ -16,9 +16,9 @@ Other closely associated packages, also installed automatically, include [Zygote

 ## Learning Flux

-The [quick start](models/quickstart.md) page trains a simple neural network.
+The [quick start](@ref man-quickstart) page trains a simple neural network.

-This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](models/overview.md). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.
+This rest of this documentation provides a from-scratch introduction to Flux's take on models and how they work, starting with [fitting a line](@ref man-overview). Once you understand these docs, congratulations, you also understand [Flux's source code](https://github.com/FluxML/Flux.jl), which is intended to be concise, legible and a good reference for more advanced concepts.

 Sections with 📚 contain API listings. The same text is avalable at the Julia prompt, by typing for example `?gpu`.

docs/src/models/activation.md

Lines changed: 1 addition & 2 deletions
@@ -1,5 +1,4 @@
-
-# Activation Functions from NNlib.jl
+# [Activation Functions from NNlib.jl](@id man-activations)

 These non-linearities used between layers of your model are exported by the [NNlib](https://github.com/FluxML/NNlib.jl) package.
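
As a quick illustration of activations in use, a minimal sketch (the layer sizes are arbitrary):

```julia
using Flux

relu(-2.0)               # returns 0.0; activations are plain Julia functions
Dense(10 => 5, sigmoid)  # passed to a layer, they apply elementwise
```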

docs/src/models/advanced.md

Lines changed: 2 additions & 2 deletions
@@ -1,4 +1,4 @@
-# Defining Customised Layers
+# [Defining Customised Layers](@id man-advanced)

 Here we will try and describe usage of some more advanced features that Flux provides to give more control over model building.

@@ -34,7 +34,7 @@ For an intro to Flux and automatic differentiation, see this [tutorial](https://

 ## Customising Parameter Collection for a Model

-Taking reference from our example `Affine` layer from the [basics](basics.md#Building-Layers-1).
+Taking reference from our example `Affine` layer from the [basics](@ref man-basics).

 By default all the fields in the `Affine` type are collected as its parameters, however, in some cases it may be desired to hold other metadata in our "layers" that may not be needed for training, and are hence supposed to be ignored while the parameters are collected. With Flux, it is possible to mark the fields of our layers that are trainable in two ways.
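
One of the two ways alluded to above is overloading `Flux.trainable`; a minimal sketch, reusing the docs' `Affine` example (the field names here are assumptions):

```julia
using Flux

struct Affine
    W
    b
end
(m::Affine)(x) = m.W * x .+ m.b
Flux.@functor Affine

# Collect only `W` for training; `b` stays in the struct but is
# ignored when parameters are gathered.
Flux.trainable(m::Affine) = (m.W,)
```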

docs/src/models/functors.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ Flux models are deeply nested structures, and [Functors.jl](https://github.com/F

 New layers should be annotated using the `Functors.@functor` macro. This will enable [`params`](@ref Flux.params) to see the parameters inside, and [`gpu`](@ref) to move them to the GPU.

-`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](../models/advanced.md) page covers the use cases of `Functors` in greater details.
+`Functors.jl` has its own [notes on basic usage](https://fluxml.ai/Functors.jl/stable/#Basic-Usage-and-Implementation) for more details. Additionally, the [Advanced Model Building and Customisation](@ref man-advanced) page covers the use cases of `Functors` in greater details.

 ```@docs
 Functors.@functor
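```

As a quick illustration of the annotation described above, a minimal sketch with a hypothetical single-field layer (the `Bias` type is an assumption, not from the page):

```julia
using Flux, Functors

struct Bias
    b
end
(m::Bias)(x) = x .+ m.b

Functors.@functor Bias        # lets `params` see `b`, and `gpu` move it

Flux.params(Bias(zeros(3)))   # now collects the `b` field
```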

docs/src/training/optimisers.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ CurrentModule = Flux

 # Optimisers

-Consider a [simple linear regression](../models/basics.md). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.
+Consider a [simple linear regression](@ref man-linear-regression). We create some dummy data, calculate a loss, and backpropagate to calculate gradients for the parameters `W` and `b`.

 ```julia
 using Flux
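# The diff truncates this snippet after `using Flux`; the page continues
# along these lines. A sketch of the pattern described above, not the
# verbatim file: dummy data, a loss, and gradients for `W` and `b`.

W = rand(2, 5)
b = rand(2)

predict(x) = W * x .+ b
loss(x, y) = sum((predict(x) .- y) .^ 2)   # squared-error loss

x, y = rand(5), rand(2)                    # dummy data
gs = gradient(() -> loss(x, y), Flux.params(W, b))
gs[W], gs[b]                               # gradients for each parameter
```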

docs/src/training/training.md

Lines changed: 5 additions & 5 deletions
@@ -36,12 +36,12 @@ Flux.Optimise.train!
 ```

 There are plenty of examples in the [model zoo](https://github.com/FluxML/model-zoo), and
-more information can be found on [Custom Training Loops](../models/advanced.md).
+more information can be found on [Custom Training Loops](@ref man-advanced).

 ## Loss Functions

-The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](../models/basics.md) will work as an objective.
-In addition to custom losses, a model can be trained in conjunction with
+The objective function must return a number representing how far the model is from its target – the *loss* of the model. The `loss` function that we defined in [basics](@ref man-basics) will work as an objective.
+In addition to custom losses, model can be trained in conjuction with
 the commonly used losses that are grouped under the `Flux.Losses` module.
 We can also define an objective in terms of some model:

@@ -64,11 +64,11 @@ At first glance, it may seem strange that the model that we want to train is not

 ## Model parameters

-The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](../models/basics.md) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.
+The model to be trained must have a set of tracked parameters that are used to calculate the gradients of the objective function. In the [basics](@ref man-basics) section it is explained how to create models with such parameters. The second argument of the function `Flux.train!` must be an object containing those parameters, which can be obtained from a model `m` as `Flux.params(m)`.

 Such an object contains a reference to the model's parameters, not a copy, such that after their training, the model behaves according to their updated values.

-Handling all the parameters on a layer-by-layer basis is explained in the [Layer Helpers](../models/basics.md) section. For freezing model parameters, see the [Advanced Usage Guide](../models/advanced.md).
+Handling all the parameters on a layer by layer basis is explained in the [Layer Helpers](@ref man-basics) section. Also, for freezing model parameters, see the [Advanced Usage Guide](@ref man-advanced).

 ```@docs
 Flux.params
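```

To tie the section together, here is a sketch of wiring a loss, `Flux.params`, and an optimiser into `train!`; the model shape and dummy batch are assumptions for illustration:

```julia
using Flux
using Flux: train!

m = Dense(5 => 2)                                   # a small model
loss(x, y) = Flux.Losses.mse(m(x), y)               # objective defined in terms of the model

x, y = rand(Float32, 5, 16), rand(Float32, 2, 16)   # dummy batch
ps = Flux.params(m)                                 # references, not copies
opt = Descent(0.1)

train!(loss, ps, [(x, y)], opt)                     # one pass over the data
```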
