Commit 508b700: regenerate tutorials
1 parent: e594f3e

57 files changed: +2647 -2568 lines

__site/advanced/ensembles-3/index.html (+65 -63)
Large diffs are not rendered by default.

__site/advanced/stacking/index.html (+67 -65)
Large diffs are not rendered by default.

__site/assets/literate/advanced/ensembles-3/tutorial.md (+2 -2)

@@ -13,7 +13,7 @@ networks.
 
 Learning networks are an advanced MLJ feature which is covered in detail, with
 examples, in the [Learning
-networks](https://JuliaAI.github.io/MLJ.jl/dev/learning_networks/) section
+networks](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_networks/) section
 of the manual. In the "Ensemble" and "Ensemble (2)" tutorials it is shown how to create
 and apply homogeneous ensembles using MLJ's built-in `EnsembleModel` wrapper. To provide
 a simple illustration of learning networks we show how a user could build their own
@@ -24,7 +24,7 @@ selection of features in each split of a decision tree).
 For a more advanced illustration, see the "Stacking" tutorial.
 
 Some familiarity with the early parts of [Learning networks by
-example](https://JuliaAI.github.io/MLJ.jl/dev/learning_networks/#Learning-networks-by-example)
+example](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_networks/#Learning-networks-by-example)
 will be helpful, but is not essential.
 
 @@dropdown
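
For orientation, the `EnsembleModel` wrapper this hunk mentions can be used along the following lines. This is a hedged sketch, not the tutorial's code: the atomic model, the synthetic data, and the hyper-parameter values are illustrative assumptions.

````julia
using MLJ

# Assumes DecisionTree.jl is installed; any atomic model works here.
Tree = @load DecisionTreeRegressor pkg=DecisionTree

# A homogeneous ensemble: 100 trees, each trained on a random 80%
# row subsample (bagging).
forest = EnsembleModel(model=Tree(), n=100, bagging_fraction=0.8)

X, y = make_regression(200, 5)  # synthetic data, for illustration only
mach = machine(forest, X, y)
fit!(mach)
predict(mach, X)
````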

__site/assets/literate/advanced/stacking/tutorial.md (+2 -2)

@@ -17,13 +17,13 @@ intensive. Nevertheless, stacking has been used successfully by teams in data sc
 science competitions.
 
 For routine stacking tasks the MLJ user should use the `Stack` model documented
-[here](https://JuliaAI.github.io/MLJ.jl/dev/composing_models/#Model-Stacking). Internally,
+[here](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/#Model-Stacking). Internally,
 `Stack` is implemented using MLJ's learning networks feature, and the purpose of this
 tutorial is to give an advanced illustration of MLJ learning networks by presenting a
 simplified version of this implementation. Familiarity with model stacking is not
 essential, but we assume the reader is already familiar with learning network basics, as
 illustrated in the [Learning
-networks](https://JuliaAI.github.io/MLJ.jl/dev/learning_networks/) section
+networks](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_networks/) section
 of the MLJ manual. The "Ensembles (learning networks)" tutorial also gives a simple
 illustration.
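
The routine `Stack` usage the hunk points the reader to looks roughly like this. A minimal sketch, assuming DecisionTree.jl and MLJLinearModels.jl are installed; the base models, metalearner, and data are placeholders.

````julia
using MLJ

Ridge = @load RidgeRegressor pkg=MLJLinearModels
Tree  = @load DecisionTreeRegressor pkg=DecisionTree

# Out-of-sample predictions of the named base models are generated by
# internal cross-validation and fed to the metalearner.
stack = Stack(metalearner=Ridge(),
              resampling=CV(nfolds=3),
              ridge=Ridge(),
              tree=Tree())

X, y = make_regression(200, 5)
mach = machine(stack, X, y)
fit!(mach)
predict(mach, X)
````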

__site/assets/literate/data/processing/tutorial.md (+1 -1)

@@ -81,7 +81,7 @@ first(capacity, 5)
 
 This dataframe contains several subgroups (country and technology type) and it would be interesting to get data aggregates by subgroup.
 To obtain a `view` of the DataFrame by subgroup, we can use the `groupby` function.
-(See the [DataFrame tutorial](https://JuliaAI.github.io/DataScienceTutorials.jl/data/dataframe/#groupby) for an introduction to the use of `groupby`)
+(See the [DataFrame tutorial](https://alan-turing-institute.github.io/DataScienceTutorials.jl/data/dataframe/#groupby) for an introduction to the use of `groupby`)
 
 ````julia:ex9
 cap_gr = groupby(capacity, [:country, :primary_fuel]);
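
To actually compute the by-group aggregates the paragraph alludes to, `combine` is the usual companion to `groupby` in DataFrames.jl. A sketch with a toy table; the column names are assumptions, not taken from the tutorial's dataset.

````julia
using DataFrames

# Toy stand-in for the tutorial's `capacity` dataframe.
capacity = DataFrame(country      = ["FR", "FR", "DE"],
                     primary_fuel = ["Solar", "Wind", "Solar"],
                     capacity_mw  = [10.0, 20.0, 15.0])

cap_gr = groupby(capacity, [:country, :primary_fuel])  # a view by subgroup

# One row per (country, fuel) pair, with summed capacity.
combine(cap_gr, :capacity_mw => sum => :total_mw)
````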

__site/assets/literate/end-to-end/airfoil/tutorial.md (+1 -1)

@@ -151,7 +151,7 @@ Can we do even better? Yeah, we can!! We can make use of Model Tuning.
 @@
 @@dropdown-content
 
-In case you are new to model tuning using MLJ, refer to [lab5](https://JuliaAI.github.io/DataScienceTutorials.jl/isl/lab-5/) and [model-tuning](https://JuliaAI.github.io/DataScienceTutorials.jl/getting-started/model-tuning/)
+In case you are new to model tuning using MLJ, refer to [lab5](https://alan-turing-institute.github.io/DataScienceTutorials.jl/isl/lab-5/) and [model-tuning](https://alan-turing-institute.github.io/DataScienceTutorials.jl/getting-started/model-tuning/)
 
 A range of values for each parameter should be specified to do hyperparameter tuning
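
The basic tuning pattern those two links describe, in hedged sketch form: the model and the tuned hyper-parameter below are placeholders, not the airfoil tutorial's choices.

````julia
using MLJ

Tree = @load DecisionTreeRegressor pkg=DecisionTree
tree = Tree()

# Grid-search `max_depth` over 1:10, scoring candidates by
# 3-fold cross-validated RMS error.
r = range(tree, :max_depth, lower=1, upper=10)
tuned_tree = TunedModel(model=tree, ranges=r, tuning=Grid(resolution=10),
                        resampling=CV(nfolds=3), measure=rms)

X, y = make_regression(200, 5)
mach = machine(tuned_tree, X, y)
fit!(mach)
fitted_params(mach).best_model
````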

__site/assets/literate/end-to-end/boston-flux/tutorial.md

+1-1
Original file line numberDiff line numberDiff line change
@@ -103,7 +103,7 @@ nnregressor = MLJFlux.NeuralNetworkRegressor(builder=myregressor, epochs=10)
103103
````
104104

105105
Other parameters that NeuralNetworkRegressor takes can be found here:
106-
https://github.com/JuliaAI/MLJFlux.jl#model-hyperparameters
106+
https://github.com/alan-turing-institute/MLJFlux.jl#model-hyperparameters
107107

108108
`nnregressor` now acts like any other MLJ model. Let's try wrapping it in a
109109
MLJ machine and calling `fit!, predict`.
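
The wrap-and-fit step the hunk describes, sketched with MLJFlux's default builder; the tutorial's own `myregressor` builder is not reproduced here, and the data is synthetic.

````julia
using MLJ, MLJFlux

# Default builder, for brevity; the tutorial passes builder=myregressor.
nnregressor = MLJFlux.NeuralNetworkRegressor(epochs=10)

X, y = make_regression(100, 3)
mach = machine(nnregressor, X, y)
fit!(mach)              # trains the underlying Flux network for 10 epochs
yhat = predict(mach, X)
````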

__site/assets/literate/end-to-end/horse/tutorial.md (+1 -1)

@@ -79,7 +79,7 @@ data; to facilitate the interpretation, we can use `autotype` from `ScientificTy
 By default, `autotype` will check all columns and suggest a Finite type assuming there
 are relatively few distinct values in the column. More sophisticated rules can be
 passed; see
-[ScientificTypes.jl](https://JuliaAI.github.io/ScientificTypes.jl/dev/):
+[ScientificTypes.jl](https://alan-turing-institute.github.io/ScientificTypes.jl/dev/):
 
 ````julia:ex5
 coerce!(data, autotype(data));
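
Passing explicit rules to `autotype` looks like this; a sketch on a toy table (`:few_to_finite` is the default rule, and the column contents are invented for illustration).

````julia
using ScientificTypes, DataFrames

data = DataFrame(a=[1, 2, 1, 2, 1], b=[1.5, 2.5, 3.5, 4.5, 5.5])

# Suggest Finite types for low-cardinality columns only (the default);
# add further rules to the tuple as required.
suggestions = autotype(data, rules=(:few_to_finite,))
coerce!(data, suggestions)
schema(data)
````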

__site/assets/literate/end-to-end/powergen/tutorial.md (+1 -1)

@@ -138,7 +138,7 @@ schema(data)
 ````
 
 It is important that the scientific type of the variables corresponds to one of the types allowed for use with the models you are planning to use.
-(For more guidance on this, see the [Scientific Type](https://JuliaAI.github.io/DataScienceTutorials.jl/data/scitype/) tutorial.)
+(For more guidance on this, see the [Scientific Type](https://alan-turing-institute.github.io/DataScienceTutorials.jl/data/scitype/) tutorial.)
 The scientific type of both `Wind_gen` and `Solar_gen` is currently `Count`. Let's coerce them to `Continuous`.
 
 ````julia:ex13
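
The coercion step the hunk leads into is a one-liner; a sketch with a toy table standing in for the tutorial's `data`.

````julia
using MLJ  # re-exports coerce, schema, and the scientific types

data = (Wind_gen=[100, 250, 175], Solar_gen=[80, 120, 60])
schema(data)  # both columns currently have scitype Count

# Count -> Continuous, as required by most regression models.
data = coerce(data, :Wind_gen => Continuous, :Solar_gen => Continuous)
schema(data)
````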

__site/assets/literate/end-to-end/telco/tutorial.md (+12 -12)

@@ -9,7 +9,7 @@ end;
 ````
 
 An application of the [MLJ
-toolbox](https://JuliaAI.github.io/MLJ.jl/dev/) to the
+toolbox](https://alan-turing-institute.github.io/MLJ.jl/dev/) to the
 Telco Customer Churn dataset, aimed at practicing data scientists
 new to MLJ (Machine Learning in Julia). This tutorial does not
 cover exploratory data analysis.
@@ -18,9 +18,9 @@ MLJ is a general machine learning toolbox (i.e., not just
 deep-learning).
 
 For other MLJ learning resources see the [Learning
-MLJ](https://JuliaAI.github.io/MLJ.jl/dev/learning_mlj/)
+MLJ](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_mlj/)
 section of the
-[manual](https://JuliaAI.github.io/MLJ.jl/dev/).
+[manual](https://alan-turing-institute.github.io/MLJ.jl/dev/).
 
 **Topics covered**: Grabbing and preparing a dataset, basic
 fit/predict workflow, constructing a pipeline to include data
@@ -94,7 +94,7 @@ introducing the main actors in any MLJ workflow. Details that you
 don't fully grasp should become clearer in the Telco study.
 
 This section is a condensed adaptation of the [Getting Started
-example](https://JuliaAI.github.io/MLJ.jl/dev/getting_started/#Fit-and-predict)
+example](https://alan-turing-institute.github.io/MLJ.jl/dev/getting_started/#Fit-and-predict)
 in the MLJ documentation.
 
 First, using the built-in iris dataset, we load and inspect the features
@@ -145,7 +145,7 @@ fitted_params(mach)
 ````
 
 A machine stores some other information enabling [warm
-restart](https://JuliaAI.github.io/MLJ.jl/dev/machines/#Warm-restarts)
+restart](https://alan-turing-institute.github.io/MLJ.jl/dev/machines/#Warm-restarts)
 for some models, but we won't go into that here. You are allowed to
 access and mutate the `model` parameter:
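
The mutate-and-refit pattern this hunk describes, sketched on the iris data; the mutated hyper-parameter is illustrative, and warm restarts only apply to models that support them.

````julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
X, y = @load_iris
mach = machine(Tree(), X, y)
fit!(mach)

# Mutating the bound model's hyper-parameter and refitting; models
# supporting warm restarts resume training instead of starting over.
mach.model.max_depth = 3
fit!(mach)
````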

@@ -360,7 +360,7 @@ ytest, Xtest = unpack(df_test, ==(:Churn), !=(:customerID));
 *Introduces:* `@load`, `input_scitype`, `target_scitype`
 
 For tools helping us to identify suitable models, see the [Model
-Search](https://JuliaAI.github.io/MLJ.jl/dev/model_search/#model_search)
+Search](https://alan-turing-institute.github.io/MLJ.jl/dev/model_search/#model_search)
 section of the manual. We will build a gradient tree-boosting model,
 a popular first choice for structured data like we have here. Model
 code is contained in a third-party package called
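
The Model Search tools referenced here can be sketched in a couple of lines; both calls return lists of registry entries.

````julia
using MLJ

X, y = @load_iris

models(matching(X, y))  # all registered models compatible with (X, y)
models("Boost")         # registry entries matching the string "Boost"
````
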
@@ -428,7 +428,7 @@ pipe = ContinuousEncoder() |> booster
 
 Note that the component models appear as hyper-parameters of
 `pipe`. Pipelines are an implementation of a more general [model
-composition](https://JuliaAI.github.io/MLJ.jl/dev/composing_models/#Composing-Models)
+composition](https://alan-turing-institute.github.io/MLJ.jl/dev/composing_models/#Composing-Models)
 interface provided by MLJ that advanced users may want to learn about.
 
 From the above display, we see that component model hyper-parameters
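
In outline, the composition in the hunk header, with a decision tree standing in for the tutorial's gradient booster:

````julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
booster = Tree()  # stand-in for the tutorial's boosting model

# ContinuousEncoder coerces all features to Continuous before they
# reach the model; both components become hyper-parameters of `pipe`.
pipe = ContinuousEncoder() |> booster

# Component hyper-parameters are reachable by (snake-cased) name, e.g.:
pipe.decision_tree_classifier.max_depth
````
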
@@ -622,7 +622,7 @@ observation space, for a total of 18 folds) and set
 `acceleration=CPUThreads()` to parallelize the computation.
 
 We choose a `StratifiedCV` resampling strategy; the complete list of options is
-[here](https://JuliaAI.github.io/MLJ.jl/dev/evaluating_model_performance/#Built-in-resampling-strategies).
+[here](https://alan-turing-institute.github.io/MLJ.jl/dev/evaluating_model_performance/#Built-in-resampling-strategies).
 
 ````julia:ex49
 e_pipe = evaluate(pipe, X, y,
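
A self-contained sketch of an `evaluate` call like the truncated one above; the model, measures, and fold count are illustrative, not the tutorial's exact choices.

````julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
X, y = @load_iris
pipe = ContinuousEncoder() |> Tree()

# Stratified 6-fold cross-validation, with folds evaluated in
# parallel across threads.
e_pipe = evaluate(pipe, X, y,
                  resampling=StratifiedCV(nfolds=6, rng=123),
                  measures=[log_loss, brier_loss],
                  acceleration=CPUThreads())
````
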
@@ -692,7 +692,7 @@ eg, the neural network models provided by
 [MLJFlux.jl](https://github.com/FluxML/MLJFlux.jl).
 
 First, we select appropriate controls from [this
-list](https://JuliaAI.github.io/MLJ.jl/dev/controlling_iterative_models/#Controls-provided):
+list](https://alan-turing-institute.github.io/MLJ.jl/dev/controlling_iterative_models/#Controls-provided):
 
 ````julia:ex51
 controls = [
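
A typical controls vector drawn from that list might look like the following; the particular controls and thresholds are illustrative.

````julia
using MLJ

controls = [
    Step(1),             # train in increments of one iteration
    Patience(n=5),       # stop after 5 consecutive deteriorations in loss
    NumberLimit(n=100),  # hard cap on the number of control cycles
    TimeLimit(t=0.5),    # and a 30-minute wall-clock budget
]
# Used as: IteratedModel(model=..., controls=controls, measure=log_loss)
````
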
@@ -751,7 +751,7 @@ here is the `learning_curve` function, which can be useful when
 wanting to visualize the effect of changes to a *single*
 hyper-parameter (which could be an iteration parameter). See, for
 example, [this section of the
-manual](https://JuliaAI.github.io/MLJ.jl/dev/learning_curves/)
+manual](https://alan-turing-institute.github.io/MLJ.jl/dev/learning_curves/)
 or [this
 tutorial](https://github.com/ablaom/MLJTutorial.jl/blob/dev/notebooks/04_tuning/notebook.ipynb).
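
The `learning_curve` pattern mentioned there, in sketch form; the model and the swept hyper-parameter are placeholders.

````julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
X, y = @load_iris
mach = machine(Tree(), X, y)

r = range(mach.model, :max_depth, lower=1, upper=8)
curve = learning_curve(mach; range=r, resampling=CV(nfolds=3),
                       measure=log_loss)
# Plot curve.parameter_values against curve.measurements to visualize.
````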

@@ -791,7 +791,7 @@ Nominal ranges are defined by specifying `values` instead of `lower`
 and `upper`.
 
 Next, we choose an optimization strategy from [this
-list](https://JuliaAI.github.io/MLJ.jl/dev/tuning_models/#Tuning-Models):
+list](https://alan-turing-institute.github.io/MLJ.jl/dev/tuning_models/#Tuning-Models):
 
 ````julia:ex56
 tuning = RandomSearch(rng=rng)
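
A sketch combining a nominal range (via `values`) with the `RandomSearch` strategy chosen here; the model and both ranges are illustrative assumptions.

````julia
using MLJ
import Random

Tree = @load DecisionTreeClassifier pkg=DecisionTree
tree = Tree()

r1 = range(tree, :max_depth, lower=2, upper=10)      # numeric range
r2 = range(tree, :post_prune, values=[true, false])  # nominal range

tuned = TunedModel(model=tree, ranges=[r1, r2],
                   tuning=RandomSearch(rng=Random.MersenneTwister(123)),
                   resampling=CV(nfolds=3), measure=log_loss, n=25)

X, y = @load_iris
mach = machine(tuned, X, y)
fit!(mach)
fitted_params(mach).best_model
````
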
@@ -880,7 +880,7 @@ savefig(joinpath(@OUTPUT, "EX-telco-tuning.svg")); # hide
 Here's how to serialize our final, trained self-iterating,
 self-tuning pipeline machine using Julia's native serializer (see
 [the
-manual](https://JuliaAI.github.io/MLJ.jl/dev/machines/#Saving-machines)
+manual](https://alan-turing-institute.github.io/MLJ.jl/dev/machines/#Saving-machines)
 for more options):
 
 ````julia:ex63
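
In outline, the native serialization the hunk refers to; the model and filename below are assumptions for illustration.

````julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree
X, y = @load_iris
mach = machine(Tree(), X, y)
fit!(mach)

MLJ.save("my_machine.jls", mach)   # serialize the trained machine

mach2 = machine("my_machine.jls")  # restore it later...
predict(mach2, X)                  # ...and predict without retraining
````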

__site/assets/literate/getting-started/fit-predict/tutorial.md (+1 -1)

@@ -5,7 +5,7 @@ Pkg.activate("_literate/getting-started/fit-predict/Project.toml")
 Pkg.instantiate()
 ````
 
-[MLJ.jl]: https://github.com/JuliaAI/MLJ.jl
+[MLJ.jl]: https://github.com/alan-turing-institute/MLJ.jl
 [RDatasets.jl]: https://github.com/JuliaStats/RDatasets.jl
 [DecisionTree.jl]: https://github.com/bensadeghi/DecisionTree.jl

__site/assets/literate/getting-started/model-choice/tutorial.md (+2 -2)

@@ -5,9 +5,9 @@ Pkg.activate("_literate/getting-started/model-choice/Project.toml")
 Pkg.instantiate()
 ````
 
-[MLJ.jl]: https://github.com/JuliaAI/MLJ.jl
+[MLJ.jl]: https://github.com/alan-turing-institute/MLJ.jl
 [RDatasets.jl]: https://github.com/JuliaStats/RDatasets.jl
-[MLJModels.jl]: https://github.com/JuliaAI/MLJModels.jl
+[MLJModels.jl]: https://github.com/alan-turing-institute/MLJModels.jl
 [DecisionTree.jl]: https://github.com/bensadeghi/DecisionTree.jl
 [NearestNeighbors.jl]: https://github.com/KristofferC/NearestNeighbors.jl
 [GLM.jl]: https://github.com/JuliaStats/GLM.jl

__site/assets/literate/getting-started/model-tuning/tutorial.md (+1 -1)

@@ -8,7 +8,7 @@ macro OUTPUT()
 end
 ````
 
-[MLJ.jl]: https://github.com/JuliaAI/MLJ.jl
+[MLJ.jl]: https://github.com/alan-turing-institute/MLJ.jl
 [RDatasets.jl]: https://github.com/JuliaStats/RDatasets.jl
 [NearestNeighbors.jl]: https://github.com/KristofferC/NearestNeighbors.jl

__site/assets/literate/isl/lab-5/tutorial.md (+4)

@@ -30,6 +30,10 @@ Note the use of `rng=` to seed the shuffling of indices so that the results are
 @@
 @@dropdown-content
 
+This tutorial introduces polynomial regression in a very hands-on way. A more
+programmatic alternative is to use MLJ's `InteractionTransformer`. Run
+`doc("InteractionTransformer")` for details.
+
 ````julia:ex3
 LR = @load LinearRegressor pkg=MLJLinearModels
 ````
````

__site/assets/literate/isl/lab-6b/tutorial.md (+1 -1)

@@ -119,7 +119,7 @@ To coerce `int` features to `Float`, we nest the `autotype` function in the `coe
 function. The `autotype` function returns a dictionary containing scientific types,
 which is then passed to the `coerce` function. For more details on the use of
 `autotype`, see the [Scientific
-Types](https://JuliaAI.github.io/DataScienceTutorials.jl/data/scitype/index.html#autotype)
+Types](https://alan-turing-institute.github.io/DataScienceTutorials.jl/data/scitype/index.html#autotype)
 
 ````julia:ex9
 Xc = coerce(X, autotype(X, rules = (:discrete_to_continuous,)))
