@@ -40,7 +40,7 @@ HPC tested:
 We highly recommend using [renv](https://rstudio.github.io/renv/index.html)
 when working with an HPC.
 
-## Creating a New Workflow
+## Creating a new workflow
 
 ```{r, eval = FALSE}
 library(slurmworkflow)
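
For orientation, creating a workflow generally looks like the sketch below. The argument names and values (`wf_name`, `default_sbatch_opts`, the partition, and the mail address) are illustrative assumptions; check the slurmworkflow reference for the exact interface.

```r
library(slurmworkflow)

# Sketch (assumed arguments): `wf_name` names the workflow directory,
# `default_sbatch_opts` sets sbatch defaults inherited by every step.
wf <- create_workflow(
  wf_name = "networks_estimation",
  default_sbatch_opts = list(
    "partition" = "compute",        # assumed partition name
    "mail-type" = "FAIL",           # only mail on failures by default
    "mail-user" = "user@example.org"
  )
)
```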
@@ -65,7 +65,7 @@ Calling `create_workflow()` results in the creation of the *workflow directory*:
 *workflow summary* is returned and stored in the `wf` variable. We'll use it to
 add elements to the workflow.
 
-## Adding a Step to the Workflow
+## Adding a step to the workflow
 
 The first step that we use on most of our *workflows* ensures that our local
 project and the HPC are in sync.
@@ -125,7 +125,7 @@ setup_lines <- c(
 )
 ```
 
-### Run Code From an R Script
+### Run code from an R script
 
 Our next step will run the following script on the HPC.
 
@@ -185,7 +185,7 @@ As before we use the `add_workflow_step()` function. But we change the
 For the `sbatch` options, we ask here for 1 CPU, 4GB of RAM, and a maximum of 10
 minutes.
 
-### Iterating Over Values in an R Script
+### Iterating over values in an R script
 
 One common task on an HPC is to run the same code many times, varying only the
 value of some arguments.
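
One way to picture the mechanics (a generic sketch, not necessarily the template this tutorial uses): slurm job arrays give each job an index through the `SLURM_ARRAY_TASK_ID` environment variable, and the script uses that index to select its argument values.

```r
# Generic sketch: select this job's parameters from a grid using the
# slurm array index (falls back to 1 when run outside of slurm)
task_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID", unset = "1"))
params <- expand.grid(seed = 1:10, scenario = c("low", "high"))
current <- params[task_id, ]
```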
@@ -276,7 +276,7 @@ jobs where each job is a set of around 30 parallel simulations. Therefore, we
 here have 2 levels of parallelization: one in
 [slurm](https://slurm.schedmd.com/) and one in the script itself.
 
-### Running an R Function Directly
+### Running an R function directly
 
 Sometimes we want to run a simple function directly without storing it in an
 R script. The `step_tmpl_do_call()` and `step_tmpl_map()` do exactly that for
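
As an illustration (argument names are assumptions; see the package reference for the exact signatures):

```r
# Sketch: run a bare function as a workflow step with
# `step_tmpl_do_call()`; `what` and `args` are assumed argument names.
wf <- add_workflow_step(
  wf_summary = wf,
  step_tmpl = step_tmpl_do_call(
    what = function(msg) message(msg),
    args = list(msg = "workflow finished"),
    setup_lines = setup_lines
  ),
  sbatch_opts = list("mail-type" = "ALL")  # mail whatever the outcome
)
```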
@@ -313,15 +313,15 @@ Finally, as this will be our last step, we override the `mail-type`
 `sbatch_opts` to receive an email when this *step* finishes, whatever the outcome.
 This way, we receive an email telling us that the *workflow* is finished.
 
-## Using the Workflow on an HPC
+## Using the workflow on an HPC
 
 Now that our workflow is created, how do we actually run the code on the HPC?
 
 We assume that we are working on a project called "test_proj", that this
 project was cloned on the HPC at the path "~/projects/test_proj", and
 that the "~/projects/test_proj/workflows/" directory exists.
 
-### Sending the Workflow to the HPC
+### Sending the workflow to the HPC
 
 The following commands are to be run from your local computer.
 
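
As an illustration, copying the workflow directory over could look like this; the workflow name and remote path are assumptions based on the layout described above.

```shell
# Illustrative: copy the generated workflow directory to the HPC
scp -r workflows/networks_estimation \
    <user>@clogin01.sph.emory.edu:projects/test_proj/workflows/
```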
@@ -346,7 +346,7 @@ RStudio terminal.
 Note that it's `workflows\networks_estimation`. Windows uses backslashes for
 directories and Unix OSes use forward slashes.
 
-#### Running the Workflow From the HPC
+#### Running the workflow from the HPC
 
 For this step, you must be at the command line on the HPC. This means that you
 have run `ssh <user>@clogin01.sph.emory.edu` from your local computer.
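
From there, launching the workflow might look like this; the paths are assumptions based on the layout described above.

```shell
# Illustrative: start the workflow from the project root on the HPC
cd ~/projects/test_proj
bash workflows/networks_estimation/start_workflow.sh
```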
@@ -382,7 +382,7 @@ You can check the state of your running workflow as usual with `squeue -u <user>
 
 The logs for the workflows are in "workflows/test_slurmworkflow/log/".
 
-### The "start_workflow.sh" Script
+### The "start_workflow.sh" script
 
 This start script additionally allows you to start a workflow at a specific
 step with the `-s` argument.
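
For example (workflow name and path assumed), resuming at the third step could look like:

```shell
# Illustrative: restart the workflow at step 3
bash workflows/networks_estimation/start_workflow.sh -s 3
```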