Commit d1cd117

[torch-mlir] remove trailing whitespace from md documentation (llvm#2853)
1 parent 24b8c86 commit d1cd117

6 files changed: +26 -26 lines changed

docs/add_ops.md (3 additions & 3 deletions)

````diff
@@ -1,17 +1,17 @@
 # How to Add Ops to Torch-Mlir
 
-Collected links and contacts for how to add ops to torch-mlir. 
+Collected links and contacts for how to add ops to torch-mlir.
 
 
 <details>
 <summary>Turbine Camp: Start Here</summary>
-This document was previously known as `turbine-camp.md` to Nod.ai. "Turbine Camp" is part of Nod.ai's onboarding process. Welcome to turbine camp. This document originated at Nod.ai as a part of the onboarding process, where new nod-ai folks learn about the architecture of our work by adding support for 2 ops to torch-mlir. I decided to put this into torch mlir because a lot of this is about torch-mlir. 
+This document was previously known as `turbine-camp.md` to Nod.ai. "Turbine Camp" is part of Nod.ai's onboarding process. Welcome to turbine camp. This document originated at Nod.ai as a part of the onboarding process, where new nod-ai folks learn about the architecture of our work by adding support for 2 ops to torch-mlir. I decided to put this into torch mlir because a lot of this is about torch-mlir.
 
 Written & maintained by @renxida
 
 Guides by other folks that were used during the creation of this document:
 - [Chi Liu](https://gist.github.com/AmosLewis/dd31ab37517977b1c499d06495b4adc2)
-- [Sunsoon](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1) 
+- [Sunsoon](https://docs.google.com/document/d/1H79DwW_wnVzUU81EogwY5ueXgnl-QzKet1p2lnqPar4/edit?pli=1)
 
 ## Before you begin...
 
````

docs/adding_abstract_interpretation_functions.md (5 additions & 5 deletions)

````diff
@@ -4,7 +4,7 @@
 
 As part of adding support for a Torch operator in Torch-MLIR, it is usually
 necessary to define a shape and dtype function so that the compiler can infer
-the shapes and dtypes of result tensors for the operator. We use the 
+the shapes and dtypes of result tensors for the operator. We use the
 [abstract interpretation library](abstract_interp_lib.md) for this process.
 
 ## Step-by-step guide
@@ -19,7 +19,7 @@ We will use the example of adding support for the `torch.aten.tanh` op.
 file is the "rosetta stone" that allows translating between
 e.g. `torch.aten.tanh`, `AtenTanhOp`, and the shape and dtype
 function signatures are:
- 
+
 - `def aten〇tanh〡shape(self: List[int]) -> List[int]:`
 - `def aten〇tanh〡dtype(self_rank_dtype: Tuple[int, int]) -> int:`
 
@@ -39,10 +39,10 @@ We will use the example of adding support for the `torch.aten.tanh` op.
 But in general, you will need to write the function and test it
 (see the comments about "Shape, dtype, and decomposition function
 testing infrastructure" in `testing_framework.py`). New shape
-functions should be added upstream following the example of [this PR](https://github.com/pytorch/pytorch/pull/76889), 
-though it can be useful to iterate locally in `abstract_interp_lib_gen.py` 
+functions should be added upstream following the example of [this PR](https://github.com/pytorch/pytorch/pull/76889),
+though it can be useful to iterate locally in `abstract_interp_lib_gen.py`
 first.
- 
+
 Similarly, dtype functions should ideally just be a call to the helper
 `promote_dtypes` defined in `library_generator.py`. However, some ops will
 require some extra logic to calculate the right result types. While dtypes
````
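For context on the two signatures in the hunk above, here is a minimal sketch of what the `torch.aten.tanh` functions can look like in `abstract_interp_lib_gen.py`, assuming the usual pattern for elementwise ops; the bodies are illustrative, not the upstream implementations:

```python
from typing import List, Tuple

# Sketch only: tanh is elementwise, so the result shape equals the input
# shape and the result dtype equals the input dtype.
def aten〇tanh〡shape(self: List[int]) -> List[int]:
    # Return a copy so the caller's shape list is never aliased.
    return list(self)

def aten〇tanh〡dtype(self_rank_dtype: Tuple[int, int]) -> int:
    # The input is described by a (rank, dtype) tuple; pass the dtype through.
    self_rank, self_dtype = self_rank_dtype
    return self_dtype
```

Ops that broadcast or promote types need real logic here; as the doc notes, those dtype functions should lean on helpers such as `promote_dtypes` rather than hand-rolled rules.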

docs/architecture.md (2 additions & 2 deletions)

````diff
@@ -442,5 +442,5 @@ characteristics.
 
 ### Presentations and Talks
 
-* 2021-10-07: MLIR ODM: Introduction to Torch-MLIR. ([recording](https://www.youtube.com/watch?v=QbNkex-gizs) and [slides](https://docs.google.com/presentation/d/1ZhzfE4EK6XV7AdQTYicrsE_OYjkER_yiB0vBeszRfzY/edit#slide=id.gf56404f79c_1_55)) 
-* 2022-08-20: Overview of Torch-MLIR passes. ([recording](https://www.youtube.com/watch?v=ZpwlVxsD9_U) and [slides](https://drive.google.com/file/d/1ZSlk1HGttRuVhJSxtP6spWt_hxClit2T/view)) 
+* 2021-10-07: MLIR ODM: Introduction to Torch-MLIR. ([recording](https://www.youtube.com/watch?v=QbNkex-gizs) and [slides](https://docs.google.com/presentation/d/1ZhzfE4EK6XV7AdQTYicrsE_OYjkER_yiB0vBeszRfzY/edit#slide=id.gf56404f79c_1_55))
+* 2022-08-20: Overview of Torch-MLIR passes. ([recording](https://www.youtube.com/watch?v=ZpwlVxsD9_U) and [slides](https://drive.google.com/file/d/1ZSlk1HGttRuVhJSxtP6spWt_hxClit2T/view))
````

docs/importers/onnx_importer.md (11 additions & 11 deletions)

````diff
@@ -11,8 +11,8 @@ for the reference importer which complies with the rules below.
 With the exception of certain special or complicated ONNX operators, most
 are relatively straight-forward to map, following this general procedure:
 
-* Plan the ops you wish to support by consulting the 
-  [ONNX operator database](https://onnx.ai/onnx/operators/). 
+* Plan the ops you wish to support by consulting the
+  [ONNX operator database](https://onnx.ai/onnx/operators/).
   * This database has detailed diffs wrt different support versions but
     at the level of detail we operate, most version diffs are inconsequential
     and just require a bit more pattern support.
@@ -24,7 +24,7 @@ are relatively straight-forward to map, following this general procedure:
   corresponding with the alphabetic sort of the op and add a conversion.
 * Generate successful test cases:
   * All `onnx_importer.py` tests are dumped to the test temp dir (success
-    or failure). This is typically located under 
+    or failure). This is typically located under
     `tools/torch-mlir/test/python/onnx_importer/Output`. The `.mlir` files
     under there should provide good variants to drive lit test coverage of
     conversion.
@@ -34,25 +34,25 @@ are relatively straight-forward to map, following this general procedure:
   * There are often many variants of tests for checking conformance of
     different historic ONNX encodings, but these are often not load bearing
     at the MLIR level.
-  * Pick a handful of test cases and add them to 
+  * Pick a handful of test cases and add them to
     `test/Conversion/TorchOnnxToTorch/simple_ops_x_to_y.mlir` corresponding to
     an alphabetic breakdown. At this time, ignore tests that are not exercising
     useful differences in the pattern implementations.
-* (Optionally) Use `torch-mlir-opt` to validate the outputs of the new op. 
-  First, build the project using 
+* (Optionally) Use `torch-mlir-opt` to validate the outputs of the new op.
+  First, build the project using
   `cmake --build build --target tools/torch-mlir/all`. This will generate
   the conversion binary, `torch-mlir-opt`. Then call `torch-mlir-opt` with
   the MLIR pass `convert-torch-onnx-to-torch`:
   ```
   build/bin/torch-mlir-opt -convert-torch-onnx-to-torch \
   -split-input-file [DESIRED_ONNX_FILE].mlir
-  ``` 
+  ```
 * Generate failure test cases:
   * Some ops have forms that do not (easily) map to torch-mlir. If you leave
     an op under-implemented, add a failing test case to
     `test/Conversion/TorchOnnxToTorch/unsupported_simple_ops.mlir`.
-  * Optional but recommended: Use your test case files to fuzz against the 
-    torch-mlir backend of your choice by running a backend conversion pipeline 
+  * Optional but recommended: Use your test case files to fuzz against the
+    torch-mlir backend of your choice by running a backend conversion pipeline
     and fixing any crashes/issues.
 * Send a patch with your changes.
 
````
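As a companion to the test-case steps in the hunk above, one way to produce input for `onnx_importer.py` is to hand-build a tiny ONNX model with the `onnx` Python helpers; a minimal sketch, where the `Tanh` op choice and file name are illustrative:

```python
# Sketch: build a one-op ONNX model that can be fed through the importer.
import onnx
from onnx import TensorProto, helper

node = helper.make_node("Tanh", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "tanh_test",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 5])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 5])],
)
model = helper.make_model(graph)
onnx.checker.check_model(model)  # sanity-check the proto before importing
onnx.save(model, "tanh_test.onnx")
```

The resulting `.mlir` dump from the importer can then seed the lit tests in `simple_ops_x_to_y.mlir` and drive `torch-mlir-opt -convert-torch-onnx-to-torch` as described above.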
````diff
@@ -115,7 +115,7 @@ not yet implemented.
 The `IsolatedFromAbove` parent of the ops can contain the following
 metadata:
 
-* `torch.onnx_meta.ir_version`: 64bit `IntegerAttr` corresponding to 
+* `torch.onnx_meta.ir_version`: 64bit `IntegerAttr` corresponding to
   `ModelProto.ir_version`.
 * `torch.onnx_meta.producer_name`: `StringAttr` corresponding to
   `ModelProto.producer_name`.
@@ -135,7 +135,7 @@ are only minor variations of an op. Major variations should use
 
 ### Special op forms
 
-Certain ONNX operators map to different structural components of 
+Certain ONNX operators map to different structural components of
 torch-mlir's representation:
 
 * `ConstantOfShape`: Mapped to `torch.vtensor.literal` with
````

docs/ltc_backend.md (2 additions & 2 deletions)

````diff
@@ -103,7 +103,7 @@ At some point, the tensors will be synced in order to execute the computation --
 >>> torch._lazy.mark_step()
 ```
 
-This triggers a call to `LazyGraphExecutor::SyncLiveTensorsGraph` somewhere in the guts of LTC, which collects all the `TorchMlirNode`s (technically `torch::lazy::Node`s at this point) from the current trace and 
+This triggers a call to `LazyGraphExecutor::SyncLiveTensorsGraph` somewhere in the guts of LTC, which collects all the `TorchMlirNode`s (technically `torch::lazy::Node`s at this point) from the current trace and
 creates an instance of `TorchMlirLoweringContext`. Here, the `TorchMlirNode`s are lowered to JIT via `mlir_node_lowering.cpp` and inserted into a `jit::Graph`.
 
 Next, `TorchMlirLoweringContext::Build` is executed and the final `jit::Graph` is sent to `torch_mlir::importJitFunctionAsFuncOp` to generate MLIR using the existing infrastructure from Torch-MLIR.
````
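For orientation, a minimal sketch of the user-side flow that reaches `SyncLiveTensorsGraph`, assuming an LTC backend has already been registered for the `lazy` device:

```python
# Sketch: operations on a lazy-device tensor are only traced; mark_step()
# forces the sync that lands in LazyGraphExecutor::SyncLiveTensorsGraph.
import torch
import torch._lazy

x = torch.randn(1, 5, device="lazy")  # assumes an LTC backend is registered
y = torch.tanh(x)                     # traced, not yet executed
torch._lazy.mark_step()               # sync point: lower trace, compile, run
print(y.cpu())                        # materialize the result on the host
```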
````diff
@@ -121,7 +121,7 @@ Finally, the compiled computation is sent to `TorchMlirBackendImpl::ExecuteComputation`
 
 ## Implementing a custom backend
 
-A reference implementation of a custom backend is available [here](../python/torch_mlir/csrc/reference_lazy_backend/). 
+A reference implementation of a custom backend is available [here](../python/torch_mlir/csrc/reference_lazy_backend/).
 All the work involved with generating MLIR is handled in the base LTC backend, so vendors only need to worry about implementing `Compile`, `ExecuteComputation`, and some other minor methods to interface with the device.
 
 A pybind is needed to invoke C++ code to register the autogen PyTorch kernels and the custom backend itself.
````

docs/ltc_examples.md (3 additions & 3 deletions)

````diff
@@ -33,18 +33,18 @@ Received 1 arguments, and returned 2 results during ExecuteCompile!
 
 Results: tensor([[0.7616, 0.9640, 0.9951, 0.9993, 0.9999]], device='lazy:0')
 
-JIT Graph: 
+JIT Graph:
 graph(%p0 : Float(1, 5)):
   %1 : Float(1, 5) = aten::tanh(%p0)
   return (%p0, %1)
 
-MLIR: 
+MLIR:
 func.func @graph(%arg0: !torch.vtensor<[1,5],f32>) -> (!torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>) {
   %0 = torch.aten.tanh %arg0 : !torch.vtensor<[1,5],f32> -> !torch.vtensor<[1,5],f32>
   return %arg0, %0 : !torch.vtensor<[1,5],f32>, !torch.vtensor<[1,5],f32>
 }
 
-Input/Output Alias Mapping: 
+Input/Output Alias Mapping:
 Output: 0 -> Input param: 0
 
 In Mark Step: true
````
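Debug output like the above can be produced by a few lines of Python; a minimal sketch, where the backend module name and `_initialize()` call are assumptions standing in for the reference backend's actual pybind entry point:

```python
# Sketch (names hedged): drive tanh through an LTC reference backend and
# sync, which emits the JIT graph / MLIR / alias-mapping dump shown above.
import torch
import torch._lazy

import reference_lazy_backend        # hypothetical pybind module name
reference_lazy_backend._initialize()  # hypothetical registration call

x = torch.randn(1, 5, device="lazy")
y = torch.tanh(x)
torch._lazy.mark_step()  # sync: compile, then ExecuteComputation
print("Results:", y.cpu())
```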
