
Commit 5b9369e

Fix typos according to reviewdog report. (#21335)
Description: Fix typos based on reviewdog report, with some exceptions/corrections.
1 parent 4e75605 commit 5b9369e

File tree: 189 files changed (+380 -360 lines)


.gitattributes (+1 -1)

@@ -1,4 +1,4 @@
-# This sets the default behaviour, overriding core.autocrlf
+# This sets the default behavior, overriding core.autocrlf
 * text=auto

 # All source files should have unix line-endings in the repository,

ThirdPartyNotices.txt (+1 -1)

@@ -4820,7 +4820,7 @@ SOFTWARE.

 ----------------------------------------------------------------------------

-This is the MIT/Expat Licence. For more information see:
+This is the MIT/Expat License. For more information see:

 1. http://www.opensource.org/licenses/mit-license.php

cmake/onnxruntime.cmake (+1 -1)

@@ -150,7 +150,7 @@ endif()

 if(CMAKE_SYSTEM_NAME STREQUAL "Android" AND onnxruntime_MINIMAL_BUILD)
   # target onnxruntime is a shared library, the dummy __cxa_demangle is only attach to it to avoid
-  # affecting downstream ort library users with the behaviour of dummy __cxa_demangle. So the dummy
+  # affecting downstream ort library users with the behavior of dummy __cxa_demangle. So the dummy
   # __cxa_demangle must not expose to libonnxruntime_common.a. It works as when the linker is
   # creating the DSO, our dummy __cxa_demangle always comes before libc++abi.a so the
   # __cxa_demangle in libc++abi.a is discarded, thus, huge binary size reduction.

cmake/patches/composable_kernel/Fix_Clang_Build.patch (+1 -1)

@@ -44,7 +44,7 @@ index c23746e7f..bc326c8b5 100644
 find_package(HIP REQUIRED)
 # Override HIP version in config.h, if necessary.
 @@ -269,12 +248,6 @@ if( DEFINED CK_OVERRIDE_HIP_VERSION_PATCH )
-   message(STATUS "CK_HIP_VERSION_PATCH overriden with ${CK_OVERRIDE_HIP_VERSION_PATCH}")
+   message(STATUS "CK_HIP_VERSION_PATCH overridden with ${CK_OVERRIDE_HIP_VERSION_PATCH}")
 endif()
 message(STATUS "Build with HIP ${HIP_VERSION}")
 -link_libraries(hip::device)

csharp/ApiDocs/_exported_templates/default/partials/title.tmpl.partial (+1 -1)

@@ -39,7 +39,7 @@ Event {{name.0.value}}
 Operator {{name.0.value}}
 {{/inOperator}}
 {{#inEii}}
-Explict Interface Implementation {{name.0.value}}
+Explicit Interface Implementation {{name.0.value}}
 {{/inEii}}
 {{#inVariable}}
 Variable {{name.0.value}}

dockerfiles/README.md (+2 -2)

@@ -32,7 +32,7 @@
 docker run -it onnxruntime-source
 ```

-The docker file supports both x86_64 and ARM64(aarch64). You may use docker's "--platform" parameter to explictly specify which CPU architecture you want to build. For example:
+The docker file supports both x86_64 and ARM64(aarch64). You may use docker's "--platform" parameter to explicitly specify which CPU architecture you want to build. For example:

 ```bash
 docker build --platform linux/arm64/v8 -f Dockerfile.source
@@ -274,7 +274,7 @@ Note: You may add --use_tensorrt and --tensorrt_home options if you wish to use
 Note: Resulting Docker image will have ONNX Runtime installed in /usr, and ONNX Runtime wheel copied to /onnxruntime directory.
 Nothing else from ONNX Runtime source tree will be copied/installed to the image.

-Note: When running the container you built in Docker, please either use 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure NVIDIA runtime will be used and appropiate files mounted from host. Otherwise, CUDA libraries won't be found. You can also [set NVIDIA runtime as default in Docker](https://github.com/dusty-nv/jetson-containers#docker-default-runtime).
+Note: When running the container you built in Docker, please either use 'nvidia-docker' command instead of 'docker', or use Docker command-line options to make sure NVIDIA runtime will be used and appropriate files mounted from host. Otherwise, CUDA libraries won't be found. You can also [set NVIDIA runtime as default in Docker](https://github.com/dusty-nv/jetson-containers#docker-default-runtime).

 ## MIGraphX
 **Ubuntu 20.04, ROCm6.0, MIGraphX**

docs/python/notebooks/onnx-inference-byoc-gpu-cpu-aks.ipynb (+2 -2)

@@ -64,7 +64,7 @@
 "If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, please follow the [Azure ML configuration notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/configuration.ipynb) to set up your environment.\n",
 "\n",
 "### Install additional packages needed for this Notebook\n",
-"You need to install the popular plotting library matplotlib, the image manipulation library opencv, and the onnx library in the conda environment where Azure Maching Learning SDK is installed.\n",
+"You need to install the popular plotting library matplotlib, the image manipulation library opencv, and the onnx library in the conda environment where Azure Machine Learning SDK is installed.\n",
 "\n",
 "```\n",
 "(myenv) $ pip install matplotlib onnx opencv-python\n",
@@ -79,7 +79,7 @@
 "source": [
 "## 1. Obtain a model from the ONNX Model Zoo\n",
 "\n",
-"For more information on the Facial Emotion Recognition (FER+) model, you can explore the notebook explaning how to deploy [FER+ with ONNX Runtime on an ACI Instance](onnx-inference-facial-expression-recognition-deploy.ipynb)."
+"For more information on the Facial Emotion Recognition (FER+) model, you can explore the notebook explaining how to deploy [FER+ with ONNX Runtime on an ACI Instance](onnx-inference-facial-expression-recognition-deploy.ipynb)."
 ]
 },
 {

include/onnxruntime/core/platform/EigenNonBlockingThreadPool.h (+1 -1)

@@ -1129,7 +1129,7 @@ class ThreadPoolTempl : public onnxruntime::concurrency::ExtendedThreadPoolInter
 //
 // Ensure that the ThreadPoolParallelSection has sufficient workers to
 // execute a loop with degree of parallelism n. We track the number
-// of workers already avaiable to the parallel section, prior to
+// of workers already available to the parallel section, prior to
 // submitting tasks to the work queues to make up the total.
 //
 // Each worker will call in to worker_fn(idx) with a per-worker thread

include/onnxruntime/core/providers/cuda/cuda_context.h (+8 -4)

@@ -53,21 +53,25 @@ struct CudaContext : public CustomOpContext {
     cudnn_conv_use_max_workspace = FetchResource<bool>(kernel_ctx, CudaResource::cudnn_conv_use_max_workspace_t);

     cudnn_conv1d_pad_to_nc1d = FetchResource<bool>(kernel_ctx, CudaResource::cudnn_conv1d_pad_to_nc1d_t);
-    enable_skip_layer_norm_strict_mode = FetchResource<bool>(kernel_ctx, CudaResource::enable_skip_layer_norm_strict_mode_t);
+    enable_skip_layer_norm_strict_mode = FetchResource<bool>(
+        kernel_ctx, CudaResource::enable_skip_layer_norm_strict_mode_t);
     prefer_nhwc = FetchResource<bool>(kernel_ctx, CudaResource::prefer_nhwc_t);
     use_tf32 = FetchResource<bool>(kernel_ctx, CudaResource::use_tf32_t);
   }

   template <typename T>
   T FetchResource(const OrtKernelContext& kernel_ctx, CudaResource resource_type) {
     if constexpr (sizeof(T) > sizeof(void*)) {
-      ORT_CXX_API_THROW("void* is not large enough to hold resource type: " + std::to_string(resource_type), OrtErrorCode::ORT_INVALID_ARGUMENT);
+      ORT_CXX_API_THROW("void* is not large enough to hold resource type: " + std::to_string(resource_type),
+                        OrtErrorCode::ORT_INVALID_ARGUMENT);
     }
     const auto& ort_api = Ort::GetApi();
     void* resource = {};
-    OrtStatus* status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_CUDA_RESOUCE_VERSION, resource_type, &resource);
+    OrtStatus* status = ort_api.KernelContext_GetResource(
+        &kernel_ctx, ORT_CUDA_RESOURCE_VERSION, resource_type, &resource);
     if (status) {
-      ORT_CXX_API_THROW("Failed to fetch cuda ep resource, resouce type: " + std::to_string(resource_type), OrtErrorCode::ORT_RUNTIME_EXCEPTION);
+      ORT_CXX_API_THROW("Failed to fetch cuda ep resource, resource type: " + std::to_string(resource_type),
+                        OrtErrorCode::ORT_RUNTIME_EXCEPTION);
     }
     T t = {};
     memcpy(&t, &resource, sizeof(T));

include/onnxruntime/core/providers/cuda/cuda_resource.h (+1 -1)

@@ -3,7 +3,7 @@

 #include "core/providers/resource.h"

-#define ORT_CUDA_RESOUCE_VERSION 3
+#define ORT_CUDA_RESOURCE_VERSION 3

 enum CudaResource : int {
   cuda_stream_t = cuda_resource_offset,  // 10000
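
For context, the renamed ORT_CUDA_RESOURCE_VERSION macro is the version value a custom op passes to KernelContext_GetResource when asking the CUDA execution provider for a shared resource such as its stream. Below is a minimal sketch of that call, mirroring the FetchResource pattern in cuda_context.h above; the free-standing function name and the exact include paths are illustrative assumptions, not part of this commit.

```cpp
// Sketch: fetch the CUDA stream the EP scheduled this kernel on.
// Assumes the ONNX Runtime C++ headers and CUDA runtime are available to the custom-op build.
#include <cuda_runtime.h>
#include "onnxruntime_cxx_api.h"
#include "core/providers/cuda/cuda_resource.h"

void ComputeOnProviderStream(OrtKernelContext* kernel_ctx) {
  const auto& ort_api = Ort::GetApi();
  void* resource = nullptr;
  // The version argument must match ORT_CUDA_RESOURCE_VERSION from the header above.
  OrtStatus* status = ort_api.KernelContext_GetResource(
      kernel_ctx, ORT_CUDA_RESOURCE_VERSION, CudaResource::cuda_stream_t, &resource);
  if (status) {
    ORT_CXX_API_THROW("failed to fetch cuda stream", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
  }
  cudaStream_t stream = reinterpret_cast<cudaStream_t>(resource);
  // ... enqueue kernels on `stream` so they run in order with the rest of the model ...
  (void)stream;
}
```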

include/onnxruntime/core/providers/rocm/rocm_context.h (+6 -3)

@@ -23,21 +23,24 @@ struct RocmContext : public CustomOpContext {
     void* resource = {};
     OrtStatus* status = nullptr;

-    status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::hip_stream_t, &resource);
+    status = ort_api.KernelContext_GetResource(
+        &kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::hip_stream_t, &resource);
     if (status) {
       ORT_CXX_API_THROW("failed to fetch hip stream", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
     }
     hip_stream = reinterpret_cast<hipStream_t>(resource);

     resource = {};
-    status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::miopen_handle_t, &resource);
+    status = ort_api.KernelContext_GetResource(
+        &kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::miopen_handle_t, &resource);
     if (status) {
       ORT_CXX_API_THROW("failed to fetch miopen handle", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
     }
     miopen_handle = reinterpret_cast<miopenHandle_t>(resource);

     resource = {};
-    status = ort_api.KernelContext_GetResource(&kernel_ctx, ORT_ROCM_RESOUCE_VERSION, RocmResource::rocblas_handle_t, &resource);
+    status = ort_api.KernelContext_GetResource(
+        &kernel_ctx, ORT_ROCM_RESOURCE_VERSION, RocmResource::rocblas_handle_t, &resource);
     if (status) {
       ORT_CXX_API_THROW("failed to fetch rocblas handle", OrtErrorCode::ORT_RUNTIME_EXCEPTION);
     }

include/onnxruntime/core/providers/rocm/rocm_resource.h (+1 -1)

@@ -3,7 +3,7 @@

 #include "core/providers/resource.h"

-#define ORT_ROCM_RESOUCE_VERSION 1
+#define ORT_ROCM_RESOURCE_VERSION 1

 enum RocmResource : int {
   hip_stream_t = rocm_resource_offset,

include/onnxruntime/core/session/onnxruntime_c_api.h (+10 -9)

@@ -473,13 +473,13 @@ typedef struct OrtCUDAProviderOptions {

   /** \brief Enable TunableOp for using.
    * Set it to 1/0 to enable/disable TunableOp. Otherwise, it is disabled by default.
-   * This option can be overriden by environment variable ORT_CUDA_TUNABLE_OP_ENABLE.
+   * This option can be overridden by environment variable ORT_CUDA_TUNABLE_OP_ENABLE.
    */
   int tunable_op_enable;

   /** \brief Enable TunableOp for tuning.
    * Set it to 1/0 to enable/disable TunableOp tuning. Otherwise, it is disabled by default.
-   * This option can be overriden by environment variable ORT_CUDA_TUNABLE_OP_TUNING_ENABLE.
+   * This option can be overridden by environment variable ORT_CUDA_TUNABLE_OP_TUNING_ENABLE.
    */
   int tunable_op_tuning_enable;

@@ -562,13 +562,13 @@ typedef struct OrtROCMProviderOptions {

   /** \brief Enable TunableOp for using.
    * Set it to 1/0 to enable/disable TunableOp. Otherwise, it is disabled by default.
-   * This option can be overriden by environment variable ORT_ROCM_TUNABLE_OP_ENABLE.
+   * This option can be overridden by environment variable ORT_ROCM_TUNABLE_OP_ENABLE.
    */
   int tunable_op_enable;

   /** \brief Enable TunableOp for tuning.
    * Set it to 1/0 to enable/disable TunableOp tuning. Otherwise, it is disabled by default.
-   * This option can be overriden by environment variable ORT_ROCM_TUNABLE_OP_TUNING_ENABLE.
+   * This option can be overridden by environment variable ORT_ROCM_TUNABLE_OP_TUNING_ENABLE.
    */
   int tunable_op_tuning_enable;

@@ -2798,7 +2798,7 @@ struct OrtApi {
   * "initial_growth_chunk_size_bytes": (Possible) Size of the second allocation in the arena.
   * Only relevant if arena strategy is `kNextPowerOfTwo`. Use -1 to allow ORT to choose the default.
   * "max_power_of_two_extend_bytes": The maximum enxtend size if arena strategy is `kNextPowerOfTwo`.
-  * It is not an allocation limit, it is only a limit for extention when requested byte is less than the limit.
+  * It is not an allocation limit, it is only a limit for extension when requested byte is less than the limit.
   * When requested bytes is more than the limit, allocator will still return as requested.
   * Use -1 to allow ORT to choose the default 1GB for max_power_of_two_extend_bytes.
   * Ultimately, the allocation size is determined by the allocation memory request.

@@ -4467,13 +4467,14 @@ struct OrtApi {
   * E.g. a cuda stream or a cublas handle
   *
   * \param context - Kernel context
-  * \param resouce_version - Version of the resource
+  * \param resource_version - Version of the resource
   * \param resource_id - Type of resource
   * \param resource - A pointer to returned resource
   *
   * \since Version 1.16.
   */
-  ORT_API2_STATUS(KernelContext_GetResource, _In_ const OrtKernelContext* context, _In_ int resouce_version, _In_ int resource_id, _Outptr_ void** resource);
+  ORT_API2_STATUS(KernelContext_GetResource, _In_ const OrtKernelContext* context, _In_ int resource_version,
+                  _In_ int resource_id, _Outptr_ void** resource);

   /** \brief Set user logging function
   *

@@ -4528,10 +4529,10 @@ struct OrtApi {
   ORT_API2_STATUS(ShapeInferContext_GetAttribute, _In_ const OrtShapeInferContext* context, _In_ const char* attr_name, _Outptr_ const OrtOpAttr** attr);

   /**
-  * Set type and shape info of an ouput
+  * Set type and shape info of an output
   *
   * \param[in] context
-  * \param[in] index The index of the ouput
+  * \param[in] index The index of the output
   * \param[out] info Type shape info of the output
   *
   * \since Version 1.17.
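
As a usage note, the TunableOp flags documented above are plain int fields on the provider options structs. The following is a hedged, illustrative sketch of enabling them through the C++ wrapper API; it is not code from this commit, and the model path is a placeholder.

```cpp
#include "onnxruntime_cxx_api.h"

int main() {
  Ort::Env env;

  OrtCUDAProviderOptions cuda_options{};
  cuda_options.tunable_op_enable = 1;         // enable TunableOp
  cuda_options.tunable_op_tuning_enable = 1;  // allow tuning; ORT_CUDA_TUNABLE_OP_* env vars can override

  Ort::SessionOptions session_options;
  session_options.AppendExecutionProvider_CUDA(cuda_options);

  // "model.onnx" is a placeholder path for illustration only.
  Ort::Session session(env, ORT_TSTR("model.onnx"), session_options);
  return 0;
}
```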

include/onnxruntime/core/session/onnxruntime_lite_custom_op.h (+1 -1)

@@ -403,7 +403,7 @@ using Variadic = TensorArray;
 Note:
 OrtLiteCustomOp inherits from OrtCustomOp to bridge tween a custom func/struct and ort core.
 The lifetime of an OrtLiteCustomOp instance is managed by customer code, not ort, so:
-1. DO NOT cast OrtLiteCustomOp to OrtCustomOp and release since there is no virtual destructor in the hierachy.
+1. DO NOT cast OrtLiteCustomOp to OrtCustomOp and release since there is no virtual destructor in the hierarchy.
 2. OrtLiteCustomFunc and OrtLiteCustomStruct, as two sub-structs, can be released in form of OrtLiteCustomOp since all members are kept in the OrtLiteCustomOp,
    hence memory could still be recycled properly.
 Further, OrtCustomOp is a c struct bearing no v-table, so offspring structs are by design to be of zero virtual functions to maintain cast safety.

java/build.gradle (+1 -1)

@@ -54,7 +54,7 @@ java {
   targetCompatibility = JavaVersion.VERSION_1_8
 }

-// This jar tasks serves as a CMAKE signalling
+// This jar tasks serves as a CMAKE signaling
 // mechanism. The jar will be overwritten by allJar task
 jar {
 }

java/src/main/java/ai/onnxruntime/OnnxRuntime.java (+1 -1)

@@ -438,7 +438,7 @@ private static String mapLibraryName(String library) {
   /**
    * Extracts the providers array from the C API, converts it into an EnumSet.
    *
-   * <p>Throws IllegalArgumentException if a provider isn't recognised (note this exception should
+   * <p>Throws IllegalArgumentException if a provider isn't recognized (note this exception should
    * only happen during development of ONNX Runtime, if it happens at any other point, file an issue
    * on <a href="https://github.com/microsoft/onnxruntime">GitHub</a>).
    *

java/src/main/java/ai/onnxruntime/providers/package-info.java (+1 -1)

@@ -3,5 +3,5 @@
  * Licensed under the MIT License.
  */

-/** Classes for controlling the behaviour of ONNX Runtime Execution Providers. */
+/** Classes for controlling the behavior of ONNX Runtime Execution Providers. */
 package ai.onnxruntime.providers;

java/src/test/java/sample/ScoreMNIST.java (+1 -1)

@@ -242,7 +242,7 @@ public static void writeDataSKL(float[][] data, int[] indices, float[] values) {
   /**
    * Find the maximum probability and return it's index.
    *
-   * @param probabilities The probabilites.
+   * @param probabilities The probabilities.
    * @return The index of the max.
    */
   public static int pred(float[] probabilities) {

js/web/lib/onnxjs/backends/webgl/glsl-coordinate-lib.ts (+1 -1)

@@ -1234,7 +1234,7 @@ export class CoordsGlslLib extends GlslLib {
   }

   /**
-   * This is the main function to map from the given texture coordiantes (s,t)
+   * This is the main function to map from the given texture coordinates (s,t)
    * to logical indices for the output
    * There will only be one single variation of this
    * Also see coordsToOffset and offsetToIndices for input-specific versions

js/web/lib/onnxjs/backends/webgl/ops/pack.ts (+1 -1)

@@ -85,7 +85,7 @@ function getOutOfBoundsCondition(rank: number, shape: readonly number[], dims: s
 }

 /**
- * code snippet to sample input texture with output coordiantes
+ * code snippet to sample input texture with output coordinates
  */
 function getOutput(shape: readonly number[], dims: string[]): string {
   const rank = shape.length;

onnxruntime/contrib_ops/cpu/attnlstm/deep_cpu_attn_lstm.h (+1 -1)

@@ -19,7 +19,7 @@ using onnxruntime::rnn::detail::Direction;
 using onnxruntime::rnn::detail::MakeDirection;

 // The class represents DeepCPU implementation of a long short term memory (LSTM) plus a Bahdanau Attention wraper.
-// The equivilent python usage could be checked int the corresponding op test directory, attention_lstm_data_gen.py.
+// The equivalent python usage could be checked int the corresponding op test directory, attention_lstm_data_gen.py.
 // Also please note that detail implementation re-used lot of code from current ONNXRuntime LSTM operator, refactor
 // is needed in future if this is become part of ONNX.
 class DeepCpuAttnLstmOp final : public OpKernel {

onnxruntime/contrib_ops/cpu/transformers/sampling_cpu_helper.h (+1 -1)

@@ -152,7 +152,7 @@ Status Sample(AllocatorPtr& allocator,
                1,
                generator,
                *sampled_idx));
-  // TODO: update presense_mask()
+  // TODO: update presence_mask()
 #ifdef DEBUG_GENERATION
   dumper->Print("sampled_idx", *sampled_idx);
 #endif

onnxruntime/core/codegen/common/common.cc (+1 -1)

@@ -159,7 +159,7 @@ std::unique_ptr<ComputeCapability> ToCapacity(const onnxruntime::GraphViewer& gr
   ORT_THROW_IF_ERROR(node.ForEachWithIndex(node.ImplicitInputDefs(), process_input_fn));

   // Handle outouts
-  // two cases are considerd as outputs
+  // two cases are considered as outputs
   // 1. Output NodeArg is not used by any Node
   // 2. Output NodeArg is used by at least one Node out of this subgraph.
   // Note a NodeArg can be used by Nodes in and out of the subgraph at the same time.

onnxruntime/core/codegen/mti/common.h (+1 -1)

@@ -8,7 +8,7 @@

 #define MTI_ASSERT(condition) \
   if (!(condition)) { \
-    std::string error_msg = "Not satsified: " #condition \
+    std::string error_msg = "Not satisfied: " #condition \
                             ": line " + \
                             std::to_string(__LINE__) + \
                             " in file " + std::string(__FILE__) + "\n"; \

onnxruntime/core/codegen/passes/scheduler/schedule_utils.cc (+2 -2)

@@ -74,7 +74,7 @@ bool ShouldTryVectorization(
 // Check the schedule of tensor
 // If it is not scheduled, try to vectorize it.
 // Note TryVectorization has to use with compute_root.
-// Therefore, there is a safty check of tensor's schedule
+// Therefore, there is a safety check of tensor's schedule
 bool TryVectorization(
     const tvm::Tensor& tensor,
     int64_t natural_vector_size,
@@ -124,7 +124,7 @@ bool TryVectorization(
 // Check the schedule of tensor
 // If it is not scheduled, try to add compute_inline on it.
 // Note TryInlineSchedule cannot be used with compute_root.
-// Therefore, there is a safty check of tensor's schedule.
+// Therefore, there is a safety check of tensor's schedule.
 bool TryInlineSchedule(
     const tvm::Tensor& tensor,
     ScheduleContext& ctx) {

onnxruntime/core/codegen/passes/scheduler/schedule_utils.h (+2 -2)

@@ -34,7 +34,7 @@ bool ShouldTryVectorization(
 // Check the schedule of tensor
 // If it is not scheduled, try to vectorize it.
 // Note TryVectorization has to use with compute_root.
-// Therefore, there is a safty check of tensor's schedule
+// Therefore, there is a safety check of tensor's schedule
 bool TryVectorization(
     const tvm::Tensor& tensor,
     int64_t natural_vector_size,
@@ -43,7 +43,7 @@ bool TryVectorization(
 // Check the schedule of tensor
 // If it is not scheduled, try to add compute_inline on it.
 // Note TryInlineSchedule cannot be used with compute_root.
-// Therefore, there is a safty check of tensor's schedule.
+// Therefore, there is a safety check of tensor's schedule.
 bool TryInlineSchedule(
     const tvm::Tensor& tensor,
     ScheduleContext& ctx);

onnxruntime/core/codegen/passes/scheduler/tvm_schedule_builder.cc (+1 -1)

@@ -39,7 +39,7 @@ void TVMScheduleBuilder::DumpAllSchedulers() const {

   d->ForEach([&stream](const std::string& key, Scheduler* op) {
     stream << "Key " << key
-           << ", Creater " << op->Name() << std::endl;
+           << ", Creator " << op->Name() << std::endl;
   });

   ++count;

onnxruntime/core/codegen/passes/weight_layout/weight_layout.h (+1 -1)

@@ -13,7 +13,7 @@ namespace tvm_codegen {

 using CoordTransFunc = std::function<tvm::Array<tvm::Expr>(const tvm::Array<tvm::Expr>&)>;

-// WeightLayout is data layout trasnformer for weight/initializer
+// WeightLayout is data layout transformer for weight/initializer
 class WeightLayout {
  public:
   // Static function to return unique string as a key
