
Commit bdcfa01

Yulv-git and driazati authored

[Fix] Fix some typos (apache#11503)

Fix some typos in src/. Co-authored-by: driazati <[email protected]>

1 parent 9d6039b commit bdcfa01

File tree

84 files changed: +113, -115 lines changed


NEWS.md

+2 -2

@@ -1262,7 +1262,7 @@ The community also continues to bring high quality improvements to the existing
 * Fixed div by zero core dump. Fixed rounding intrinsics on int crash #5026
 * Test case modified for int type #5012
 * Bug Fix for ARM CPUs. Lower strict assumption. #5063
-* Triage the testcases to fit the the new namespaces #5071
+* Triage the testcases to fit the new namespaces #5071
 * Add colors to `compute_at` edges and thread/block indices. #5111
 * Temporary fix to the stack overflow issue in autotvm task extraction #5019
 * Fix compilation of If-Elses #5040

@@ -2223,7 +2223,7 @@ Rust language support in TVM includes two parts. 1. The frontend wraps the curre
 ### Bug Fixes
 * [RELAY] Fix `get_int_tuple`. (#2691)
 * [ARITH] Select support for integer set analysis. (#2687)
-* [Relay] Fix error in ANF (too agressively inline atomic expression and create free variable). (#2665)
+* [Relay] Fix error in ANF (too aggressively inline atomic expression and create free variable). (#2665)
 * [Hybrid Script] Fix name conflict and attached scope problem. (#2649)
 * [Relay] Fix ANF for reference and pattern matching. (#2637)
 * [Relay] Fix fusion bug when call symbol that is not an operator. (#2630)

apps/ios_rpc/tvmrpc/TVMRuntime.mm

+1 -1

@@ -31,7 +31,7 @@
 #include <../../../src/runtime/file_utils.h>

 #if defined(USE_CUSTOM_DSO_LOADER) && USE_CUSTOM_DSO_LOADER == 1
-// internal TVM header to achive Library class
+// internal TVM header to achieve Library class
 #include <../../../src/runtime/library_module.h>
 #include <custom_dlfcn.h>
 #endif

apps/topi_recipe/README.md

+1 -1

@@ -26,7 +26,7 @@ and optimizing tvm generated kernels. The goal:

 ## Guidelines
 - Use numpy-style naming convention for known ops
-- Seperate operator declaration from schedule when possible.
+- Separate operator declaration from schedule when possible.
   - This can be inconvenient but enables more general scheduling across ops.
   - We can always recover the tensors from its outputs by traversing the tree.
 - Deliberately assert the requirements
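
The "separate declaration from schedule" guideline is TVM's core compute/schedule split. A minimal sketch of the pattern; the doubling op here is illustrative, not from the recipe:

```python
import tvm
from tvm import te

# Declaration: describe only what is computed.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.compute((n,), lambda i: A[i] * 2.0, name="B")

# Schedule: decide separately how it is computed. The declaration is
# reusable across schedules, and the inputs are recoverable from B by
# traversing B.op.input_tensors.
s = te.create_schedule(B.op)
s[B].parallel(B.op.axis[0])
```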

cmake/modules/HexagonSDK.cmake

+1 -1

@@ -44,7 +44,7 @@ function(_check_all_paths_exist _paths _output_variable)
     if(_out_path)
       list(APPEND _out_paths "${_out_path}")
     else()
-      set_parent(${_ouput_variable} "${_path}-NOTFOUND")
+      set_parent(${_output_variable} "${_path}-NOTFOUND")
       return()
     endif()
   endforeach()

docs/arch/pass_infra.rst

+1 -1

@@ -51,7 +51,7 @@ scheme through `Sequential`_ and `Block`_, respectively. With such constructs,
 these modern frameworks are able to conveniently add modules/layers to their
 containers and build up neural networks easily.

-The design of the Relay pass infra is largely inspired by the the hierarchical
+The design of the Relay pass infra is largely inspired by the hierarchical
 pass manager used in LLVM and the block-style containers used in the popular
 deep learning frameworks. The major goals of the pass infra include:

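For context, `Sequential` is exposed under `tvm.transform`; a minimal sketch of composing Relay passes in the block-style manner the doc describes (the pass selection and module contents are illustrative):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1, 8), dtype="float32")
mod = tvm.IRModule.from_expr(relay.nn.relu(x + x))

# Passes are added to the container much like layers to a Block/Sequential.
seq = tvm.transform.Sequential(
    [
        relay.transform.InferType(),
        relay.transform.FoldConstant(),
    ]
)
with tvm.transform.PassContext(opt_level=3):
    mod = seq(mod)
```
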
docs/arch/security.rst

+1 -1

@@ -28,7 +28,7 @@ We strongly encourage folks to report such problems to our private security mail

 Please note that the security mailing list should only be used for reporting undisclosed security vulnerabilities and managing the process of fixing such vulnerabilities. We cannot accept regular bug reports or other queries at this address. All mail sent to this address that does not relate to an undisclosed security problem in our source code will be ignored.
 Questions about: if a vulnerability applies to your particular application obtaining further information on a published vulnerability availability of patches
-and/or new releases should be addressed to to the user discuss forum.
+and/or new releases should be addressed to the user Discuss forum.

 The private security mailing address is: `[email protected] <[email protected]>`_.
 Feel free to consult the `Apache Security guide <https://www.apache.org/security/>`_.

docs/how_to/deploy/index.rst

+1 -1

@@ -70,7 +70,7 @@ After you get the TVM runtime library, you can link the compiled library

 A model (optimized or not by TVM) can be cross compiled by TVM for
 different architectures such as ``aarch64`` on a ``x64_64`` host. Once the model
-is cross compiled it is neccessary to have a runtime compatible with the target
+is cross compiled it is necessary to have a runtime compatible with the target
 architecture to be able to run the cross compiled model.

docs/index.rst

+1 -1

@@ -18,7 +18,7 @@
 Apache TVM Documentation
 ========================

-Welcome to the the documentation for Apache TVM, a deep learning compiler that
+Welcome to the documentation for Apache TVM, a deep learning compiler that
 enables access to high-performance machine learning anywhere for everyone.
 TVM's diverse community of hardware vendors, compiler engineers and ML
 researchers work together to build a unified, programmable software stack, that

docs/reference/langref/hybrid_script.rst

+1 -1

@@ -110,7 +110,7 @@ In HalideIR, loops have in total 4 types: ``serial``, ``unrolled``, ``parallel``

 Here we use ``range`` aka ``serial``, ``unroll``, ``parallel``, and ``vectorize``,
 these **4** keywords to annotate the corresponding types of for loops.
-The the usage is roughly the same as Python standard ``range``.
+The usage is roughly the same as Python standard ``range``.

 Besides all the loop types supported in Halide, ``const_range`` is supported for some specific conditions.
 Sometimes, ``tvm.container.Array`` is desired to pass as an argument, but in TVM-HalideIR, there is no
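
A sketch of the loop annotations this passage describes, assuming the standard hybrid-script builtins `output_tensor` and `parallel`:

```python
from tvm.te import hybrid

@hybrid.script
def vector_add(a, b):
    c = output_tensor(a.shape, a.dtype)
    # `parallel` is called just like `range`, but annotates the loop as
    # parallel; `unroll` and `vectorize` mark their loop types the same way.
    for i in parallel(a.shape[0]):
        c[i] = a[i] + b[i]
    return c
```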

gallery/how_to/optimize_operators/opt_gemm.py

+1 -1

@@ -346,7 +346,7 @@
 ###################################################################################################
 # Parallel
 # --------
-# Futhermore, we can also utilize multi-core processors to do the thread-level parallelization.
+# Furthermore, we can also utilize multi-core processors to do the thread-level parallelization.

 s = te.create_schedule(C.op)

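The step this comment introduces marks an axis as parallel. A self-contained sketch, with a stand-in matmul for the tutorial's C:

```python
import tvm
from tvm import te

# Stand-in for the tutorial's C = A @ B.
M = N = K = 1024
k = te.reduce_axis((0, K), "k")
A = te.placeholder((M, K), name="A")
B = te.placeholder((K, N), name="B")
C = te.compute((M, N), lambda x, y: te.sum(A[x, k] * B[k, y], axis=k), name="C")

s = te.create_schedule(C.op)
# Thread-level parallelization: spread the outer row loop across cores.
s[C].parallel(C.op.axis[0])
```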
include/tvm/ir/type.h

+1 -1

@@ -425,7 +425,7 @@ class FuncTypeNode : public TypeNode {
   Array<TypeVar> type_params;
   /*!
    * \brief potential constraint the type need to obey
-   * \note this field is reserved for futher purposes.
+   * \note this field is reserved for further purposes.
    */
   Array<TypeConstraint> type_constraints;


include/tvm/meta_schedule/cost_model.h

+1 -1

@@ -105,7 +105,7 @@ class PyCostModelNode : public CostModelNode {
    * \brief Predict the running results of given measure candidates.
    * \param context The tuning context.
    * \param candidates The measure candidates.
-   * \param p_addr The address to save the the estimated running results.
+   * \param p_addr The address to save the estimated running results.
    */
   using FPredict = runtime::TypedPackedFunc<void(const TuneContext&, const Array<MeasureCandidate>&,
                                                  void* p_addr)>;

include/tvm/relay/transform.h

+1 -1

@@ -125,7 +125,7 @@ TVM_DLL Pass FoldConstant(bool fold_qnn = false);
 TVM_DLL Pass SplitArgs(int max_function_args);

 /*!
- * \brief Fuse operations into expr into seperate functions.
+ * \brief Fuse operations into expr into separate functions.
  *
  * \param fuse_opt_level Optimization level. If it is -1 it will be inferred from pass context.
  *
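
The pass is also exposed in Python as `relay.transform.FuseOps`; a minimal invocation sketch (the module contents are illustrative):

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(4,), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], relay.exp(relay.add(x, x))))

# Infer types first, then fuse operations into separate fused functions.
mod = relay.transform.InferType()(mod)
mod = relay.transform.FuseOps(fuse_opt_level=2)(mod)
```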

include/tvm/runtime/c_backend_api.h

+1 -1

@@ -40,7 +40,7 @@ extern "C" {
 * \param args The arguments
 * \param type_codes The type codes of the arguments
 * \param num_args Number of arguments.
-* \param out_ret_value The output value of the the return value.
+* \param out_ret_value The output value of the return value.
 * \param out_ret_tcode The output type code of the return value.
 * \param resource_handle Pointer to associated resource.
 *

include/tvm/runtime/crt/module.h

+1 -1

@@ -42,7 +42,7 @@ typedef struct TVMModule {
 /*!
  * \brief Create a new module handle from the given TVMModule instance.
  * \param mod The module instance to register.
- * \param out_handle Pointer to recieve the newly-minted handle for this module.
+ * \param out_handle Pointer to receive the newly-minted handle for this module.
  * \return 0 on success, non-zero on error.
  */
 int TVMModCreateFromCModule(const TVMModule* mod, TVMModuleHandle* out_handle);

include/tvm/runtime/vm/vm.h

+1 -1

@@ -334,7 +334,7 @@ class TVM_DLL VirtualMachine : public runtime::ModuleNode {
   /*!
    * \brief Set one input tensor with given index to set of input tensors if need copy to given
    * device. \param tensors the input tensors set (destination) \param tensor some tensor (not
-   * neccessary DLTensor). \param index The input tensor index. \param dev device to copy if need.
+   * necessary DLTensor). \param index The input tensor index. \param dev device to copy if need.
    */
   void SetInputTensorWithIndex(std::vector<ObjectRef>& tensors,  // NOLINT(*)
                                const TVMArgValue& tensor, int index, Device dev);

include/tvm/target/target.h

+1 -1

@@ -49,7 +49,7 @@ class TargetNode : public Object {
   TargetKind kind;
   /*! \brief Target host information, must be Target type */
   Optional<ObjectRef> host;
-  /*! \brief Tag of the the target, can be empty */
+  /*! \brief Tag of the target, can be empty */
   String tag;
   /*! \brief Keys for this target */
   Array<String> keys;

include/tvm/te/schedule.h

+2 -2

@@ -364,7 +364,7 @@ class Schedule : public ObjectRef {
                                    const Array<Operation>& readers);
   /*!
    * \brief Create a cache write tensor for producing tensor.
-   * The the tensor will take over body of original tensor op.
+   * The tensor will take over body of original tensor op.
    *
    * This function can be used to do data layout transformation.
    * If there is a split/fuse/reorder on the data parallel axis of tensor

@@ -381,7 +381,7 @@ class Schedule : public ObjectRef {
   TVM_DLL Array<Tensor> cache_write(const Array<Tensor>& tensor, const std::string& scope);
   /*!
    * \brief Create a cache write tensor for producing tensor.
-   * The the tensor will take over body of original tensor op.
+   * The tensor will take over body of original tensor op.
    *
    * This function can be used to do data layout transformation.
    * If there is a split/fuse/reorder on the data parallel axis of tensor
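
On the Python side this corresponds to `Schedule.cache_write`; a minimal sketch (the "local" scope is an illustrative choice):

```python
import tvm
from tvm import te

n = 1024
A = te.placeholder((n, n), name="A")
B = te.compute((n, n), lambda i, j: A[i, j] * 2.0, name="B")

s = te.create_schedule(B.op)
# The cache-write stage takes over the body of B's op; B then just copies
# results out of the "local" staging buffer.
BL = s.cache_write(B, "local")
```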

include/tvm/te/schedule_pass.h

+1 -1

@@ -95,7 +95,7 @@ Stmt ScheduleOps(Schedule s, Map<IterVar, Range> dom_map, bool debug_keep_trivia
  * Perform this translation before running any TIR optimizations.
  *
  * List of actions taken by the function:
- * - Remove occurences of te::Tensor, te::Operation in the IR
+ * - Remove occurrences of te::Tensor, te::Operation in the IR
  *   and replace them by corresponding IR nodes via tir::Buffer.
  * - Add annotation of extern buffers using the buffer_map field
  *   in the PrimFunc type.

include/tvm/tir/op.h

+6 -6

@@ -537,7 +537,7 @@ TVM_DLL PrimExpr isfinite(PrimExpr x, Span span = Span());
 TVM_DLL PrimExpr isinf(PrimExpr x, Span span = Span());

 /*!
- * \brief sum of of source expression over axis
+ * \brief sum of source expression over axis
  * \param source The source expression.
  * \param axis List of iteration variables that will be used for reduction.
  * \param init The value with which to initialize the output.

@@ -548,7 +548,7 @@ TVM_DLL PrimExpr sum(PrimExpr source, Array<tir::IterVar> axis, Array<PrimExpr>
                      Span span = Span());

 /*!
- * \brief logical And of of source expression over axis
+ * \brief logical And of source expression over axis
  * \param source The source expression.
  * \param axis List of iteration variables that will be used for reduction.
  * \param init The value with which to initialize the output.

@@ -558,7 +558,7 @@ TVM_DLL PrimExpr all(PrimExpr source, Array<tir::IterVar> axis, Array<PrimExpr>
                      Span span = Span());

 /*!
- * \brief logical Or of of source expression over axis
+ * \brief logical Or of source expression over axis
  * \param source The source expression.
 * \param axis List of iteration variables that will be used for reduction.
 * \param init The value with which to initialize the output.

@@ -569,7 +569,7 @@ TVM_DLL PrimExpr any(PrimExpr source, Array<tir::IterVar> axis, Array<PrimExpr>
                      Span span = Span());

 /*!
- * \brief max of of source expression over axis
+ * \brief max of source expression over axis
  * \param source The source expression.
  * \param axis List of iteration variables that will be used for reduction.
  * \param init The value with which to initialize the output.

@@ -580,7 +580,7 @@ TVM_DLL PrimExpr max(PrimExpr source, Array<tir::IterVar> axis, Array<PrimExpr>
                      Span span = Span());

 /*!
- * \brief max of of source expression over axis
+ * \brief max of source expression over axis
  * \param source The source expression.
  * \param axis List of iteration variables that will be used for reduction.
  * \param init The value with which to initialize the output.

@@ -591,7 +591,7 @@ TVM_DLL PrimExpr min(PrimExpr source, Array<tir::IterVar> axis, Array<PrimExpr>
                      Span span = Span());

 /*!
- * \brief product of of source expression over axis
+ * \brief product of source expression over axis
  * \param source The source expression.
 * \param axis List of iteration variables that will be used for reduction.
 * \param init The value with which to initialize the output.
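
These reducers surface in the Python tensor expression API as `te.sum`, `te.min`, `te.max`, and friends; a minimal row-sum sketch:

```python
import tvm
from tvm import te

n = te.var("n")
A = te.placeholder((n, n), name="A")
k = te.reduce_axis((0, n), name="k")
# Sum of the source expression A[i, k] over the reduction axis k.
B = te.compute((n,), lambda i: te.sum(A[i, k], axis=k), name="B")
```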

include/tvm/topi/transform.h

+5 -5

@@ -641,7 +641,7 @@ inline Array<Tensor> split(const Tensor& x, Array<PrimExpr> split_indices, int a
 *
 * \param x The input tensor
 * \param begin The indices to begin with in the slicing
-* \param end Indicies indicating end of the slice
+* \param end Indices indicating end of the slice
 * \param strides Specifies the stride values, it can be negative
 * in that case, the input tensor will be reversed in that particular axis
 * \param name The name of the operation

@@ -698,7 +698,7 @@ inline Tensor dynamic_strided_slice(const Tensor& x, const Array<PrimExpr>& begi
 *
 * \param x The input tensor
 * \param begin The indices to begin with in the slicing
-* \param end Indicies indicating end of the slice
+* \param end Indices indicating end of the slice
 * \param strides Specifies the stride values, it can be negative
 * in that case, the input tensor will be reversed in that particular axis
 * \param name The name of the operation

@@ -729,7 +729,7 @@ inline te::Tensor dynamic_strided_slice(const te::Tensor& x, const te::Tensor& b
 *
 * \param ishape The input tensor shape
 * \param begin The indices to begin with in the slicing
-* \param end Indicies indicating end of the slice
+* \param end Indices indicating end of the slice
 * \param strides Specifies the stride values, it can be negative
 * in that case, the input tensor will be reversed in that particular axis
 * \param axes Axes along which slicing is applied. When it is specified, the length of begin, end,

@@ -755,7 +755,7 @@ inline Array<PrimExpr> StridedSliceOutputShape(
 *
 * \param x The input tensor
 * \param begin The indices to begin with in the slicing
-* \param end Indicies indicating end of the slice
+* \param end Indices indicating end of the slice
 * \param strides Specifies the stride values, it can be negative
 * in that case, the input tensor will be reversed in that particular axis
 * \param axes Axes along which slicing is applied. When it is specified, the length of begin, end,

@@ -803,7 +803,7 @@ inline Tensor strided_slice_with_axes(const Tensor& x, const Array<Integer>& beg
 *
 * \param x The input tensor
 * \param begin The indices to begin with in the slicing
-* \param end Indicies indicating end of the slice
+* \param end Indices indicating end of the slice
 * \param strides Specifies the stride values, it can be negative
 * in that case, the input tensor will be reversed in that particular axis
 * \param slice_mode Specifies the slice mode
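
The operator is reachable from Python as `topi.strided_slice`; a minimal sketch (shapes and indices illustrative):

```python
import tvm
from tvm import te, topi

x = te.placeholder((8, 8), name="x")
# Take rows 0, 2, 4 of the first axis; a negative stride would instead
# reverse the tensor along that axis.
y = topi.strided_slice(x, begin=[0], end=[6], strides=[2])
```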

python/tvm/_ffi/_ctypes/packed_func.py

+1 -1

@@ -39,7 +39,7 @@


 def _ctypes_free_resource(rhandle):
-    """callback to free resources when it it not needed."""
+    """callback to free resources when it is not needed."""
     pyobj = ctypes.cast(rhandle, ctypes.py_object)
     ctypes.pythonapi.Py_DecRef(pyobj)

python/tvm/_ffi/runtime_ctypes.py

+1 -1

@@ -283,7 +283,7 @@ def max_threads_per_block(self):
     def warp_size(self):
         """Number of threads that execute concurrently.

-        Returns device value for for cuda, rocm, and vulkan. Returns
+        Returns device value for cuda, rocm, and vulkan. Returns
         1 for metal and opencl devices, regardless of the physical
         device. Returns remote device value for RPC devices. Returns
         None for all other devices.
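
`warp_size` is a property on TVM device objects; a minimal query sketch, assuming a CUDA device is available:

```python
import tvm

dev = tvm.cuda(0)
# Per the docstring: a real device value on cuda/rocm/vulkan, 1 on
# metal/opencl, a remote value for RPC devices, None otherwise.
print(dev.warp_size)
```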

python/tvm/contrib/cutlass/build.py

+1 -1

@@ -324,7 +324,7 @@ def tune_cutlass_kernels(

     split_k_slices : list of int
         Split factor candidates for split-K GEMM. If split-K > 1, the GEMM K-loop is computed in
-        parallel accross split-K blocks, and a seperate global reduction kernel is launched to
+        parallel across split-K blocks, and a separate global reduction kernel is launched to
         accumulate partial reductions. The profiler will pick the best split-k factor from the
         given candidate list. Note that the larger split-K factor requires a larger workspace.
         Currently, parallel split-k has been tested only for wgrad. For GEMM and other conv2d
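
To make the split-K description concrete, a NumPy sketch of the idea (not the CUTLASS kernel itself): the K-loop is partitioned into blocks whose partial products are then reduced in a second step.

```python
import numpy as np

M, N, K, split_k = 64, 64, 256, 4
A = np.random.rand(M, K).astype("float32")
B = np.random.rand(K, N).astype("float32")

# Each split-K block computes a partial GEMM over its own K-slice...
kb = K // split_k
parts = [A[:, i * kb : (i + 1) * kb] @ B[i * kb : (i + 1) * kb, :] for i in range(split_k)]
# ...and a separate reduction accumulates the partial results.
C = np.sum(parts, axis=0)
assert np.allclose(C, A @ B, atol=1e-3)
```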

python/tvm/contrib/ethosu/cascader/device_config.py

+1 -1

@@ -155,7 +155,7 @@ def _get_output_cycles(
         ifm_dtype: str
             Datatype of the Input Feature Map tensor (IFM)
         ofm_dtype: str
-            Datatype of the Ouput Feature Map tensor (OFM)
+            Datatype of the Output Feature Map tensor (OFM)
         activation : str
             The activation function to use.
             "NONE" - no activation function.

python/tvm/contrib/pipeline_executor_build.py

+1 -1

@@ -586,7 +586,7 @@ def get_config(self):
                 dep_item["input_name"] = dname
                 dep_conf.append(dep_item)

-            # The value of ouput_idx start from 0.
+            # The value of output_idx start from 0.
             output["output_idx"] = int(binding.name)
             output["dependencies"] = dep_conf
             output_conf.append(output)

python/tvm/driver/tvmc/micro.py

+3 -3

@@ -266,7 +266,7 @@ def create_project_handler(args):
     try:
         project.generate_project_from_mlf(template_dir, project_dir, mlf_path, options)
     except ServerError as error:
-        print("The following error occured on the Project API server side: \n", error)
+        print("The following error occurred on the Project API server side: \n", error)
         sys.exit(1)


@@ -292,7 +292,7 @@ def build_handler(args):
         prj = project.GeneratedProject.from_directory(project_dir, options=options)
         prj.build()
     except ServerError as error:
-        print("The following error occured on the Project API server side: ", error)
+        print("The following error occurred on the Project API server side: ", error)
         sys.exit(1)


@@ -310,5 +310,5 @@ def flash_handler(args):
         prj = project.GeneratedProject.from_directory(project_dir, options=options)
         prj.flash()
     except ServerError as error:
-        print("The following error occured on the Project API server side: ", error)
+        print("The following error occurred on the Project API server side: ", error)
         sys.exit(1)

python/tvm/error.py

+1 -1

@@ -17,7 +17,7 @@
 """Structured error classes in TVM.

 Each error class takes an error message as its input.
-See the example sections for for suggested message conventions.
+See the example sections for suggested message conventions.
 To make the code more readable, we recommended developers to
 copy the examples and raise errors with the same message convention.

python/tvm/ir/base.py

+1 -1

@@ -294,7 +294,7 @@ def structural_hash(node, map_free_vars=False):

     map_free_vars : bool
         If map_free_vars is set to true, we will hash free variables
-        by the order of their occurences. Otherwise, we will hash by
+        by the order of their occurrences. Otherwise, we will hash by
         their in-memory pointer address.

     Return
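
A minimal sketch of the flag's effect, using two distinct free Relay variables:

```python
import tvm
from tvm import relay

x = relay.var("x", shape=(1,))
y = relay.var("y", shape=(1,))

# Hashing free variables by occurrence order makes structurally identical
# expressions with different variable names hash alike...
assert tvm.ir.structural_hash(x, map_free_vars=True) == tvm.ir.structural_hash(
    y, map_free_vars=True
)
# ...while the default hashes them by in-memory pointer address, so the
# two hashes will (in practice) differ.
assert tvm.ir.structural_hash(x) != tvm.ir.structural_hash(y)
```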

python/tvm/ir/diagnostics/__init__.py

+1 -1

@@ -29,7 +29,7 @@


 def get_renderer():
     """
-    Get the the diagnostic renderer.
+    Get the diagnostic renderer.

     Returns
     -------

python/tvm/relay/backend/contrib/ethosu/legalize.py

+1 -1

@@ -974,7 +974,7 @@ def __init__(self):


 class MeanRewriter(DFPatternCallback):
-    """Convert ethosu.mean composite functions to to an equivalent legalization:
+    """Convert ethosu.mean composite functions to an equivalent legalization:
     - Case 1 (axis == [1, 2] and keepsdims == True):
       ethosu_depthwise_conv2d + ethosu_binary_elementwise
     - Case 2 (ifm qparams == ofm qparams): ethosu_pooling
