Commit 4795932
TinyViT on non-tiled Siracusa (#117)
This PR brings the changes required for a working minimal TinyViT on the Siracusa platform, without tiling.

## Added

- PULP 2D FP DW conv Im2Col template and kernel, with bias support.
- Bias support for the PULP 2D FP regular conv Im2Col template & kernel.
- PULP FP DW conv 2D parser.
- FP conv 2D (simple & DW), reshape & skip connection, and TinyViT demo tests in the non-tiled Siracusa CI pipeline.
- FP bindings and mappings for the PULP slice, DW conv 2D, and reduce mean operations.
- FP PULP DW conv lowering optimization pass, similar to the existing one for the integer version.
- `RemoveEmptyConvBiasPass` in the PULP optimizer.
- `PULPClusterEngine` now accepts an `n_cores` parameter to set the number of cores used.
- `annotateNCores` method on `PULPDeployer` that adds an `n_cores` key to the operatorRepresentations of all `PULPClusterEngine` templates.

## Changed

- Reduced the size of the reshape & skip connection test for non-tiled Siracusa memory compatibility.
- Renamed `_mapNode` to `_selectEngine`, which reduces the responsibility of that function to, as the name states, engine selection only.

## Fixed

- Bug affecting non-batched elements in the PULPOpen FP GEMM and matmul templates.
- Closure names now start with an underscore to avoid naming issues when the base name begins with an unsupported character (like a digit).
- Data types in the PULPOpen FP add and mul templates.
1 parent 397341f commit 4795932

File tree

27 files changed: +766 −154 lines changed

.github/workflows/ci-platform-siracusa.yml

Lines changed: 10 additions & 0 deletions
```diff
@@ -53,7 +53,15 @@ jobs:
           testBacktracking
           testFloatAdder
           testFloatGEMM
+
           testFloat2DConvolution
+          testFloat2DConvolutionBias
+          testFloat2DConvolutionZeroBias
+
+          testFloat2DDWConvolution
+          testFloat2DDWConvolutionBias
+          testFloat2DDWConvolutionZeroBias
+
           testFloatLayerNorm
           testFloatRelu
           testFloatMaxPool
@@ -64,6 +72,7 @@ jobs:
           Quant
           Dequant
           testFloatReduceSum
+          testFloatReshapeWithSkipConnection
           testFloatSoftmaxGrad
           testFloatSoftmaxCrossEntropy
           testFloatSoftmaxCrossEntropyGrad
@@ -87,4 +96,5 @@ jobs:
           CCT/CCT_1_16_16_8
           CCT/CCT_2_32_32_128_Opset20
           testTrainCCT/CCT1_Classifier_Training/CCT_1_16_16_8
+          testFloatDemoTinyViT
       num-cores: 8
```

CHANGELOG.md

Lines changed: 19 additions & 4 deletions
```diff
@@ -4,6 +4,7 @@ This file contains the changelog for the Deeploy project. The changelog is divid
 ## Unreleased (Planned Release Target: v0.2.1)
 
 ### List of Pull Requests
+- TinyViT on non-tiled Siracusa [#117](https://github.com/pulp-platform/Deeploy/pull/117)
 - Support Fully Asynchronous DMAs [#114](https://github.com/pulp-platform/Deeploy/pull/114)
 - Disallow shape inference [#128](https://github.com/pulp-platform/Deeploy/pull/128)
 - Remove memory-aware node bindings [#123](https://github.com/pulp-platform/Deeploy/pull/123)
@@ -24,6 +25,13 @@ This file contains the changelog for the Deeploy project. The changelog is divid
 - Fix bias hoisting in generic GEMM with no bias [#126](https://github.com/pulp-platform/Deeploy/pull/126)
 
 ### Added
+- PULP 2D FP DW conv Im2Col template and kernel, with bias support.
+- Bias support for PULP 2D FP regular conv Im2Col in template & kernel.
+- PULP FP DW conv 2D parser.
+- FP conv 2D (simple & DW), reshape & skip connection, and TinyViT demo tests to the non-tiled Siracusa CI pipeline.
+- FP bindings and mappings for PULP slice, DW conv 2D, and reduce mean operations.
+- FP PULP DW conv lowering optimization pass similar to the existent one for integer version.
+- RemoveEmptyConvBiasPass to the PULP optimizer.
 - Add manual type inference feature (CLI: `--input-type-map`/`--input-offset-map`) to resolve ambiguities when test inputs are not representative enough
 - Added a `testTypeInferenceDifferentTypes` test case to validate type inference for different input types
 - Added `_mangleNodeNames` function to avoid duplicate node mappings
@@ -58,8 +66,11 @@ This file contains the changelog for the Deeploy project. The changelog is divid
 - Added testFloatGEMMnobias
 - Profiling support and optional comments in generated DMA code for better traceability
 - Added new waiting-strategy logic with fine-grained `PerTensorWaitingStrategy`
+- PULPClusterEngine now accepts a `n_cores` parameter to set the number of cores used
+- annotateNCores method to PULPDeployer that adds an `n_cores` key to all PULPClusterEngine templates' operatorRepresentations
 
 ### Changed
+- Reduced size of reshape & skip connection test, for non-tiled Siracusa memory compatibility.
 - Replaced platform-specific tags (`*-amd64`, `*-arm64`) with direct digest references in `Noelware/docker-manifest-action`.
 - mchan HAL is now reduced to bare-bones
 - refactor of the IntrospectiveCodeTransformation to work on the Mako template
@@ -95,8 +106,12 @@ This file contains the changelog for the Deeploy project. The changelog is divid
 - Disabled ICCT_ITA_8 MemPool test because it was using a lowering that created shapeless tensors
 - Added missing shape annotation to the testTypeInferenceDifferentTypes
 - Refactored DMA code generation (`SnitchDma`, `Mchan`) to correctly overlap transfers and compute in double-buffering mode
+- changed `_mapNode` to `_selectEngine` which reduces the responsibility of that function to, as the name states, just engine selection
 
 ### Fixed
+- Fixed bug for non-batched elements in the PULPOpen FP GEMM and matmul templates.
+- Added underscore to the beginning of closure names to avoid naming issues when they start with unsupported first characters (like numbers).
+- Data types in the PULPOpen FP add and mul templates.
 - Prevent node duplication for graphs generated via GraphSurgeon
 - Resolved issue with missing `id` in the `Build Cache for Docker` step, used in the `Inject build-cache` step.
 - Fix license CI check and prevent potential issues with `jq` installation
@@ -185,9 +200,9 @@ This release containing major architectural changes, new platform support, enhan
 
 
 ### Added
-- BatchNorm kernel
-- ConvTranspose kernel
-- MaxPool1D kernel
+- BatchNorm kernel
+- ConvTranspose kernel
+- MaxPool1D kernel
 - Template for 1D Convolution
 - Support for float32 data type in the previous kernels
 - Float binding for Pad1D kernel
@@ -326,7 +341,7 @@ This release containing major architectural changes, new platform support, enhan
 
 ### Changed
 - FloatConvTemplate file
-- Platform.py file
+- Platform.py file
 - Bump the CMake version to 3.24 as required for the chimera-sdk
 - Bump GVSoC's version and add chimera simulation target
 - Rename the generic source util to utils to avoid name collision with chimera-sdk
```

Deeploy/CommonExtensions/CodeTransformationPasses/Closure.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -155,7 +155,8 @@ def apply(self,
               executionBlock: ExecutionBlock,
               name: str,
               verbose: CodeGenVerbosity = _NoVerbosity) -> Tuple[NetworkContext, ExecutionBlock]:
-        self.closureName = name + self.closureSuffix
+        # Prepend underscore to avoid name issues when beginning with problematic characters (like numbers)
+        self.closureName = "_" + name + self.closureSuffix
         self.functionCall = executionBlock.generate(ctxt)
         self._generateClosureStruct(ctxt, executionBlock)
         ctxt = self._generateClosureCtxt(ctxt, name)
```
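The fix above prepends an underscore so that generated closure names remain valid C identifiers even when the layer name starts with a digit. A minimal standalone sketch (the helper name is hypothetical, not Deeploy's actual API):

```python
def make_closure_name(name: str, suffix: str = "_closure") -> str:
    # Hypothetical helper: generated closure names become C function
    # identifiers, which must not start with a digit. Prepending "_"
    # turns e.g. "0_conv_closure" into the valid "_0_conv_closure".
    return "_" + name + suffix


assert not "0_conv_closure".isidentifier()      # invalid: starts with a digit
assert make_closure_name("0_conv").isidentifier()
```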

Deeploy/CommonExtensions/DataTypes.py

Lines changed: 5 additions & 5 deletions
```diff
@@ -87,11 +87,11 @@ class float64_t(FloatImmediate):
 
 SignedIntegerDataTypes: Tuple[Type[IntegerImmediate], ...] = (int8_t, int16_t, int32_t, int64_t)
 UnsignedIntegerDataTypes: Tuple[Type[IntegerImmediate], ...] = (uint8_t, uint16_t, uint32_t, uint64_t)
-IntegerDataTypes: Tuple[Type[IntegerImmediate], ...] = (sorted((
-    *SignedIntegerDataTypes,
-    *UnsignedIntegerDataTypes,
-),
-                                                        key = lambda _type: _type.typeWidth))
+IntegerDataTypes: Tuple[Type[IntegerImmediate], ...] = tuple(
+    sorted((
+        *SignedIntegerDataTypes,
+        *UnsignedIntegerDataTypes,
+    ), key = lambda _type: _type.typeWidth))
 FloatDataTypes: Tuple[Type[FloatImmediate], ...] = (bfloat16_t, float16_t, float32_t, float64_t)
```
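The change wraps `sorted(...)` in `tuple(...)`: `sorted` always returns a `list`, so the old value did not actually match its `Tuple[...]` annotation (and `IntegerDataTypes + FloatDataTypes`-style concatenation with other tuples would fail). A self-contained illustration, with plain integers standing in for the type classes:

```python
widths = (32, 8, 64, 16)  # stand-ins for the typeWidth of each type class

as_before = sorted(widths)        # sorted() always yields a list
as_after = tuple(sorted(widths))  # tuple() restores the annotated type

assert isinstance(as_before, list)
assert isinstance(as_after, tuple)
assert as_after == (8, 16, 32, 64)
```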

Deeploy/CommonExtensions/NetworkDeployers/NetworkDeployerWrapper.py

Lines changed: 3 additions & 5 deletions
```diff
@@ -2,11 +2,9 @@
 #
 # SPDX-License-Identifier: Apache-2.0
 
-from typing import Any, Union
-
 import onnx_graphsurgeon as gs
 
-from Deeploy.DeeployTypes import CodeGenVerbosity, NetworkContext, NetworkDeployer, ONNXLayer, _NoVerbosity
+from Deeploy.DeeployTypes import CodeGenVerbosity, DeploymentEngine, NetworkContext, NetworkDeployer, _NoVerbosity
 
 
 class NetworkDeployerWrapper(NetworkDeployer):
@@ -68,8 +66,8 @@ def generateBufferAllocationCode(self) -> str:
         return self._innerObject.generateBufferAllocationCode()
 
     # MultiEngineDeployer augment
-    def _mapNode(self, node: gs.Node) -> Union[ONNXLayer, Any]:
-        return self._innerObject._mapNode(node)
+    def _selectEngine(self, node: gs.Node) -> DeploymentEngine:
+        return self._innerObject._selectEngine(node)
 
     def _printMemorySummary(self):
         return self._innerObject._printMemorySummary()
```

Deeploy/DeeployTypes.py

Lines changed: 6 additions & 5 deletions
```diff
@@ -325,15 +325,15 @@ def fromNode(cls, node: gs.Node):
         return (cls(name = node.name, shape = node.shape if not isinstance(node, gs.Constant) else node.values.shape))
 
     def has_live_aliases(self, ctxt: NetworkContext) -> bool:
-        """Checks whether this VariableBuffer has any live ancestors, i.e. buffers that are still live and are aliased by this buffer.
+        """Checks whether this VariableBuffer has any live aliases, i.e. buffers that are still live and are aliased by this buffer.
         Parameters
         ----------
         ctxt : NetworkContext
             Current NetworkContext
         Returns
         -------
         bool
-            True if this VariableBuffer has any live ancestors, False otherwise
+            True if this VariableBuffer has any live aliases, False otherwise
         """
         # Do a breadth-first search across the aliasing double-linked list
         live = self._live
@@ -2562,10 +2562,10 @@ def codeTransform(self, verbose: CodeGenVerbosity = _NoVerbosity):
             self.ctxt = layer.codeTransform(self.ctxt, verbose)
         self.transformed = True
 
-    def _mapNode(self, node: gs.Node) -> Union[ONNXLayer, Any]:
+    def _selectEngine(self, node: gs.Node) -> DeploymentEngine:
         for engine in self.Platform.engines:
             if node.op in engine.Mapping:
-                return engine.Mapping[node.op](node)
+                return engine
         raise RuntimeError(f"No mapping found for node {node.name} with op type {node.op}")
 
     def _bindLayers(self):
@@ -2582,7 +2582,8 @@ def _bindLayers(self):
             flatSchedule += subGraph
 
         for node in flatSchedule:
-            layer = self._mapNode(node)
+            engine = self._selectEngine(node)
+            layer = engine.Mapping[node.op](node)
             if isinstance(layer, ONNXLayer):
                 log.debug(f" {SUCCESS_MARK} Bind {node.name} to layer {layer.__class__.__name__}")
             self.layerBinding[layer.node.name] = layer
```
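The refactor splits node mapping into two steps: `_selectEngine` only finds the engine whose `Mapping` knows the op, and `_bindLayers` instantiates the layer from that engine's mapping. A simplified sketch of this split, using hypothetical stand-in classes rather than Deeploy's real types:

```python
class Engine:
    # Stand-in for a DeploymentEngine: op name -> layer factory.
    def __init__(self, name, mapping):
        self.name = name
        self.Mapping = mapping


def select_engine(engines, op):
    # Mirrors the reduced responsibility of _selectEngine:
    # return the first engine that can map this op, nothing more.
    for engine in engines:
        if op in engine.Mapping:
            return engine
    raise RuntimeError(f"No mapping found for op {op}")


cluster = Engine("cluster", {"Conv": lambda node: f"ConvLayer({node})"})
engine = select_engine([cluster], "Conv")
layer = engine.Mapping["Conv"]("n0")  # the caller does the instantiation
assert layer == "ConvLayer(n0)"
```

Returning the engine (instead of an instantiated layer) is what lets callers such as `annotateNCores` inspect engine attributes without re-deriving the node-to-engine assignment.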

Deeploy/EngineExtension/NetworkDeployers/EngineColoringDeployer.py

Lines changed: 4 additions & 4 deletions
```diff
@@ -2,13 +2,13 @@
 #
 # SPDX-License-Identifier: Apache-2.0
 
-from typing import Any, Callable, Dict, Type, Union
+from typing import Callable, Dict, Type
 
 import onnx_graphsurgeon as gs
 
 from Deeploy.AbstractDataTypes import Pointer
 from Deeploy.CommonExtensions.NetworkDeployers.NetworkDeployerWrapper import NetworkDeployerWrapper
-from Deeploy.DeeployTypes import DeploymentPlatform, NetworkDeployer, ONNXLayer, Schedule, TopologyOptimizer
+from Deeploy.DeeployTypes import DeploymentEngine, DeploymentPlatform, NetworkDeployer, Schedule, TopologyOptimizer
 from Deeploy.EngineExtension.OptimizationPasses.TopologyOptimizationPasses.EngineColoringPasses import \
     EngineColoringPass, EngineMapper
 
@@ -48,14 +48,14 @@ def lower(self, graph: gs.Graph) -> gs.Graph:
         ) == 0, f"Missing engine color for nodes {[node.name for node in uncoloredNodes]} with operations {uncoloredOperations}"
         return graph
 
-    def _mapNode(self, node: gs.Node) -> Union[ONNXLayer, Any]:
+    def _selectEngine(self, node: gs.Node) -> DeploymentEngine:
         assert "engine" in node.attrs, f"Node {node.name} doesn't have an engine color."
         engineName = node.attrs["engine"]
         assert isinstance(engineName, str) and engineName in self.engineDict, \
             f"Node {node.name} has an invalid engine {engineName} assigned."
         engine = self.engineDict[engineName]
         assert node.op in engine.Mapping, f"No mapping found for {node.op} in engine {engine.name}"
-        return engine.Mapping[node.op](node)
+        return engine
 
 
 class EngineColoringDeployerWrapper(EngineColoringDeployer, NetworkDeployerWrapper):
```

Deeploy/Targets/PULPOpen/Bindings.py

Lines changed: 36 additions & 15 deletions
```diff
@@ -9,13 +9,13 @@
 from Deeploy.CommonExtensions.CodeTransformationPasses.Closure import ClosureGeneration, MemoryAwareClosureGeneration
 from Deeploy.CommonExtensions.CodeTransformationPasses.MemoryAllocation import ArgumentStructGeneration, \
     MemoryManagementGeneration, MemoryPassthroughGeneration
-from Deeploy.CommonExtensions.DataTypes import IntegerDataTypes, SignedIntegerDataTypes, float32_t, int8_t, int32_t, \
-    uint8_t
+from Deeploy.CommonExtensions.DataTypes import FloatDataTypes, IntegerDataTypes, SignedIntegerDataTypes, float32_t, \
+    int8_t, int32_t, int64_t, uint8_t
 from Deeploy.DeeployTypes import CodeTransformation, NodeBinding, NodeTemplate
 from Deeploy.FutureExtension.Bindings.AutoFutureBinding import AutoFutureBinding
 from Deeploy.FutureExtension.CodeTransformationPasses.FutureCodeTransformation import FutureGeneration
-from Deeploy.Targets.Generic.Templates import AddTemplate, ConcatTemplate, DequantTemplate, FloatReduceSumTemplate, \
-    GatherTemplate, QuantTemplate, RQSiGELUTemplate, iHardswishTemplate
+from Deeploy.Targets.Generic.Templates import AddTemplate, ConcatTemplate, DequantTemplate, FloatReduceMeanTemplate, \
+    FloatReduceSumTemplate, GatherTemplate, QuantTemplate, RQSiGELUTemplate, SliceTemplate, iHardswishTemplate
 from Deeploy.Targets.Generic.TypeCheckers import AddChecker, ConcatChecker, ConvChecker, DequantChecker, \
     GatherChecker, GELUChecker, GEMMChecker, HardswishChecker, LayerNormChecker, MatMulChecker, MulChecker, \
     QuantChecker, ReduceMeanChecker, ReluChecker, ReshapeChecker, RQAddChecker, RQHardswishChecker, SGDChecker, \
@@ -27,11 +27,11 @@
 from Deeploy.Targets.PULPOpen.DataTypes import PULPDMAFuture
 from Deeploy.Targets.PULPOpen.DMA.L3Dma import l3DmaHack
 from Deeploy.Targets.PULPOpen.DMA.MchanDma import MchanDma
-from Deeploy.Targets.PULPOpen.Templates import ConvTemplate, FloatAddTemplate, FloatConvTemplate, FloatGELUTemplate, \
-    FloatGemmTemplate, FloatLayernormTemplate, FloatMatMulTemplate, FloatMaxPoolTemplate, FloatMulTemplate, \
-    FloatReluTemplate, FloatSoftmaxTemplate, GEMMTemplate, MatrixVectorTemplate, MaxPool2DTemplate, MulTemplate, \
-    ReduceMeanTemplate, RequantShiftTemplate, ReshapeTemplate, RQAddTemplate, RQSiHardswishTemplate, SGDTemplate, \
-    SliceTemplate, SoftmaxCrossEntropyLossTemplate, TallGEMMTemplate, TransposeTemplate, UniformRequantShiftTemplate, \
+from Deeploy.Targets.PULPOpen.Templates import ConvTemplate, DMASliceTemplate, FloatAddTemplate, FloatConvTemplate, \
+    FloatGELUTemplate, FloatGemmTemplate, FloatLayernormTemplate, FloatMatMulTemplate, FloatMaxPoolTemplate, \
+    FloatMulTemplate, FloatReluTemplate, FloatSoftmaxTemplate, GEMMTemplate, MatrixVectorTemplate, MaxPool2DTemplate, \
+    MulTemplate, ReduceMeanTemplate, RequantShiftTemplate, ReshapeTemplate, RQAddTemplate, RQSiHardswishTemplate, \
+    SGDTemplate, SoftmaxCrossEntropyLossTemplate, TallGEMMTemplate, TransposeTemplate, UniformRequantShiftTemplate, \
     iRMSNormTemplate, iSoftmaxTemplate
 from Deeploy.Targets.PULPOpen.TypeCheckers import PULPConvChecker, PULPLinearChecker, PULPMaxPoolChecker, \
     PULPRequantShiftChecker
@@ -148,16 +148,24 @@
             PointerClass(uint8_t),
             PointerClass(uint8_t),
             PointerClass(uint8_t)
-        ], [PULPDMAFuture(underlyingType = type)]), SliceTemplate.referenceTemplate, MemoryAwareForkTransformer)
+        ], [PULPDMAFuture(underlyingType = type)]), DMASliceTemplate.referenceTemplate, MemoryAwareForkTransformer)
     for type in IntegerDataTypes
 ]
 
+PULPSliceBindings = [
+    NodeBinding(
+        SliceChecker([
+            PointerClass(type),
+            PointerClass(uint8_t),
+            PointerClass(uint8_t),
+            PointerClass(uint8_t),
+            PointerClass(uint8_t)
+        ], [PointerClass(type)]), SliceTemplate.referenceTemplate, ForkTransformer) for type in FloatDataTypes
+]
+
 PULPReshapeBindings = [
-    NodeBinding(ReshapeChecker([PointerClass(type), PointerClass(int32_t)], [PointerClass(type)]),
-                ReshapeTemplate.referenceTemplate, SkipTransformer) for type in IntegerDataTypes
-] + [
-    NodeBinding(ReshapeChecker([PointerClass(float32_t), PointerClass(type)], [PointerClass(float32_t)]),
-                ReshapeTemplate.referenceTemplate, SkipTransformer) for type in IntegerDataTypes
+    NodeBinding(ReshapeChecker([PointerClass(type), PointerClass(int64_t)], [PointerClass(type)]),
+                ReshapeTemplate.referenceTemplate, SkipTransformer) for type in IntegerDataTypes + FloatDataTypes
 ]
 
 PULPRQAddBindings = [
@@ -225,6 +233,14 @@
         ForkTransformer)
 ]
 
+PULPFloatDWConv2DBindings = [
+    NodeBinding(
+        ConvChecker(
+            [PointerClass(float_type), PointerClass(float_type),
+             PointerClass(float_type)], [PointerClass(float_type)]), FloatConvTemplate.referenceDW2DIm2ColTemplate,
+        ForkTransformer) for float_type in FloatDataTypes
+]
+
 PULPRQSMatrixVecBindings = [
     NodeBinding(
         PULPLinearChecker([PointerClass(type1),
@@ -276,6 +292,11 @@
 PULPReduceMeanBindings = [
     NodeBinding(ReduceMeanChecker([PointerClass(type)], [PointerClass(type)]), ReduceMeanTemplate.referenceTemplate,
                 ClusterTransformer) for type in IntegerDataTypes
+] + [
+    NodeBinding(ReduceMeanChecker([PointerClass(float_type), PointerClass(integer_type)], [PointerClass(float_type)]),
+                FloatReduceMeanTemplate.referenceTemplate, ClusterTransformer)
+    for integer_type in SignedIntegerDataTypes
+    for float_type in FloatDataTypes
 ]
 
 PULPReduceSumBindings = [
```
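The new binding lists above are built with comprehensions over the element-type tuples, producing one `NodeBinding` per supported type. A schematic standalone version of that pattern (all names here are hypothetical stand-ins, not Deeploy's classes):

```python
# Stand-ins for the FloatDataTypes tuple of type classes.
FloatDataTypes = ("bfloat16_t", "float16_t", "float32_t", "float64_t")


def make_dw_conv_bindings(make_binding):
    # One binding per float type: input, weight, and bias all share the
    # element type, and the output uses it too (as in the DW conv bindings).
    return [make_binding([t, t, t], [t]) for t in FloatDataTypes]


# A trivial factory standing in for NodeBinding(ConvChecker(...), ...).
bindings = make_dw_conv_bindings(lambda ins, outs: (tuple(ins), tuple(outs)))
assert len(bindings) == len(FloatDataTypes)
assert bindings[2] == (("float32_t",) * 3, ("float32_t",))
```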

Deeploy/Targets/PULPOpen/Deployer.py

Lines changed: 19 additions & 5 deletions
```diff
@@ -15,6 +15,7 @@
 from Deeploy.DeeployTypes import ConstantBuffer, DeploymentPlatform, NodeTemplate, TopologyOptimizer, VariableBuffer
 from Deeploy.Targets.Generic.TopologyOptimizationPasses.Passes import ReshapeConstOptPass, TransposeConstOptPass, \
     TransposeMergePass, TransposeNoPermOptPass, TransposeSplitPass
+from Deeploy.Targets.PULPOpen.Platform import PULPClusterEngine
 from Deeploy.Targets.PULPOpen.TopologyOptimizationPasses.Passes import RQAddTransposeSquashPass
 
 _L3AllocTemplate = NodeTemplate("""
@@ -63,19 +64,32 @@ def __init__(self,
 
         self.extNameCount = 0
 
-    def bind(self):
+    def annotateNCores(self) -> None:
+        for layer in self.layerBinding.values():
+            node = layer.node
+            engine = self._selectEngine(node)
+            opRepr = layer.mapper.parser.operatorRepresentation
+            if isinstance(engine, PULPClusterEngine):
+                opRepr["n_cores"] = engine.n_cores
+
+    def bind(self) -> bool:
         # SCHEREMO: THIS IS A STOP GAP SOLUTION. DONT REUSE. I MEAN IT. I WILL FIND YOU.
         # SCHEREMO: The BindingOptimizationPass system is fairly fragile;
         # it was designed this way because implementing further topology optimizations after
         # parsing is very involved. If there are further use-cases, we should consider making this effort,
         # but if there is only very few cases, this solution is okay.
         autoTransposePass = AutoTransposeMergePass()
         #self.ctxt, self.layerBinding = autoTransposePass.apply(self.ctxt, self.graph, self.layerBinding)
+
+        # LMACAN: THIS IS A STOP GAP SOLUTION. DONT REUSE. I MEAN IT. I WILL FIND YOU.
+        self.annotateNCores()
+
         # SCHEREMO: THIS IS A STOP GAP SOLUTION. DONT REUSE. I MEAN IT. I WILL FIND YOU.
-        ret = super().bind()
-        if ret:
-            self.ctxt.hoistGlobalDefinition("cluster_dev", "extern struct pi_device cluster_dev;")
-        return ret
+        if not super().bind():
+            return False
+
+        self.ctxt.hoistGlobalDefinition("cluster_dev", "extern struct pi_device cluster_dev;")
+        return True
 
     def _l3ConstBuffer(self) -> List[VariableBuffer]:
         return [
```
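`annotateNCores` walks the layer binding and, for every node whose selected engine is a `PULPClusterEngine`, writes that engine's `n_cores` into the operator representation so templates can render the core count. A toy sketch of the same flow, with hypothetical minimal classes in place of Deeploy's:

```python
class ClusterEngine:
    # Stand-in for PULPClusterEngine with its n_cores attribute.
    def __init__(self, n_cores):
        self.n_cores = n_cores


def annotate_n_cores(layers, select_engine):
    # Mirror of annotateNCores: tag cluster-mapped ops with n_cores,
    # leave everything else untouched.
    for layer in layers:
        engine = select_engine(layer["op"])
        if isinstance(engine, ClusterEngine):
            layer["opRepr"]["n_cores"] = engine.n_cores


engine = ClusterEngine(n_cores=8)
layers = [{"op": "Conv", "opRepr": {}}, {"op": "Gather", "opRepr": {}}]
# Route Conv to the cluster engine, everything else to a non-cluster engine.
annotate_n_cores(layers, lambda op: engine if op == "Conv" else object())
assert layers[0]["opRepr"] == {"n_cores": 8}
assert layers[1]["opRepr"] == {}
```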
