Support for 1D Autoencoder #98
base: devel
Conversation
Please make sure to remove unnecessary changes from the PR.
class ConvTransposeChecker(SignPropTypeChecker):

    def __init__(self, input_types: Sequence[Type[Pointer]], output_types: Sequence[Type[Pointer]]):
        super().__init__(input_types, output_types)

    def _inferNumLevels(self, inputs: List[VariableBuffer],
                        operatorRepresentation: OperatorRepresentation) -> List[int]:
        # Like ConvChecker: n_levels = kernel_size * weight_levels * ch_in * 2^bit_input
        weight = inputs[1]
        return [
            np.prod(operatorRepresentation['kernel_shape']) * weight.nLevels * weight.shape[1] *  # ch_im_in
            2**(self.input_types[0].referencedType.typeWidth)
        ]

    def _inferSignedness(self, inputs: List[VariableBuffer],
                         operatorRepresentation: OperatorRepresentation) -> List[bool]:
        # Output is signed if input is signed
        return [inputs[0]._signed]
I did not check the ConvChecker, but why do we need a separate ConvTransposeChecker? If possible, please avoid code duplication. If the difference is minimal it might make sense to inherit and extend from the ConvChecker.
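For illustration, a minimal sketch of the inheritance approach, assuming ConvChecker's level and signedness inference already cover what ConvTranspose needs (only the class names come from this PR; everything else is illustrative):

class ConvTransposeChecker(ConvChecker):
    # Reuse ConvChecker's _inferNumLevels / _inferSignedness unchanged.
    # Override here only if ConvTranspose ever diverges, e.g. a different
    # accumulation width derived from the transposed weight layout.
    pass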
Actually, ConvChecker and ConvTransposeChecker are identical, so I will remove this new one in the next commit.
Hi, as @diaconuccalin commented already, when rebasing your work on the
It seems like most of my comments from the previous review were not addressed. Please make sure to address them! It helps if you reply to the comment to acknowledge the change and, optionally, even link a commit with the change.
Please also remove all unnecessary new line changes unless they are strictly required for the formatting to pass.
📝 Walkthrough
Summary by CodeRabbit — Release Notes

Walkthrough
Adds float32 1D support and an Autoencoder1D CI test by introducing BatchNormalization, ConvTranspose, and MaxPool1D kernels and templates; 1D float Conv/Pad/MaxPool templates, parsers, type checkers, bindings, layers, and platform mappings; and the corresponding C kernel sources and headers.

Changes
Sequence Diagram(s)

sequenceDiagram
autonumber
participant ONNX as ONNX Graph
participant Parser as NodeParser
participant Mapper as NodeMapper
participant Binding as NodeBinding
participant Template as NodeTemplate
participant Kernel as C Kernel
note right of Parser #f0f4ff: new 1D/transpose/pool/batchnorm parsers
ONNX->>Parser: parse node + attrs
Parser->>Mapper: produce context & operatorRepresentation
Mapper->>Binding: select binding(s) (type-driven)
Binding->>Template: alignToContext / instantiate template
Template->>Kernel: emits kernel call (BatchNorm/Conv1d/ConvTranspose/MaxPool1d)
Kernel-->>Template: runtime contract (sign/shape/stride metadata)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✨ Finishing touches
🧪 Generate unit tests (beta)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
🧰 Additional context used
🧬 Code graph analysis (2)
Deeploy/Targets/Generic/Bindings.py (4)
Deeploy/Targets/Generic/TypeCheckers.py (3)
🪛 Ruff (0.14.1)
Deeploy/Targets/Generic/TypeCheckers.py
606-606: Unused method argument (ARG002)
607-607: Unused method argument (ARG002)
610-610: Unused method argument (ARG002)
611-611: Unused method argument (ARG002)
🔇 Additional comments (7)
Actionable comments posted: 15
♻️ Duplicate comments (3)
CHANGELOG.md (1)
31-31
: PR reference placement — OK. Entry added once under v0.2.0; prior duplication concern appears resolved.
TargetLibraries/Generic/src/BatchNorm_fp32.c (1)
8-9
: OpenMP usage: keep but guard for portability. Same concern noted earlier; guarding preserves portability while enabling speedups when available.
Apply this diff:
-#pragma omp parallel for
+#ifdef _OPENMP
+#pragma omp parallel for
+#endif
Deeploy/Targets/Generic/Bindings.py (1)
321-335
: ConvTranspose: duplicate bindings likely unnecessary. You define both 3-input and 2-input float bindings. If the template accepts bias=NULL, keep only the 3-input variant for simplicity.
If both are needed for template signature matching, add a brief comment explaining why two variants are required.
🧹 Nitpick comments (21)
CHANGELOG.md (2)
98-104
: Added items match PR scope — OK. Wording nit: consider “in these kernels” instead of “in the previous kernels.”

-- Support for float32 data type in the previous kernels
+- Support for float32 data type in these kernels
238-239
: Minor formatting/grammar polish in “Changed”.

-- FloatConvTemplate file
-- Platform.py file
+- FloatConvTemplate
+- Platform.py

TargetLibraries/Generic/inc/DeeployBasicMath.h (1)
59-60
: Avoid circular include via MaxPool1d.h. Including kernel/MaxPool1d.h here while that header includes DeeployBasicMath.h creates an unnecessary include cycle. Please make MaxPool1d.h include only the minimal deps (stdint.h, the project types.h) and drop DeeployBasicMath.h. I’ve proposed the change in that header’s review comment.

Also applies to: 70-70, 75-75
TargetLibraries/Generic/inc/kernel/MaxPool1d.h (1)
1-10
: Trim dependencies and add C++-safe linkage to avoid include cycle. Remove the DeeployBasicMath.h include; include only what’s needed and add extern "C" guards.

 #ifndef __DEEPLOY_BASIC_MATH_MAXPOOL1D_KERNEL_HEADER_
 #define __DEEPLOY_BASIC_MATH_MAXPOOL1D_KERNEL_HEADER_
-#include "DeeployBasicMath.h"
+#include <stdint.h>
+#include "types.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
 void MaxPool1d_fp32_fp32(float32_t const *__restrict__ pSrcA, uint32_t C, uint32_t W,
                          uint32_t K, uint32_t S, float32_t *__restrict__ pDstC);
+#ifdef __cplusplus
+}
+#endif
 #endif

Build once after this change to ensure no transitive type deps were assumed.
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
3-11
: Confirm BN kernel signature and epsilon handling. The template assumes BatchNorm_fp32(input, scale, bias, mean, variance, output, batch, channels, window) and no epsilon. Verify it matches kernel/BatchNorm.h and that numerical stability (epsilon) is handled in-kernel or via preprocessed variance. I can adjust the template once you confirm the exact signature/order.
TargetLibraries/Generic/src/Convolution_fp32.c (2)
94-102
: Clarify/lock the shape contract (W_out, padding). This kernel assumes no explicit padding parameter and does boundary checks. Please confirm callers pre-pad (Pad1D) or compute W_out = floor((W_in - K)/stride) + 1 for “valid” so we never accumulate partial windows. If “same” is desired, pre-pad and keep this branch; otherwise drop the check for tighter inner loops.
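For reference, a small sketch of the output-length relation this refers to (plain Python; the helper name and padding parameter are illustrative):

def conv1d_out_len(w_in: int, k: int, stride: int, pad: int = 0) -> int:
    # "valid" (pad == 0): only full windows contribute, so the kernel needs no
    # boundary check; with explicit pre-padding, pass the padded width instead.
    return (w_in + 2 * pad - k) // stride + 1

assert conv1d_out_len(w_in=10, k=3, stride=2) == 4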
103-121
: Hoist invariant indexing and bias to reduce work in hot loops. Avoid recomputing base offsets every iteration.
Apply this diff:
  for (c_out = 0; c_out < C_out; ++c_out) {
-   for (w_out = 0; w_out < W_out; ++w_out) {
+   const float32_t bias = has_bias ? pSrcBias[c_out] : 0.0f;
+   const float32_t *w_base = pSrcB + c_out * C_in * K;
+   for (w_out = 0; w_out < W_out; ++w_out) {
      float32_t sum = 0.0f;
      for (c_in = 0; c_in < C_in; ++c_in) {
-       for (k = 0; k < K; ++k) {
+       const float32_t *w_row = w_base + c_in * K;
+       for (k = 0; k < K; ++k) {
          w_in = w_out * stride + k;
          if (w_in < W_in) {
-           sum += pSrcA[c_in * W_in + w_in] *
-                  pSrcB[c_out * C_in * K + c_in * K + k];
+           sum += pSrcA[c_in * W_in + w_in] * w_row[k];
          }
        }
      }
-     if (has_bias) {
-       sum += pSrcBias[c_out];
-     }
-     pDstC[c_out * W_out + w_out] = sum;
+     pDstC[c_out * W_out + w_out] = sum + bias;
    }
  }
TargetLibraries/Generic/inc/kernel/BatchNorm.h (1)
7-10
: Consider unsigned dims for consistency with other kernels. Other kernels use uint32_t for dimensions; aligning N/C/L would avoid sign/overflow mismatches.
Option (header):
-void BatchNorm_fp32(const float32_t *input, const float32_t *gamma,
-                    const float32_t *beta, const float32_t *mean,
-                    const float32_t *var, float32_t *output, int N, int C,
-                    int L);
+void BatchNorm_fp32(const float32_t *input, const float32_t *gamma,
+                    const float32_t *beta, const float32_t *mean,
+                    const float32_t *var, float32_t *output,
+                    uint32_t N, uint32_t C, uint32_t L);

(Requires mirroring in the .c file.)
TargetLibraries/Generic/src/BatchNorm_fp32.c (1)
7-7
: Minor type consistency. Use float32_t for epsilon to match the rest.
Apply this diff:
-  const float epsilon = 1e-5f;
+  const float32_t epsilon = 1e-5f;
TargetLibraries/Generic/src/MaxPool1D_fp32.c (1)
11-19
: Tighten inner loop (optional). With the guard above, w_in is always < W; drop the branch and use fmaxf.
Apply this diff:
-      for (uint32_t k = 0; k < K; ++k) {
-        uint32_t w_in = w_out * S + k;
-        if (w_in >= W)
-          continue;
-        float32_t tmp = pSrcA[c * W + w_in];
-        if (tmp > max) {
-          max = tmp;
-        }
-      }
+      for (uint32_t k = 0; k < K; ++k) {
+        uint32_t w_in = w_out * S + k;
+        float32_t tmp = pSrcA[c * W + w_in];
+        max = fmaxf(max, tmp);
+      }
Deeploy/Targets/Generic/TypeCheckers.py (1)
624-636
: Silence ARG002 warnings by prefixing unused params with underscores. Keeps intent clear and appeases Ruff.
-    def _inferNumLevels(self, inputs: List[VariableBuffer],
-                        operatorRepresentation: OperatorRepresentation) -> List[int]:
+    def _inferNumLevels(self, _inputs: List[VariableBuffer],
+                        _operatorRepresentation: OperatorRepresentation) -> List[int]:
         return [2**(self.input_types[0].referencedType.typeWidth)]

-    def _inferSignedness(self, inputs: List[VariableBuffer],
-                         operatorRepresentation: OperatorRepresentation) -> List[bool]:
+    def _inferSignedness(self, _inputs: List[VariableBuffer],
+                         _operatorRepresentation: OperatorRepresentation) -> List[bool]:
         return [True]
TargetLibraries/Generic/src/ConvTranspose1d_fp32.c (3)
2-2
: Include the kernel’s own header to enforce signature consistency. Including the header here catches prototype drift at compile time.
-#include "DeeployBasicMath.h" +#include "kernel/ConvTranspose1d_fp32.h"
16-21
: Optional: zero-init via memset for speed. A flat memset is typically faster than nested loops.
-  // Output initialization
-  for (uint32_t c = 0; c < C_out; ++c) {
-    for (uint32_t w = 0; w < W_out; ++w) {
-      output[c * W_out + w] = 0.0f;
-    }
-  }
+  // Output initialization
+  memset(output, 0, sizeof(float32_t) * C_out * W_out);
31-33
: Guard against invalid stride. Defensive check avoids UB if stride == 0 from upstream.
-      uint32_t w_out = w_in * stride + k;
+      // assume stride >= 1
+      if (__builtin_expect(stride == 0, 0)) continue;
+      uint32_t w_out = w_in * stride + k;
37-40
: Fix weight layout comment to match implementation. Kernel and Layers assume weight shape [C_in, C_out//group, K].
- operatorRepresentation["dim_kernel_y"] = weight.shape[2] # Shape: [C_out, C_in, K] + operatorRepresentation["dim_kernel_y"] = weight.shape[2] # Shape: [C_in, C_out//group, K]
31-36
: Align dim keys with existing 1D conventions or provide aliases. Elsewhere 1D often uses dim_im_*_x. Consider mirroring keys to avoid downstream KeyErrors.
operatorRepresentation["ch_im_out"] = data_out.shape[1] operatorRepresentation["dim_im_out_y"] = data_out.shape[2] + # Provide x-aliases for 1D flows expecting *_x + operatorRepresentation["dim_im_in_x"] = operatorRepresentation["dim_im_in_y"] + operatorRepresentation["dim_im_out_x"] = operatorRepresentation["dim_im_out_y"]Deeploy/Targets/Generic/Parsers.py (4)
367-372
: Narrow the exception and drop unused variable. Catching bare Exception and binding e (unused) triggers lints and may mask issues.
-        try:
-            self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
-        except Exception as e:
-            self.operatorRepresentation['pads'] = node.attrs['pads']
+        try:
+            self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
+        except (TypeError, ValueError):
+            self.operatorRepresentation['pads'] = node.attrs['pads']
1381-1403
: Remove duplicated batchOffset assignments. batchOffsetIn/Out are computed twice.
self.operatorRepresentation['batch'] = data_in.shape[0] self.operatorRepresentation['dim_im_in_x'] = 1 self.operatorRepresentation[ 'dim_im_out_x'] = 1 # necessary since we use the same Convlayer for all convolutions @@ else: self.operatorRepresentation['ch_im_in'] = data_in.shape[2] self.operatorRepresentation['dim_im_in_y'] = data_in.shape[1] self.operatorRepresentation['ch_im_out'] = data_out.shape[2] self.operatorRepresentation['dim_im_out_y'] = data_out.shape[1] self.operatorRepresentation[ 'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y'] self.operatorRepresentation['batchOffsetOut'] = self.operatorRepresentation[ 'ch_im_out'] * self.operatorRepresentation['dim_im_out_y'] - self.operatorRepresentation[ - 'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y'] - self.operatorRepresentation['batchOffsetOut'] = self.operatorRepresentation[ - 'ch_im_out'] * self.operatorRepresentation['dim_im_out_y']
2199-2211
: Unify has_bias type and avoid verbose error message. Current code mixes 1D vs 2D bias flags and raises a long IndexError.
- Align with 2D parsers by using "true"/"false" strings (or booleans consistently across templates).
- Shorten the error message.
-        if len(node.inputs) > 2:
+        if len(node.inputs) > 2:
             inputs.append("bias")
-            self.operatorRepresentation["has_bias"] = 1
+            self.operatorRepresentation["has_bias"] = "true"
         else:
-            self.operatorRepresentation["has_bias"] = 0
+            self.operatorRepresentation["has_bias"] = "false"
             self.operatorRepresentation["bias"] = "NULL"
@@
-        if idx >= len(inputs):
-            raise IndexError(
-                f"Index {idx} out of range for inputs of length {len(inputs)} in node {inputNode.name}")
+        if idx >= len(inputs):
+            raise IndexError("input index out of range")

Please confirm templates consume has_bias as "true"/"false" (strings) everywhere; otherwise prefer booleans and adjust templates.
2694-2743
: ConvTranspose base ctxt: minor cleanups and consistency
- channels_first is unused; either use it or prefix with underscore.
- kernel_shape_y is computed but unused.
- Prefer ctxt.lookup(...).name to stay consistent with other parsers (handles mangling/hoisting).
- def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True): + def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True): @@ - self.operatorRepresentation['data_in'] = node.inputs[0].name - self.operatorRepresentation['weight'] = node.inputs[1].name - self.operatorRepresentation['data_out'] = node.outputs[0].name + self.operatorRepresentation['data_in'] = ctxt.lookup(node.inputs[0].name).name + self.operatorRepresentation['weight'] = ctxt.lookup(node.inputs[1].name).name + self.operatorRepresentation['data_out'] = ctxt.lookup(node.outputs[0].name).name @@ - kernel_shape_y = kernel_shape[1] if len(kernel_shape) > 1 else kernel_shape_x + # kernel_shape_y only relevant for 2D; compute when needed + if len(kernel_shape) > 1: + _ = kernel_shape[1]If channels_last is expected anywhere for ConvTranspose, we should branch index computation by channels_first here.
Deeploy/Targets/Generic/Bindings.py (1)
81-88
: Float Conv1D binding: bias handling consistency. Binding requires 3 pointers; parser sets bias="NULL" when absent. Ensure FloatConvTemplate handles NULL bias without expecting a buffer; otherwise add a 2-input float binding similar to ConvTranspose or hoist zero-bias in parser.
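A possible parser-side convention that keeps a single 3-pointer binding, sketched along the lines already used by the 1D Conv parser in this PR (the exact placement and surrounding code are assumptions):

# Hypothetical fallback: always provide a bias symbol so the template can
# emit either a real buffer name or a NULL pointer guarded by has_bias.
if len(node.inputs) > 2:
    self.operatorRepresentation['bias'] = ctxt.lookup(node.inputs[2].name).name
    self.operatorRepresentation['has_bias'] = "true"
else:
    self.operatorRepresentation['bias'] = "NULL"
    self.operatorRepresentation['has_bias'] = "false"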
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (23)
.github/workflows/CI.yml (1 hunks)
CHANGELOG.md (3 hunks)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (1 hunks)
Deeploy/Targets/Generic/Bindings.py (5 hunks)
Deeploy/Targets/Generic/Layers.py (1 hunks)
Deeploy/Targets/Generic/Parsers.py (9 hunks)
Deeploy/Targets/Generic/Platform.py (5 hunks)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatConvTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatMaxPoolTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatPadTemplate.py (1 hunks)
Deeploy/Targets/Generic/TypeCheckers.py (1 hunks)
Deeploy/Targets/MemPool/Platform.py (2 hunks)
TargetLibraries/Generic/inc/DeeployBasicMath.h (3 hunks)
TargetLibraries/Generic/inc/kernel/BatchNorm.h (1 hunks)
TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h (1 hunks)
TargetLibraries/Generic/inc/kernel/Convolution.h (1 hunks)
TargetLibraries/Generic/inc/kernel/MaxPool1d.h (1 hunks)
TargetLibraries/Generic/src/BatchNorm_fp32.c (1 hunks)
TargetLibraries/Generic/src/ConvTranspose1d_fp32.c (1 hunks)
TargetLibraries/Generic/src/Convolution_fp32.c (1 hunks)
TargetLibraries/Generic/src/MaxPool1D_fp32.c (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (16)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(105-246)
TargetLibraries/Generic/inc/kernel/Convolution.h (1)
TargetLibraries/Generic/src/Convolution_fp32.c (1)
Conv1d_fp32_fp32_fp32
(94-121)
Deeploy/Targets/Generic/Templates/FloatConvTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(105-246)
TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h (1)
TargetLibraries/Generic/src/ConvTranspose1d_fp32.c (1)
ConvTranspose1d_fp32
(4-47)
TargetLibraries/Generic/inc/kernel/MaxPool1d.h (1)
TargetLibraries/Generic/src/MaxPool1D_fp32.c (1)
MaxPool1d_fp32_fp32
(4-23)
TargetLibraries/Generic/inc/kernel/BatchNorm.h (1)
TargetLibraries/Generic/src/BatchNorm_fp32.c (1)
BatchNorm_fp32
(3-24)
Deeploy/Targets/Generic/Templates/FloatMaxPoolTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(105-246)
Deeploy/Targets/Generic/TypeCheckers.py (3)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (3)
SignPropTypeChecker
(33-74)_inferNumLevels
(35-37)_inferSignedness
(39-41)Deeploy/AbstractDataTypes.py (2)
Pointer
(314-395)typeWidth
(406-407)Deeploy/DeeployTypes.py (2)
inputs
(2447-2467)VariableBuffer
(249-397)
Deeploy/Targets/Generic/Templates/FloatPadTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(105-246)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (2)
Deeploy/DeeployTypes.py (4)
NetworkContext
(556-1021)NodeTemplate
(105-246)alignToContext
(136-156)lookup
(748-780)Deeploy/Targets/Generic/Templates/ConvTemplate.py (1)
alignToContext
(36-49)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (1)
Deeploy/DeeployTypes.py (1)
outputs
(2469-2490)
Deeploy/Targets/Generic/Layers.py (1)
Deeploy/DeeployTypes.py (4)
ONNXLayer
(1796-2112)NodeMapper
(1650-1793)computeOps
(1808-1813)computeShapes
(1817-1839)
Deeploy/Targets/Generic/Bindings.py (4)
Deeploy/Targets/Generic/TypeCheckers.py (4)
BatchNormChecker
(624-635)ConvChecker
(479-497)MaxPoolChecker
(462-476)PadChecker
(94-108)Deeploy/DeeployTypes.py (1)
NodeBinding
(1509-1647)Deeploy/AbstractDataTypes.py (1)
PointerClass
(543-566)Deeploy/CommonExtensions/DataTypes.py (2)
int8_t
(31-34)int32_t
(43-46)
Deeploy/Targets/Generic/Platform.py (3)
Deeploy/Targets/Generic/Layers.py (3)
BatchNormalizationLayer
(646-656)ConvTransposeLayer
(659-707)TransposeLayer
(464-467)Deeploy/Targets/Generic/Parsers.py (4)
BatchNormParser
(2629-2665)ConvTranspose1DParser
(2746-2792)MaxPool1DParser
(248-287)TransposeParser
(182-209)Deeploy/DeeployTypes.py (1)
NodeMapper
(1650-1793)
Deeploy/Targets/Generic/Parsers.py (1)
Deeploy/DeeployTypes.py (7)
parseNode
(1034-1049)parseNodeCtxt
(1052-1077)lookup
(748-780)inputs
(2447-2467)NodeParser
(1024-1199)outputs
(2469-2490)NetworkContext
(556-1021)
Deeploy/Targets/MemPool/Platform.py (2)
Deeploy/DeeployTypes.py (1)
NodeMapper
(1650-1793)Deeploy/Targets/Generic/Parsers.py (1)
GenericConv1DParser
(2174-2213)
🪛 LanguageTool
CHANGELOG.md
[grammar] ~238-~238: There might be a mistake here.
Context: ...L. ### Changed - FloatConvTemplate file - Platform.py file - Bump the CMake vers...
(QB_NEW_EN)
[grammar] ~239-~239: There might be a mistake here.
Context: ...loatConvTemplate file - Platform.py file - Bump the CMake version to 3.24 as requir...
(QB_NEW_EN)
🪛 Ruff (0.12.2)
Deeploy/Targets/Generic/TypeCheckers.py
629-629: Unused method argument: inputs
(ARG002)
630-630: Unused method argument: operatorRepresentation
(ARG002)
633-633: Unused method argument: inputs
(ARG002)
634-634: Unused method argument: operatorRepresentation
(ARG002)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py
70-70: zip()
without an explicit strict=
parameter
Add explicit value for parameter strict=
(B905)
Deeploy/Targets/Generic/Parsers.py
369-369: Do not catch blind exception: Exception
(BLE001)
369-369: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
2208-2209: Avoid specifying long messages outside the exception class
(TRY003)
2649-2649: Unused method argument: channels_first
(ARG002)
2694-2694: Unused method argument: channels_first
(ARG002)
2727-2727: Local variable kernel_shape_y
is assigned to but never used
Remove assignment to unused variable kernel_shape_y
(F841)
🔇 Additional comments (15)
TargetLibraries/Generic/inc/kernel/Convolution.h (1)
70-77
: API looks good; confirm template computes W_out consistently. New Conv1d prototype aligns with implementation. Please confirm codegen computes W_out per chosen padding/stride semantics.
Deeploy/Targets/Generic/Layers.py (1)
646-657
: Ops model for BatchNorm is reasonable. 5 ops/element is a fine approximation for scheduling/estimation.
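As a sketch of that estimate (the per-element breakdown here — subtract mean, normalize, scale, shift, plus one op for the variance/sqrt handling — is an assumption, not the exact accounting in computeOps):

def batchnorm_ops(batch: int, channels: int, window: int, ops_per_element: int = 5) -> int:
    # Total elementwise work for a [N, C, W] BatchNormalization.
    return batch * channels * window * ops_per_element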
Deeploy/Targets/Generic/Parsers.py (3)
248-287
: MaxPool1D parser looks good. Attribute validation and IO shape handling for 1D are correct.
450-462
: Pad1D: schema checks and ctxt mapping are fine. Constraints (len=6, zeroed batch/channel pads) and channels_first/last handling look consistent; x=1 sentinel is appropriate for 1D.
Also applies to: 469-491
2773-2792
: Remove Y dimension for 1D ConvTranspose to avoid double-counting
Superclass parse setsdim_im_out_x
; drop thedim_im_out_y
assignment and updatebatchOffsetOut
to usedim_im_out_x
:@@ -2777,8 +2777,8 @@ def parseNodeCtxt(...): - self.operatorRepresentation['dim_im_out_y'] = out_shape[2] - self.operatorRepresentation["batchOffsetOut"] = self.operatorRepresentation["ch_im_out"] * self.operatorRepresentation["dim_im_out_y"] + # dim_im_out_x already set by base; remove Y dim for 1D + self.operatorRepresentation["batchOffsetOut"] = self.operatorRepresentation["ch_im_out"] * self.operatorRepresentation["dim_im_out_x"]Re-run FLOP/ops accounting for the 1D Autoencoder to confirm the corrected (lower) ops counts.
Deeploy/Targets/Generic/Platform.py (5)
33-52
: Imports updated correctly for new ops. Bindings/Layers/Parsers imports align with added features.
58-61
: Conv1D mapper switch to multi-binding list is OK. Moving to BasicConv1DBindings matches new float+int8 support.
77-78
: MaxPool now aggregates 1D and 2D. Mapper addition looks good.
94-96
: New mappers registered. BatchNorm and ConvTranspose mappers are wired as expected.
Is 2D ConvTranspose intentionally unsupported on Generic? If yes, consider naming ConvTranspose1D in mapping comment to avoid confusion.
123-141
: GenericMapping entries: confirm operator precedence. Order places 1D/2D Conv and DWConv together and MaxPool with 1D first; fine if type/checkers disambiguate. New BatchNormalization and ConvTranspose entries look consistent.
Deeploy/Targets/Generic/Bindings.py (5)
38-47
: Template imports are consistent. New Float/BN/ConvTranspose templates added correctly.
47-52
: TypeChecker imports are consistent. BatchNormChecker inclusion matches new BN bindings.
181-184
: MaxPool1D float binding LGTM. Mapped to FloatMaxPoolTemplate.reference1DTemplate with MaxPoolChecker.
203-210
: Pad1D float path LGTM. Extends to FloatPadTemplate 1D; consistent with 2D path.
310-319
: BatchNorm bindings LGTM. 5-input float binding with BatchNormChecker to BatchNormalizationTemplate is correct.
.github/workflows/CI.yml
Outdated
miniMobileNetv2
CCT/CCT_1_16_16_8
testFloatDemoTinyViT
Autoencoder1D
💡 Verification agent
🧩 Analysis chain
Autoencoder1D added to Generic models — looks good.
Please verify the test target exists to avoid “test not found” failures in the reusable workflow.
🏁 Script executed:
#!/bin/bash
# Expect: a test directory or config named Autoencoder1D under DeeployTest/Tests
fd -t d -H ^Autoencoder1D$ DeeployTest/Tests || true
rg -nP '^name\s*:\s*Autoencoder1D\b' DeeployTest/Tests -S -g '!**/third_party/**' || true
Length of output: 172
Add missing test target for Autoencoder1D
Directory DeeployTest/Tests/Autoencoder1D/ exists, but no test configuration with name: Autoencoder1D was found. Add or update the test manifest (e.g., YAML in that directory) to include name: Autoencoder1D so the reusable CI workflow can locate and run this test.
🤖 Prompt for AI Agents
.github/workflows/CI.yml around line 135: the CI references a test target
"Autoencoder1D" but no test manifest in DeeployTest/Tests/Autoencoder1D/
contains name: Autoencoder1D; add or update the test manifest YAML in that
directory to include a top-level "name: Autoencoder1D" (and any required fields
for the reusable workflow) so the reusable CI workflow can discover and run the
test.
class BatchNormParser(NodeParser):

    def __init__(self):
        super().__init__()

    def parseNode(self, node: gs.Node) -> bool:
        # Verify the attributes (epsilon is mandatory, momentum and training_mode are optional)
        if 'epsilon' not in node.attrs:
            return False
        # Common Inputs: 5 (X, scale, B, mean, var)
        if len(node.inputs) < 5:
            return False

        # Save the attributes, default values are provided if not present
        self.operatorRepresentation['epsilon'] = node.attrs.get('epsilon', 1e-5)
        self.operatorRepresentation['momentum'] = node.attrs.get('momentum', 0.9)
        self.operatorRepresentation['training_mode'] = node.attrs.get('training_mode', 0)

        return True

    def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True):
        inputs = ['data_in', 'scale', 'bias', 'mean', 'variance']
        outputs = ['data_out']

        for idx, inputNode in enumerate(node.inputs[:5]):
            self.operatorRepresentation[inputs[idx]] = ctxt.lookup(inputNode.name).name

        # Output (Y)
        self.operatorRepresentation[outputs[0]] = ctxt.lookup(node.outputs[0].name).name

        input_shape = ctxt.lookup(node.inputs[0].name).shape
        # Save input shape information
        self.operatorRepresentation['batch_size'] = input_shape[0]
        self.operatorRepresentation['channel_size'] = input_shape[1]
        self.operatorRepresentation['window_size'] = input_shape[2]

        return ctxt, True
🛠️ Refactor suggestion
BatchNorm: respect channels_first and validate rank
channels_first is ignored; shapes are assumed [N, C, W]. Handle both layouts and assert rank 3.
- def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True):
+ def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True):
@@
- input_shape = ctxt.lookup(node.inputs[0].name).shape
- # Save input shape information
- self.operatorRepresentation['batch_size'] = input_shape[0]
- self.operatorRepresentation['channel_size'] = input_shape[1]
- self.operatorRepresentation['window_size'] = input_shape[2]
+ input_shape = ctxt.lookup(node.inputs[0].name).shape
+ assert len(input_shape) == 3, f"BatchNorm expects 3D input, got {len(input_shape)}D"
+ self.operatorRepresentation['batch_size'] = input_shape[0]
+ if channels_first:
+ self.operatorRepresentation['channel_size'] = input_shape[1]
+ self.operatorRepresentation['window_size'] = input_shape[2]
+ else:
+ self.operatorRepresentation['channel_size'] = input_shape[2]
+ self.operatorRepresentation['window_size'] = input_shape[1]
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
class BatchNormParser(NodeParser):

    def __init__(self):
        super().__init__()

    def parseNode(self, node: gs.Node) -> bool:
        # Verify the attributes (epsilon is mandatory, momentum and training_mode are optional)
        if 'epsilon' not in node.attrs:
            return False
        # Common Inputs: 5 (X, scale, B, mean, var)
        if len(node.inputs) < 5:
            return False

        # Save the attributes, default values are provided if not present
        self.operatorRepresentation['epsilon'] = node.attrs.get('epsilon', 1e-5)
        self.operatorRepresentation['momentum'] = node.attrs.get('momentum', 0.9)
        self.operatorRepresentation['training_mode'] = node.attrs.get('training_mode', 0)

        return True

    def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True):
        inputs = ['data_in', 'scale', 'bias', 'mean', 'variance']
        outputs = ['data_out']

        for idx, inputNode in enumerate(node.inputs[:5]):
            self.operatorRepresentation[inputs[idx]] = ctxt.lookup(inputNode.name).name

        # Output (Y)
        self.operatorRepresentation[outputs[0]] = ctxt.lookup(node.outputs[0].name).name

        input_shape = ctxt.lookup(node.inputs[0].name).shape
        # Ensure a 3D input and respect the layout flag
        assert len(input_shape) == 3, f"BatchNorm expects 3D input, got {len(input_shape)}D"
        self.operatorRepresentation['batch_size'] = input_shape[0]
        if channels_first:
            self.operatorRepresentation['channel_size'] = input_shape[1]
            self.operatorRepresentation['window_size'] = input_shape[2]
        else:
            self.operatorRepresentation['channel_size'] = input_shape[2]
            self.operatorRepresentation['window_size'] = input_shape[1]

        return ctxt, True
🧰 Tools
🪛 Ruff (0.12.2)
2649-2649: Unused method argument: channels_first
(ARG002)
🤖 Prompt for AI Agents
In Deeploy/Targets/Generic/Parsers.py around lines 2629 to 2666, the
BatchNormParser currently ignores the channels_first flag and unconditionally
assumes input shape [N,C,W]; update parseNodeCtxt to first validate the input
tensor rank is exactly 3 (if shape is None or len(shape) != 3, return False),
then respect channels_first when extracting sizes: if channels_first is True use
batch=shape[0], channel=shape[1], window=shape[2]; if False use batch=shape[0],
channel=shape[2], window=shape[1]; keep existing entries 'batch_size',
'channel_size', 'window_size' and return ctxt, True.
GenericConv1D_Mapper = NodeMapper(GenericConv1DParser(), [BasicConv1DBindings])
GenericDWConv1D_Mapper = NodeMapper(GenericDWConv1DParser(), [BasicDWConv1DBinding])
GenericConv2D_Mapper = NodeMapper(GenericConv2DParser(), BasicConv2DBindings)
GenericDWConv2D_Mapper = NodeMapper(GenericDWConv2DParser(), BasicDWConv2DBindings)
🛠️ Refactor suggestion
Incorrect bindings container type passed to NodeMapper.
BasicConv1DBindings is already a list; wrapping it in [...] makes self.bindings nested and will fail at binder.typeCheck.
-GenericConv1D_Mapper = NodeMapper(GenericConv1DParser(), [BasicConv1DBindings])
+GenericConv1D_Mapper = NodeMapper(GenericConv1DParser(), BasicConv1DBindings)
Also matches the 2D case below for BasicConv2DBindings.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
GenericConv1D_Mapper = NodeMapper(GenericConv1DParser(), BasicConv1DBindings)
GenericDWConv1D_Mapper = NodeMapper(GenericDWConv1DParser(), [BasicDWConv1DBinding])
GenericConv2D_Mapper = NodeMapper(GenericConv2DParser(), BasicConv2DBindings)
GenericDWConv2D_Mapper = NodeMapper(GenericDWConv2DParser(), BasicDWConv2DBindings)
🤖 Prompt for AI Agents
In Deeploy/Targets/MemPool/Platform.py around lines 63 to 66, the first two
NodeMapper constructions wrap binding lists in an extra list (e.g.,
[BasicConv1DBindings]) which creates a nested list and breaks binder.typeCheck;
remove the outer brackets so GenericConv1D_Mapper receives BasicConv1DBindings
and GenericDWConv1D_Mapper receives BasicDWConv1DBinding directly (matching the
2D cases) ensuring self.bindings is the expected flat list.
#include <stdbool.h>
#include <stdint.h>

void ConvTranspose1d_fp32(const float32_t *input, uint32_t C_in, uint32_t W_in,
                          const float32_t *weight, uint32_t C_out, uint32_t K,
                          uint32_t stride, const float32_t *bias, bool has_bias,
                          float32_t *output, uint32_t W_out);

#endif // CONV_TRANSPOSE1D_FP32_H
🛠️ Refactor suggestion
Include the FP32 typedef and add C++ guards.
float32_t isn’t provided by stdint.h. Include DeeployBasicMath.h here and add extern "C" for C++ builds.
#include <stdbool.h>
#include <stdint.h>
+#include "DeeployBasicMath.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif
void ConvTranspose1d_fp32(const float32_t *input, uint32_t C_in, uint32_t W_in,
const float32_t *weight, uint32_t C_out, uint32_t K,
uint32_t stride, const float32_t *bias, bool has_bias,
float32_t *output, uint32_t W_out);
-#endif // CONV_TRANSPOSE1D_FP32_H
+#ifdef __cplusplus
+}
+#endif
+#endif // CONV_TRANSPOSE1D_FP32_H
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
#include <stdbool.h>
#include <stdint.h>
#include "DeeployBasicMath.h"

#ifdef __cplusplus
extern "C" {
#endif

void ConvTranspose1d_fp32(const float32_t *input, uint32_t C_in, uint32_t W_in,
                          const float32_t *weight, uint32_t C_out, uint32_t K,
                          uint32_t stride, const float32_t *bias, bool has_bias,
                          float32_t *output, uint32_t W_out);

#ifdef __cplusplus
}
#endif

#endif // CONV_TRANSPOSE1D_FP32_H
🤖 Prompt for AI Agents
In TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h around lines 4 to
12, float32_t is not defined and the header lacks C++ linkage guards; include
the header that defines float32_t (DeeployBasicMath.h) near the top and wrap the
function declaration with extern "C" guards so C++ compilers get C linkage. Keep
the existing include guards unchanged and ensure the extern "C" block only
applies when __cplusplus is defined.
@Aldrago98, can you rebase your PR and take a look at the unresolved conversations?
6c1b70b to 0761293 (Compare)
Signed-off-by: Aldrago98 <[email protected]>
Signed-off-by: Aldrago98 <[email protected]>
Signed-off-by: Aldrago98 <[email protected]>
Signed-off-by: Aldrago98 <[email protected]>
Signed-off-by: Aldrago98 <[email protected]>
0761293 to a4af405 (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (8)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (1)
51-66
: Fix undefined variables and enforce strict zip; remove duplicated warnings
- obj and nLevel are referenced in the else-branch but are undefined there; this will crash when signProp is False.
- Warnings are duplicated.
- Guard for None/mismatched lengths and use zip(strict=True) to satisfy Ruff B905 and avoid silent truncation.
Apply:
if nLevels is None or signedness is None: return ctxt - for obj, nLevel, sign in zip(outputs, nLevels, signedness): + if len(nLevels) != len(outputs) or len(signedness) != len(outputs): + raise ValueError("SignPropTypeChecker: inferred lengths must match outputs.") + for obj, nLevel, sign in zip(outputs, nLevels, signedness, strict=True): obj.nLevels = nLevel obj._signed = sign - else: - if issubclass(obj._type.referencedType, IntegerImmediate) and not obj._type.fitsNumLevels(nLevel): - log.warning( - f"{obj.name} has {nLevel} levels, but {obj._type.referencedType.typeName} only supports {obj._type.referencedType.nLevels} levels." - ) - - if issubclass(obj._type.referencedType, IntegerImmediate) and not obj._type.fitsNumLevels(nLevel): - log.warning( - f"{obj.name} has {nLevel} levels, but {obj._type.referencedType.typeName} only supports {obj._type.referencedType.nLevels} levels." - ) + else: + # Inputs missing sign/levels metadata; skip level checks. + passDeeploy/Targets/Generic/Layers.py (2)
668-684
: Harden pixel-count logic and clean comment. Support 1D/2D robustly and avoid the Italian comment.
- kernel_shape = np.prod(opRep['kernel_shape']) # es. [3, 3] -> 9 + kernel_shape = np.prod(opRep['kernel_shape']) # e.g., [3, 3] -> 9 @@ - # ConvTranspose upscales spatial dims, quindi num pixel viene da output - if 'dim_im_out_y' in opRep: - numPx = opRep['dim_im_out_x'] * opRep['dim_im_out_y'] - else: - numPx = opRep['dim_im_out_x'] + # Use output spatial size (1D or 2D) + if 'dim_im_out_x' in opRep and 'dim_im_out_y' in opRep: + numPx = opRep['dim_im_out_x'] * opRep['dim_im_out_y'] + elif 'dim_im_out_x' in opRep: + numPx = opRep['dim_im_out_x'] + elif 'dim_im_out_y' in opRep: + numPx = opRep['dim_im_out_y'] + else: + raise KeyError("ConvTransposeLayer: missing output spatial dims")
641-666
: Fix channels_last output-channel inference for ConvTranspose. The channels_last branch uses weight_shape[-2] (spatial), not C_out//group. Use weight_shape[1] for both 1D/2D and multiply by group.
else: - # For 1D: weight_shape = [C_in, C_out // group, kW] - # For 2D: weight_shape = [C_in, C_out // group, kH, kW] - ch_out = weight_shape[-2] * group + # For 1D/2D: weight_shape = [C_in, C_out // group, ...] + ch_out = weight_shape[1] * group newOutputShapes[0][-1] = ch_outDeeploy/Targets/Generic/TypeCheckers.py (1)
620-627
: ConvTranspose nLevels uses wrong channel dim; account for groups. ONNX ConvTranspose weight shape is (C, M/group, k...); accumulation should use input channels per group, not weight.shape[1]. Also support 'group'/'groups'.
- weight = inputs[1] - return [ - np.prod(operatorRepresentation['kernel_shape']) * weight.nLevels * weight.shape[1] * # ch_im_in - 2**(self.input_types[0].referencedType.typeWidth) - ] + weight = inputs[1] + groups = int(operatorRepresentation.get('group', operatorRepresentation.get('groups', 1)) or 1) + ch_in_per_group = int(weight.shape[0]) // groups + return [ + int(np.prod(operatorRepresentation['kernel_shape'])) * int(weight.nLevels) * ch_in_per_group * + 2**(self.input_types[0].referencedType.typeWidth) + ]Deeploy/Targets/Generic/Parsers.py (3)
2632-2669
: BatchNorm: respect channels_first, validate rank, and set nodeName/nodeOp
- Validate input is 3D.
- Use channels_first to extract C/W.
- Provide template identifiers.
def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True): @@ - input_shape = ctxt.lookup(node.inputs[0].name).shape - # Save input shape information - self.operatorRepresentation['batch_size'] = input_shape[0] - self.operatorRepresentation['channel_size'] = input_shape[1] - self.operatorRepresentation['window_size'] = input_shape[2] + input_shape = ctxt.lookup(node.inputs[0].name).shape + if input_shape is None or len(input_shape) != 3: + return ctxt, False + self.operatorRepresentation['batch_size'] = input_shape[0] + if channels_first: + self.operatorRepresentation['channel_size'] = input_shape[1] + self.operatorRepresentation['window_size'] = input_shape[2] + else: + self.operatorRepresentation['channel_size'] = input_shape[2] + self.operatorRepresentation['window_size'] = input_shape[1] + self.operatorRepresentation['nodeName'] = node.name + self.operatorRepresentation['nodeOp'] = node.op
2755-2774
: ConvTranspose1D.parseNode: return combined status. Currently returns wellFormed, ignoring 1D constraints; return both.
-        return wellFormed
+        return wellFormed and ret
2776-2796
: ConvTranspose1D.parseNodeCtxt: preserve newCtxt and validate ranks. Don’t drop the updated context; assert 3D tensors.
- newCtxt, ret = super().parseNodeCtxt(ctxt, node, channels_first) - - if ret: + newCtxt, ret = super().parseNodeCtxt(ctxt, node, channels_first) + if not ret: + return ctxt, False + # Expect N,C,W (or N,W,C) + if len(newCtxt.lookup(node.inputs[0].name).shape) != 3 or len(newCtxt.lookup(node.outputs[0].name).shape) != 3: + return newCtxt, False + if ret: @@ - return ctxt, True + return newCtxt, TrueDeeploy/Targets/Generic/Bindings.py (1)
23-27
: Critical: Missing import for ConvTransposeChecker (previously flagged).
ConvTransposeChecker is referenced on lines 299 and 306 but is not imported. This causes a NameError at import time.

Apply this diff:
 from Deeploy.Targets.Generic.TypeCheckers import AddChecker, BatchNormChecker, ConcatChecker, ConvChecker, \
-    DebugPrintChecker, DequantChecker, DivChecker, DummyChecker, GatherChecker, GELUChecker, GEMMChecker, \
+    ConvTransposeChecker, DebugPrintChecker, DequantChecker, DivChecker, DummyChecker, GatherChecker, GELUChecker, GEMMChecker, \
     LayerNormChecker, MatMulChecker, MaxPoolChecker, MulChecker, PadChecker, QuantChecker, ReduceMeanChecker, \
     ReduceSumChecker, ReluChecker, RequantShiftChecker, ReshapeChecker, RQIntegerDivChecker, SliceChecker, \
     SoftmaxChecker, TransposeChecker
🧹 Nitpick comments (4)
Deeploy/Targets/Generic/TypeCheckers.py (1)
606-613
: Silence ARG002 or use inputs in BatchNormChecker. Both methods ignore their parameters; prefix with underscore or add minimal assertions to satisfy Ruff ARG002.
- def _inferNumLevels(self, inputs: List[VariableBuffer], - operatorRepresentation: OperatorRepresentation) -> List[int]: + def _inferNumLevels(self, _inputs: List[VariableBuffer], + _operatorRepresentation: OperatorRepresentation) -> List[int]: return [2**(self.input_types[0].referencedType.typeWidth)] @@ - def _inferSignedness(self, inputs: List[VariableBuffer], - operatorRepresentation: OperatorRepresentation) -> List[bool]: + def _inferSignedness(self, _inputs: List[VariableBuffer], + _operatorRepresentation: OperatorRepresentation) -> List[bool]: return [True]Deeploy/Targets/Generic/Parsers.py (3)
343-347
: Narrow exception and drop unused var in PadParser. Avoid broad except and unused variable; keep pads as a list when conversion fails.
- try: - self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']] - except Exception as e: - self.operatorRepresentation['pads'] = node.attrs['pads'] + try: + self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']] + except (ValueError, TypeError): + self.operatorRepresentation['pads'] = list(node.attrs['pads'])
1381-1401
: Remove duplicate batchOffset computations in Conv1DParser. The same expressions are assigned twice.
self.operatorRepresentation['batchOffsetOut'] = self.operatorRepresentation[ 'ch_im_out'] * self.operatorRepresentation['dim_im_out_y'] - self.operatorRepresentation[ - 'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y'] - self.operatorRepresentation['batchOffsetOut'] = self.operatorRepresentation[ - 'ch_im_out'] * self.operatorRepresentation['dim_im_out_y']
2201-2212
: Avoid raising in parser loop; guard length instead. The IndexError here is avoidable; just check the length before indexing.
- for idx, inputNode in enumerate(node.inputs): - if idx >= len(inputs): - raise IndexError( - f"Index {idx} out of range for inputs of length {len(inputs)} in node {inputNode.name}") - self.operatorRepresentation[inputs[idx]] = ctxt.lookup(inputNode.name).name + if len(node.inputs) > len(inputs): + return newCtxt, False + for idx, inputNode in enumerate(node.inputs): + self.operatorRepresentation[inputs[idx]] = ctxt.lookup(inputNode.name).name
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)
CHANGELOG.md (3 hunks)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (1 hunks)
Deeploy/DeeployTypes.py (0 hunks)
Deeploy/Targets/Generic/Bindings.py (5 hunks)
Deeploy/Targets/Generic/Layers.py (1 hunks)
Deeploy/Targets/Generic/Parsers.py (9 hunks)
Deeploy/Targets/Generic/Platform.py (5 hunks)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatConvTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatMaxPoolTemplate.py (1 hunks)
Deeploy/Targets/Generic/Templates/FloatPadTemplate.py (1 hunks)
Deeploy/Targets/Generic/TypeCheckers.py (1 hunks)
Deeploy/Targets/MemPool/Platform.py (2 hunks)
💤 Files with no reviewable changes (1)
- Deeploy/DeeployTypes.py
🚧 Files skipped from review as they are similar to previous changes (6)
- Deeploy/Targets/Generic/Templates/FloatPadTemplate.py
- Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py
- Deeploy/Targets/MemPool/Platform.py
- Deeploy/Targets/Generic/Templates/FloatConvTemplate.py
- Deeploy/Targets/Generic/Templates/FloatMaxPoolTemplate.py
- CHANGELOG.md
🧰 Additional context used
🧬 Code graph analysis (7)
Deeploy/Targets/Generic/Bindings.py (4)
Deeploy/Targets/Generic/TypeCheckers.py (6)
BatchNormChecker
(601-612)ConvChecker
(456-474)MaxPoolChecker
(439-453)PadChecker
(71-85)TransposeChecker
(54-68)ConvTransposeChecker
(615-632)Deeploy/DeeployTypes.py (1)
NodeBinding
(1568-1712)Deeploy/AbstractDataTypes.py (1)
PointerClass
(536-559)Deeploy/CommonExtensions/DataTypes.py (2)
int8_t
(12-15)int32_t
(24-27)
Deeploy/Targets/Generic/TypeCheckers.py (3)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (3)
SignPropTypeChecker
(14-67)_inferNumLevels
(16-18)_inferSignedness
(20-22)Deeploy/AbstractDataTypes.py (3)
Pointer
(303-388)typeWidth
(399-400)nLevels
(194-195)Deeploy/DeeployTypes.py (2)
inputs
(2558-2575)VariableBuffer
(232-391)
Deeploy/Targets/Generic/Parsers.py (1)
Deeploy/DeeployTypes.py (7)
parseNode
(1089-1104)parseNodeCtxt
(1107-1132)lookup
(776-808)inputs
(2558-2575)NodeParser
(1079-1254)outputs
(2577-2594)NetworkContext
(564-1076)
Deeploy/Targets/Generic/Layers.py (1)
Deeploy/DeeployTypes.py (4)
ONNXLayer
(1874-2202)NodeMapper
(1715-1871)computeOps
(1890-1895)computeShapes
(1899-1921)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(87-229)
Deeploy/CommonExtensions/TypeCheckers/SignPropTypeChecker.py (1)
Deeploy/AbstractDataTypes.py (4)
IntegerImmediate
(173-224)fitsNumLevels
(223-224)fitsNumLevels
(346-347)nLevels
(194-195)
Deeploy/Targets/Generic/Platform.py (3)
Deeploy/Targets/Generic/Layers.py (3)
TransposeLayer
(441-444)BatchNormalizationLayer
(623-633)ConvTransposeLayer
(636-684)Deeploy/Targets/Generic/Parsers.py (5)
TransposeParser
(158-185)BatchNormParser
(2632-2668)ConvTransposeParser
(2671-2746)MaxPool1DParser
(224-263)ConvTranspose1DParser
(2749-2796)Deeploy/DeeployTypes.py (1)
NodeMapper
(1715-1871)
🪛 Ruff (0.14.0)
Deeploy/Targets/Generic/Bindings.py
299-299: Undefined name ConvTransposeChecker
(F821)
306-306: Undefined name ConvTransposeChecker
(F821)
Deeploy/Targets/Generic/TypeCheckers.py
606-606: Unused method argument: inputs
(ARG002)
607-607: Unused method argument: operatorRepresentation
(ARG002)
610-610: Unused method argument: inputs
(ARG002)
611-611: Unused method argument: operatorRepresentation
(ARG002)
630-630: Unused method argument: operatorRepresentation
(ARG002)
Deeploy/Targets/Generic/Parsers.py
345-345: Do not catch blind exception: Exception
(BLE001)
345-345: Local variable e
is assigned to but never used
Remove assignment to unused variable e
(F841)
2210-2211: Avoid specifying long messages outside the exception class
(TRY003)
2652-2652: Unused method argument: channels_first
(ARG002)
2697-2697: Unused method argument: channels_first
(ARG002)
2730-2730: Local variable kernel_shape_y
is assigned to but never used
Remove assignment to unused variable kernel_shape_y
(F841)
Deeploy/Targets/Generic/Platform.py
33-33: Undefined name BasicConv1DBindings
(F821)
🔇 Additional comments (13)
Deeploy/Targets/Generic/Parsers.py (3)
229-244
: MaxPool1D: spec checks and fields look good. Shape/attr validation and operatorRepresentation fields are consistent.
245-264
: MaxPool1D parseNodeCtxt: LGTM. Correct handling of channels_first and 3D rank check.
2676-2696
: ConvTranspose parseNode: LGTM. Attributes are captured with sensible defaults; nodeName/nodeOp provided.
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
3-11
: No action required—the parser already sets nodeName/nodeOp.The verification shows the BatchNormParser sets both
operatorRepresentation['nodeName']
andoperatorRepresentation['nodeOp']
at lines 2693–2694. The template placeholders on line 4 will resolve without error.Likely an incorrect or invalid review comment.
Deeploy/Targets/Generic/Platform.py (4)
33-33
: LGTM once import is fixed.The update from
BasicConv1DBinding
toBasicConv1DBindings
correctly reflects the refactor inBindings.py
that now provides per-type bindings (float and int8) for 1D convolution. Ensure the import on line 9 is corrected as noted above.
51-51
: LGTM!The new
MaxPool1DMapper
follows established patterns and is correctly wired intoGenericMapping
at line 97 alongside the existing 2D variant.
68-69
: LGTM!The new mappers for batch normalization and transposed convolution follow established patterns and are correctly integrated into the platform mapping.
97-97
: LGTM!The
GenericMapping
updates correctly expose the new 1D/2D pooling variants and the new BatchNormalization and ConvTranspose operations. The structure follows existing conventions.Also applies to: 113-114
Deeploy/Targets/Generic/Bindings.py (5)
57-64
: LGTM!The refactor from a single
BasicConv1DBinding
toBasicConv1DBindings
list correctly enables multi-type support (float and int8) for 1D convolution, following the same pattern as the 2D variant. This aligns with the usage inPlatform.py
line 33.
157-160
: LGTM!The new
BasicMaxPool1DBindings
correctly implements 1D max pooling for floating-point types, following the established binding pattern.
179-185
: LGTM!The extension of
BasicPad1DBindings
to support floating-point types alongside the existing signed integer types is consistent with the 2D padding pattern and correctly usesFloatPadTemplate.reference1DTemplate
.
286-295
: LGTM!The new
BasicBatchNormBindings
correctly implements batch normalization for floating-point types with the standard 5-input signature (data, scale, bias, mean, variance). The structure follows established patterns.
297-311
: LGTM once import is fixed.The new
BasicConvTransposeBindings
correctly handles both biased and unbiased transposed convolution variants for floating-point types. The structure is sound, but ensureConvTransposeChecker
is imported as noted in the comment on lines 23-27.
def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True):
    # Register buffer names for codegen
    self.operatorRepresentation['data_in'] = node.inputs[0].name
    self.operatorRepresentation['weight'] = node.inputs[1].name
    self.operatorRepresentation['data_out'] = node.outputs[0].name
    if len(node.inputs) == 3:
        self.operatorRepresentation['bias'] = node.inputs[2].name
        self.operatorRepresentation['has_bias'] = True
    else:
        self.operatorRepresentation['has_bias'] = False
    # Get output shape from context
    data_out = ctxt.lookup(node.outputs[0].name)
    out_shape = data_out.shape
    if len(out_shape) == 3:
        self.operatorRepresentation['dim_im_out_x'] = out_shape[2]
    elif len(out_shape) == 4:
        self.operatorRepresentation['dim_im_out_x'] = out_shape[2]
        self.operatorRepresentation['dim_im_out_y'] = out_shape[3]

    stride_x, stride_y = 1, 1
    if "strides" in node.attrs:
        stride_y = node.attrs["strides"][0]
        stride_x = node.attrs["strides"][1] if len(node.attrs["strides"]) > 1 else stride_y
    self.operatorRepresentation["stride_y"] = stride_y
    self.operatorRepresentation["stride_x"] = stride_x

    if "kernel_shape" in node.attrs:
        kernel_shape = node.attrs["kernel_shape"]
        kernel_shape_x = kernel_shape[0]
        # For 2D, kernel_shape may have two elements
        kernel_shape_y = kernel_shape[1] if len(kernel_shape) > 1 else kernel_shape_x
    else:
        kernel_shape_x = 1
        kernel_shape_y = 1

    data_in = ctxt.lookup(node.inputs[0].name)
    data_out = ctxt.lookup(node.outputs[0].name)
    in_shape = data_in.shape
    out_shape = data_out.shape

    self.operatorRepresentation['ch_im_in'] = in_shape[1]
    self.operatorRepresentation['dim_im_in_y'] = in_shape[2]
    self.operatorRepresentation['ch_im_out'] = out_shape[1]
    self.operatorRepresentation['dim_im_out_y'] = out_shape[2]

    self.operatorRepresentation[
        'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y']
    self.operatorRepresentation[
        'batchOffsetOut'] = self.operatorRepresentation['ch_im_out'] * self.operatorRepresentation['dim_im_out_y']
    return ctxt, True
🛠️ Refactor suggestion | 🟠 Major
ConvTranspose parseNodeCtxt: honor channels_first and avoid unused vars
- Extract C/spatial based on channels_first for 1D and 2D.
- Remove unused kernel_shape_y and compute dims accordingly.
- Keep batch offsets consistent.
-        if len(out_shape) == 3:
-            self.operatorRepresentation['dim_im_out_x'] = out_shape[2]
-        elif len(out_shape) == 4:
-            self.operatorRepresentation['dim_im_out_x'] = out_shape[2]
-            self.operatorRepresentation['dim_im_out_y'] = out_shape[3]
+        if len(out_shape) == 3:
+            if channels_first:
+                self.operatorRepresentation['dim_im_out_y'] = out_shape[2]
+                self.operatorRepresentation['ch_im_out'] = out_shape[1]
+            else:
+                self.operatorRepresentation['dim_im_out_y'] = out_shape[1]
+                self.operatorRepresentation['ch_im_out'] = out_shape[2]
+        elif len(out_shape) == 4:
+            if channels_first:
+                self.operatorRepresentation['dim_im_out_x'] = out_shape[2]
+                self.operatorRepresentation['dim_im_out_y'] = out_shape[3]
+                self.operatorRepresentation['ch_im_out'] = out_shape[1]
+            else:
+                self.operatorRepresentation['dim_im_out_x'] = out_shape[1]
+                self.operatorRepresentation['dim_im_out_y'] = out_shape[2]
+                self.operatorRepresentation['ch_im_out'] = out_shape[3]
 @@
-        if "kernel_shape" in node.attrs:
-            kernel_shape = node.attrs["kernel_shape"]
-            kernel_shape_x = kernel_shape[0]
-            # For 2D, kernel_shape may have two elements
-            kernel_shape_y = kernel_shape[1] if len(kernel_shape) > 1 else kernel_shape_x
-        else:
-            kernel_shape_x = 1
-            kernel_shape_y = 1
+        # kernel_shape handled in parseNode; nothing extra needed here
 @@
-        self.operatorRepresentation['ch_im_in'] = in_shape[1]
-        self.operatorRepresentation['dim_im_in_y'] = in_shape[2]
-        self.operatorRepresentation['ch_im_out'] = out_shape[1]
-        self.operatorRepresentation['dim_im_out_y'] = out_shape[2]
+        if len(in_shape) == 3:
+            if channels_first:
+                self.operatorRepresentation['ch_im_in'] = in_shape[1]
+                self.operatorRepresentation['dim_im_in_y'] = in_shape[2]
+            else:
+                self.operatorRepresentation['ch_im_in'] = in_shape[2]
+                self.operatorRepresentation['dim_im_in_y'] = in_shape[1]
+        elif len(in_shape) == 4:
+            if channels_first:
+                self.operatorRepresentation['ch_im_in'] = in_shape[1]
+            else:
+                self.operatorRepresentation['ch_im_in'] = in_shape[3]
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.14.0)
2697-2697: Unused method argument: channels_first (ARG002)
2730-2730: Local variable kernel_shape_y is assigned to but never used. Remove assignment to unused variable kernel_shape_y (F841)
Actionable comments posted: 1
♻️ Duplicate comments (6)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1)
68-77
: Critical: Batch loop doesn't advance input/output pointers. All batch iterations process the first batch slice because the pointers are never incremented. This causes incorrect results for batched inputs. Apply this diff to fix the pointer arithmetic:
 for (uint32_t n=0; n<${batch}; ++n) {
+    ${data_in_type.typeName} * in_ptr = ref_${data_out}_${data_in} + n * batchOffsetIn;
+    ${data_out_type.typeName} * out_ptr = ref_${data_out}_${data_out} + n * batchOffsetOut;
     ConvTranspose1d_fp32(
-        ref_${data_out}_${data_in}, ${ch_im_in}, ${dim_im_in_y},
+        in_ptr, ${ch_im_in}, ${dim_im_in_y},
         ${weight}, ${ch_im_out}, ${dim_kernel_y}, ${stride_y},
         ${bias}, ${has_bias},
-        ref_${data_out}_${data_out}, ${dim_im_out_y}
+        out_ptr, ${dim_im_out_y}
     );
 }
Deeploy/Targets/Generic/Parsers.py (4)
for (uint32_t n=0; n<${batch}; ++n) { + ${data_in_type.typeName} * in_ptr = ref_${data_out}_${data_in} + n * batchOffsetIn; + ${data_out_type.typeName} * out_ptr = ref_${data_out}_${data_out} + n * batchOffsetOut; ConvTranspose1d_fp32( - ref_${data_out}_${data_in}, ${ch_im_in}, ${dim_im_in_y}, + in_ptr, ${ch_im_in}, ${dim_im_in_y}, ${weight}, ${ch_im_out}, ${dim_kernel_y}, ${stride_y}, ${bias}, ${has_bias}, - ref_${data_out}_${data_out}, ${dim_im_out_y} + out_ptr, ${dim_im_out_y} ); }Deeploy/Targets/Generic/Parsers.py (4)
2632-2668
: BatchNormParser: Validate rank and respect channels_first parameter.The parser assumes input shape is always [N, C, W] and ignores the
channels_first
parameter. This breaks when data uses channels-last layout or has unexpected rank.Apply this diff to add validation and handle both layouts:
def parseNodeCtxt(self, ctxt, node: gs.Node, channels_first: bool = True): inputs = ['data_in', 'scale', 'bias', 'mean', 'variance'] outputs = ['data_out'] for idx, inputNode in enumerate(node.inputs[:5]): self.operatorRepresentation[inputs[idx]] = ctxt.lookup(inputNode.name).name # Output (Y) self.operatorRepresentation[outputs[0]] = ctxt.lookup(node.outputs[0].name).name input_shape = ctxt.lookup(node.inputs[0].name).shape + # Validate 3D input + if len(input_shape) != 3: + return ctxt, False # Save input shape information self.operatorRepresentation['batch_size'] = input_shape[0] - self.operatorRepresentation['channel_size'] = input_shape[1] - self.operatorRepresentation['window_size'] = input_shape[2] + if channels_first: + self.operatorRepresentation['channel_size'] = input_shape[1] + self.operatorRepresentation['window_size'] = input_shape[2] + else: + self.operatorRepresentation['channel_size'] = input_shape[2] + self.operatorRepresentation['window_size'] = input_shape[1] return ctxt, True
2697-2746
: ConvTransposeParser: Honor channels_first and remove unused variables.The parser ignores the
channels_first
parameter when extracting shape dimensions and declareskernel_shape_y
without using it. This causes incorrect shape extraction for channels-last layouts.The implementation needs to:
- Respect
channels_first
when extracting channels and spatial dimensions for both 1D and 2D cases- Remove unused
kernel_shape_y
variableApply this diff:
def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True): # Register buffer names for codegen self.operatorRepresentation['data_in'] = node.inputs[0].name self.operatorRepresentation['weight'] = node.inputs[1].name self.operatorRepresentation['data_out'] = node.outputs[0].name if len(node.inputs) == 3: self.operatorRepresentation['bias'] = node.inputs[2].name self.operatorRepresentation['has_bias'] = True else: self.operatorRepresentation['has_bias'] = False - # Get output shape from context - data_out = ctxt.lookup(node.outputs[0].name) - out_shape = data_out.shape - if len(out_shape) == 3: - self.operatorRepresentation['dim_im_out_x'] = out_shape[2] - elif len(out_shape) == 4: - self.operatorRepresentation['dim_im_out_x'] = out_shape[2] - self.operatorRepresentation['dim_im_out_y'] = out_shape[3] - - stride_x, stride_y = 1, 1 - if "strides" in node.attrs: - stride_y = node.attrs["strides"][0] - stride_x = node.attrs["strides"][1] if len(node.attrs["strides"]) > 1 else stride_y - self.operatorRepresentation["stride_y"] = stride_y - self.operatorRepresentation["stride_x"] = stride_x - - if "kernel_shape" in node.attrs: - kernel_shape = node.attrs["kernel_shape"] - kernel_shape_x = kernel_shape[0] - # For 2D, kernel_shape may have two elements - kernel_shape_y = kernel_shape[1] if len(kernel_shape) > 1 else kernel_shape_x - else: - kernel_shape_x = 1 - kernel_shape_y = 1 data_in = ctxt.lookup(node.inputs[0].name) data_out = ctxt.lookup(node.outputs[0].name) in_shape = data_in.shape out_shape = data_out.shape - self.operatorRepresentation['ch_im_in'] = in_shape[1] - self.operatorRepresentation['dim_im_in_y'] = in_shape[2] - self.operatorRepresentation['ch_im_out'] = out_shape[1] - self.operatorRepresentation['dim_im_out_y'] = out_shape[2] + # Extract input/output dimensions respecting channels_first + if len(in_shape) == 3: + if channels_first: + self.operatorRepresentation['ch_im_in'] = in_shape[1] + self.operatorRepresentation['dim_im_in_y'] = in_shape[2] + else: + self.operatorRepresentation['ch_im_in'] = in_shape[2] + self.operatorRepresentation['dim_im_in_y'] = in_shape[1] + + if len(out_shape) == 3: + if channels_first: + self.operatorRepresentation['ch_im_out'] = out_shape[1] + self.operatorRepresentation['dim_im_out_y'] = out_shape[2] + else: + self.operatorRepresentation['ch_im_out'] = out_shape[2] + self.operatorRepresentation['dim_im_out_y'] = out_shape[1] + elif len(out_shape) == 4: + if channels_first: + self.operatorRepresentation['ch_im_out'] = out_shape[1] + self.operatorRepresentation['dim_im_out_x'] = out_shape[2] + self.operatorRepresentation['dim_im_out_y'] = out_shape[3] + else: + self.operatorRepresentation['ch_im_out'] = out_shape[3] + self.operatorRepresentation['dim_im_out_x'] = out_shape[1] + self.operatorRepresentation['dim_im_out_y'] = out_shape[2] self.operatorRepresentation[ 'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y'] self.operatorRepresentation[ 'batchOffsetOut'] = self.operatorRepresentation['ch_im_out'] * self.operatorRepresentation['dim_im_out_y'] return ctxt, True
2754-2773
: Bug: parseNode returns wrong validation status.The method computes
ret
for 1D-specific constraints but returnswellFormed
from the superclass. This allows invalid 1D configurations to pass validation.Apply this diff:
def parseNode(self, node: gs.Node) -> bool: # 1D ConvTranspose expects 3D input/output and 3D weight wellFormed = super().parseNode(node) ret = False if wellFormed: ret = all([ # Make sure strides are 2D len(node.attrs['strides']) == 1, len(node.attrs['pads']) == 2, len(node.attrs['dilations']) == 1, ]) if ret: self.operatorRepresentation['kernel_shape'] = node.attrs['kernel_shape'] self.operatorRepresentation['dim_kernel_y'] = int(self.operatorRepresentation['kernel_shape'][0]) self.operatorRepresentation['dilation_y'] = int(self.operatorRepresentation['dilations'][0]) self.operatorRepresentation['padding_y'] = int(self.operatorRepresentation['pads'][0]) self.operatorRepresentation['stride_y'] = int(self.operatorRepresentation['strides'][0]) - return wellFormed + return ret
2775-2795
: Critical: parseNodeCtxt discards updated context and skips validation.Returning
(ctxt, True)
discardsnewCtxt
from the superclass and unconditionally returns success, bypassing shape validation. This can lead to runtime errors with invalid configurations.Apply this diff:
def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True) -> Tuple[NetworkContext, bool]: newCtxt, ret = super().parseNodeCtxt(ctxt, node, channels_first) + if not ret: + return ctxt, False + if ret: data_in = newCtxt.lookup(node.inputs[0].name) data_out = newCtxt.lookup(node.outputs[0].name) in_shape = data_in.shape out_shape = data_out.shape + + # Validate 3D tensors + if len(in_shape) != 3 or len(out_shape) != 3: + return ctxt, False + self.operatorRepresentation['ch_im_in'] = in_shape[1] self.operatorRepresentation['dim_im_in_y'] = in_shape[2] self.operatorRepresentation['ch_im_out'] = out_shape[1] self.operatorRepresentation['dim_im_out_y'] = out_shape[2] self.operatorRepresentation[ "batchOffsetIn"] = self.operatorRepresentation["ch_im_in"] * self.operatorRepresentation["dim_im_in_y"] self.operatorRepresentation["batchOffsetOut"] = self.operatorRepresentation[ "ch_im_out"] * self.operatorRepresentation["dim_im_out_y"] - return ctxt, True + return newCtxt, TrueDeeploy/Targets/Generic/Bindings.py (1)
23-27
: Critical: Missing import for ConvTransposeChecker.Lines 299 and 306 reference
ConvTransposeChecker
, but it's not imported. This will cause aNameError
at import time.Apply this diff:
from Deeploy.Targets.Generic.TypeCheckers import AddChecker, BatchNormChecker, ConcatChecker, ConvChecker, \ - DebugPrintChecker, DequantChecker, DivChecker, DummyChecker, GatherChecker, GELUChecker, \ - GEMMChecker, LayerNormChecker, MatMulChecker, MaxPoolChecker, MulChecker, PadChecker, QuantChecker, \ - ReduceMeanChecker, ReduceSumChecker, ReluChecker, RequantShiftChecker, ReshapeChecker, RQIntegerDivChecker, \ - SliceChecker, SoftmaxChecker, TransposeChecker + ConvTransposeChecker, DebugPrintChecker, DequantChecker, DivChecker, DummyChecker, GatherChecker, GELUChecker, \ + GEMMChecker, LayerNormChecker, MatMulChecker, MaxPoolChecker, MulChecker, PadChecker, QuantChecker, \ + ReduceMeanChecker, ReduceSumChecker, ReluChecker, RequantShiftChecker, ReshapeChecker, RQIntegerDivChecker, \ + SliceChecker, SoftmaxChecker, TransposeChecker
🧹 Nitpick comments (2)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1)
14-15
: Remove duplicate comment.The comment "# input/output tensors" appears twice consecutively.
Apply this diff:
def alignToContext(self, ctxt: NetworkContext, operatorRepresentation: OperatorRepresentation) -> Tuple[NetworkContext, Dict, List[str]]: # input/output tensors - # input/output tensors data_in = ctxt.lookup(operatorRepresentation["data_in"]) data_out = ctxt.lookup(operatorRepresentation["data_out"])
Deeploy/Targets/Generic/Parsers.py (1)
343-347
: Avoid catching blind exceptions. The broad except Exception clause masks the actual error, and the caught exception e is never used. Consider catching specific exceptions, or removing the try-except if the int conversion is expected to always succeed. Apply this diff:
if ret: self.operatorRepresentation['mode'] = node.attrs['mode'] - try: - self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']] - except Exception as e: - self.operatorRepresentation['pads'] = node.attrs['pads'] - # self.operatorRepresentation['pads'] = node.attrs['pads'] + # Handle both list and scalar pad values + pads = node.attrs['pads'] + if isinstance(pads, (list, tuple)): + self.operatorRepresentation['pads'] = [int(p) for p in pads] + else: + self.operatorRepresentation['pads'] = pads self.operatorRepresentation['value'] = node.attrs['value']
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
Deeploy/Targets/Generic/Bindings.py
(5 hunks)Deeploy/Targets/Generic/Parsers.py
(9 hunks)Deeploy/Targets/Generic/Platform.py
(5 hunks)Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py
(1 hunks)
🧰 Additional context used
🧬 Code graph analysis (4)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (3)
Deeploy/DeeployTypes.py (4)
NetworkContext
(564-1076)NodeTemplate
(87-229)alignToContext
(119-139)lookup
(776-808)Deeploy/Targets/Generic/Templates/ConvTemplate.py (1)
alignToContext
(15-28)Deeploy/AbstractDataTypes.py (1)
nLevels
(194-195)
Deeploy/Targets/Generic/Parsers.py (1)
Deeploy/DeeployTypes.py (7)
parseNode
(1089-1104)parseNodeCtxt
(1107-1132)lookup
(776-808)inputs
(2558-2575)NodeParser
(1079-1254)outputs
(2577-2594)NetworkContext
(564-1076)
Deeploy/Targets/Generic/Bindings.py (4)
Deeploy/Targets/Generic/TypeCheckers.py (6)
BatchNormChecker
(601-612)ConvChecker
(456-474)ConvTransposeChecker
(615-632)MaxPoolChecker
(439-453)PadChecker
(71-85)TransposeChecker
(54-68)Deeploy/DeeployTypes.py (1)
NodeBinding
(1568-1712)Deeploy/AbstractDataTypes.py (1)
PointerClass
(536-559)Deeploy/CommonExtensions/DataTypes.py (2)
int8_t
(12-15)int32_t
(24-27)
Deeploy/Targets/Generic/Platform.py (3)
Deeploy/Targets/Generic/Layers.py (4)
BatchNormalizationLayer
(623-633)ConvTransposeLayer
(636-684)MaxPoolLayer
(380-391)TransposeLayer
(441-444)Deeploy/Targets/Generic/Parsers.py (4)
BatchNormParser
(2632-2668)ConvTranspose1DParser
(2749-2795)MaxPool1DParser
(224-263)TransposeParser
(158-185)Deeploy/DeeployTypes.py (1)
NodeMapper
(1715-1871)
🪛 Ruff (0.14.0)
Deeploy/Targets/Generic/Parsers.py
345-345: Do not catch blind exception: Exception
(BLE001)
345-345: Local variable e is assigned to but never used. Remove assignment to unused variable e (F841)
2210-2211: Avoid specifying long messages outside the exception class
(TRY003)
2652-2652: Unused method argument: channels_first
(ARG002)
2697-2697: Unused method argument: channels_first
(ARG002)
2730-2730: Local variable kernel_shape_y is assigned to but never used. Remove assignment to unused variable kernel_shape_y (F841)
Deeploy/Targets/Generic/Platform.py
35-35: Undefined name BasicConv1DBindings
(F821)
🔇 Additional comments (9)
Deeploy/Targets/Generic/Platform.py (4)
9-16
: LGTM! The imports correctly reference the new bindings and parsers. The static analysis warning about BasicConv1DBindings on line 35 is a false positive; the symbol is properly defined in Bindings.py lines 57-64.
35-35
: LGTM! The Conv1DMapper correctly uses BasicConv1DBindings (plural), which is defined as a list in Bindings.py lines 57-64. This replaces the previous single-binding approach with multi-type bindings supporting both float and int8 types.
70-71
: LGTM! The new mappers for BatchNormalization and ConvTranspose are correctly instantiated with their corresponding parsers and bindings.
99-116
: LGTM! The mapping updates correctly:
- MaxPool now supports both 1D and 2D variants via the mapper list
- New BatchNormalization and ConvTranspose entries are properly wired to their respective layers
Deeploy/Targets/Generic/Bindings.py (5)
57-64
: LGTM! The refactored BasicConv1DBindings correctly provides multi-type support: FloatDataTypes use the new FloatConvTemplate, while int8_t retains the original ConvTemplate. This enables proper float32 1D convolution support.
157-160
: LGTM! The new BasicMaxPool1DBindings correctly maps FloatDataTypes to the 1D float max pooling template, enabling 1D pooling support for floating-point data.
179-185
: LGTM! The extended BasicPad1DBindings adds float type support using FloatPadTemplate while preserving existing integer bindings.
286-295
: LGTM! BasicBatchNormBindings correctly wires FloatDataTypes to BatchNormalizationTemplate with appropriate type checking for the 5 input tensors (data, scale, bias, mean, variance).
297-311
: LGTM! BasicConvTransposeBindings properly handles both 3-input (with bias) and 2-input (no bias) ConvTranspose configurations for FloatDataTypes. The bindings correctly use ConvTransposeChecker and ConvTransposeTemplate.
Actionable comments posted: 0
♻️ Duplicate comments (3)
TargetLibraries/Generic/inc/kernel/BatchNorm.h (1)
8-14
: Include DeeployBasicMath.h to define float32_t. float32_t is not a standard type provided by stdint.h. The implementation file includes DeeployBasicMath.h, and this header should do the same. Apply this diff:
 #include <stdbool.h>
 #include <stdint.h>
+#include "DeeployBasicMath.h"

 void BatchNorm_fp32(const float32_t *input, const float32_t *gamma, const float32_t *beta,
                     const float32_t *mean, const float32_t *var, float32_t *output,
                     int N, int C, int L);
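For reference, a minimal sketch of the inference-time computation behind this prototype, assuming per-channel statistics over an [N, C, L] tensor and a fixed epsilon; the epsilon handling is an assumption, plain float stands in for float32_t, and this is not the PR's implementation (that lives in TargetLibraries/Generic/src/BatchNorm_fp32.c):

#include <math.h>

// Illustrative sketch only: y = gamma * (x - mean) / sqrt(var + eps) + beta,
// folded into a per-channel scale and shift.
void batchnorm_fp32_sketch(const float *input, const float *gamma, const float *beta,
                           const float *mean, const float *var, float *output,
                           int N, int C, int L) {
  const float eps = 1e-5f;  // assumed value, not taken from the PR
  for (int n = 0; n < N; ++n) {
    for (int c = 0; c < C; ++c) {
      const float scale = gamma[c] / sqrtf(var[c] + eps);
      const float shift = beta[c] - mean[c] * scale;
      for (int l = 0; l < L; ++l) {
        const int idx = (n * C + c) * L + l;
        output[idx] = input[idx] * scale + shift;
      }
    }
  }
}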
TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h (1)
8-14
: Include DeeployBasicMath.h and add C++ linkage guards. float32_t is not a standard type, and C++ builds require extern "C" linkage. The implementation file includes DeeployBasicMath.h, so this header should as well. Apply this diff:
 #include <stdbool.h>
 #include <stdint.h>
+#include "DeeployBasicMath.h"
+
+#ifdef __cplusplus
+extern "C" {
+#endif

 void ConvTranspose1d_fp32(const float32_t *input, uint32_t C_in, uint32_t W_in,
                           const float32_t *weight, uint32_t C_out, uint32_t K,
                           uint32_t stride, const float32_t *bias, bool has_bias,
                           float32_t *output, uint32_t W_out);

+#ifdef __cplusplus
+}
+#endif
 #endif // CONV_TRANSPOSE1D_FP32_H
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1)
72-81
: Critical: Batch loop doesn't advance input/output pointers. The for loop iterates over batches but reuses the same input/output pointers for every iteration, causing all batches to process the first batch's data repeatedly.
Apply this diff:
 for (uint32_t n=0; n<${batch}; ++n) {
+    ${data_in_type.typeName} *in_ptr = ref_${data_out}_${data_in} + n * batchOffsetIn;
+    ${data_out_type.typeName} *out_ptr = ref_${data_out}_${data_out} + n * batchOffsetOut;
     ConvTranspose1d_fp32(
-        ref_${data_out}_${data_in}, ${ch_im_in}, ${dim_im_in_y},
+        in_ptr, ${ch_im_in}, ${dim_im_in_y},
         ${weight}, ${ch_im_out}, ${dim_kernel_y}, ${stride_y},
         ${bias}, ${has_bias},
-        ref_${data_out}_${data_out}, ${dim_im_out_y}
+        out_ptr, ${dim_im_out_y}
     );
 }
🧹 Nitpick comments (1)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (1)
18-19
: Remove duplicate comment. Line 19 repeats the comment from line 18.
Apply this diff:
     # input/output tensors
-    # input/output tensors
     data_in = ctxt.lookup(operatorRepresentation["data_in"])
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py
(1 hunks)Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py
(1 hunks)TargetLibraries/Generic/inc/kernel/BatchNorm.h
(1 hunks)TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h
(1 hunks)TargetLibraries/Generic/inc/kernel/MaxPool1d.h
(1 hunks)TargetLibraries/Generic/src/BatchNorm_fp32.c
(1 hunks)TargetLibraries/Generic/src/ConvTranspose1d_fp32.c
(1 hunks)TargetLibraries/Generic/src/MaxPool1D_fp32.c
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- TargetLibraries/Generic/inc/kernel/MaxPool1d.h
- TargetLibraries/Generic/src/MaxPool1D_fp32.c
- TargetLibraries/Generic/src/BatchNorm_fp32.c
🧰 Additional context used
🧬 Code graph analysis (4)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
Deeploy/DeeployTypes.py (1)
NodeTemplate
(87-229)
TargetLibraries/Generic/inc/kernel/ConvTranspose1d_fp32.h (1)
TargetLibraries/Generic/src/ConvTranspose1d_fp32.c (1)
ConvTranspose1d_fp32
(7-50)
TargetLibraries/Generic/inc/kernel/BatchNorm.h (1)
TargetLibraries/Generic/src/BatchNorm_fp32.c (1)
BatchNorm_fp32
(7-28)
Deeploy/Targets/Generic/Templates/ConvTransposeTemplate.py (3)
Deeploy/DeeployTypes.py (4)
NetworkContext
(564-1076)NodeTemplate
(87-229)alignToContext
(119-139)lookup
(776-808)Deeploy/Targets/Generic/Templates/ConvTemplate.py (1)
alignToContext
(15-28)Deeploy/AbstractDataTypes.py (1)
nLevels
(194-195)
🔇 Additional comments (2)
Deeploy/Targets/Generic/Templates/BatchNormalizationTemplate.py (1)
7-15
: LGTM! Template follows established patterns. The BatchNormalization template is correctly structured and follows the same pattern as other kernel templates in the codebase. The parameter mapping to BatchNorm_fp32 is accurate.
TargetLibraries/Generic/src/ConvTranspose1d_fp32.c (1)
7-50
: LGTM! Transposed convolution implementation is correct. The implementation correctly handles the following (a short sketch of the same scheme appears after this list):
- Output initialization to zero
- Proper tensor indexing for [C_in, W_in] input, [C_in, C_out, K] weights, and [C_out, W_out] output
- Transposed convolution formula w_out = w_in * stride + k
- Bounds checking for output width
- Optional bias addition per output channel
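To make the bullets above concrete, here is a minimal sketch that follows the same layout and formula; it is illustrative only, uses plain float instead of float32_t, and is not the PR's implementation (that lives in TargetLibraries/Generic/src/ConvTranspose1d_fp32.c, lines 7-50):

#include <stdbool.h>
#include <stdint.h>

// Sketch of a scatter-style 1D transposed convolution:
// input [C_in, W_in], weight [C_in, C_out, K], output [C_out, W_out].
void conv_transpose1d_fp32_sketch(const float *input, uint32_t C_in, uint32_t W_in,
                                  const float *weight, uint32_t C_out, uint32_t K,
                                  uint32_t stride, const float *bias, bool has_bias,
                                  float *output, uint32_t W_out) {
  // Output initialization to zero
  for (uint32_t i = 0; i < C_out * W_out; ++i) {
    output[i] = 0.0f;
  }
  // Scatter each input sample into the output at w_out = w_in * stride + k
  for (uint32_t ci = 0; ci < C_in; ++ci) {
    for (uint32_t w = 0; w < W_in; ++w) {
      for (uint32_t co = 0; co < C_out; ++co) {
        for (uint32_t k = 0; k < K; ++k) {
          uint32_t w_out = w * stride + k;
          if (w_out < W_out) {  // bounds check on the output width
            output[co * W_out + w_out] += input[ci * W_in + w] * weight[(ci * C_out + co) * K + k];
          }
        }
      }
    }
  }
  // Optional bias, added once per output channel position
  if (has_bias) {
    for (uint32_t co = 0; co < C_out; ++co) {
      for (uint32_t w = 0; w < W_out; ++w) {
        output[co * W_out + w] += bias[co];
      }
    }
  }
}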
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
Deeploy/Targets/Generic/Parsers.py (1)
2193-2211
: Inconsistent has_bias representation between 1D and 2D parsers. The 1D parser uses integers (0/1) for has_bias while the 2D parser at lines 2282-2287 uses strings ("true"/"false"). This inconsistency may cause issues in template rendering or binding logic. Consider standardizing to match the 2D parser:
 if len(node.inputs) > 2:
     inputs.append("bias")
-    self.operatorRepresentation["has_bias"] = 1
+    self.operatorRepresentation["has_bias"] = "true"
 else:
-    self.operatorRepresentation["has_bias"] = 0
+    self.operatorRepresentation["has_bias"] = "false"
     self.operatorRepresentation["bias"] = "NULL"
♻️ Duplicate comments (3)
Deeploy/Targets/Generic/Parsers.py (3)
2627-2664
: Respect channels_first parameter and validate input rank. The channels_first parameter is ignored, and the parser unconditionally assumes [N, C, W] layout. This will produce incorrect results for channels-last data and fail on non-3D inputs. Based on learnings.
Apply this diff to validate rank and respect layout:
2692-2742
: Respect channels_first parameter and remove unused variable.The
channels_first
parameter is unused, and dimension extraction assumes channels-first layout. Additionally,kernel_shape_y
is assigned but never used.Based on learnings
Apply this diff to fix the issues:
def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True): # Register buffer names for codegen self.operatorRepresentation['data_in'] = node.inputs[0].name self.operatorRepresentation['weight'] = node.inputs[1].name self.operatorRepresentation['data_out'] = node.outputs[0].name if len(node.inputs) == 3: self.operatorRepresentation['bias'] = node.inputs[2].name self.operatorRepresentation['has_bias'] = True else: self.operatorRepresentation['has_bias'] = False - # Get output shape from context + data_out = ctxt.lookup(node.outputs[0].name) + data_in = ctxt.lookup(node.inputs[0].name) out_shape = data_out.shape + in_shape = data_in.shape + + # Extract dimensions respecting channels_first if len(out_shape) == 3: - self.operatorRepresentation['dim_im_out_x'] = out_shape[2] + if channels_first: + self.operatorRepresentation['ch_im_out'] = out_shape[1] + self.operatorRepresentation['dim_im_out_y'] = out_shape[2] + else: + self.operatorRepresentation['ch_im_out'] = out_shape[2] + self.operatorRepresentation['dim_im_out_y'] = out_shape[1] elif len(out_shape) == 4: - self.operatorRepresentation['dim_im_out_x'] = out_shape[2] - self.operatorRepresentation['dim_im_out_y'] = out_shape[3] + if channels_first: + self.operatorRepresentation['ch_im_out'] = out_shape[1] + self.operatorRepresentation['dim_im_out_x'] = out_shape[2] + self.operatorRepresentation['dim_im_out_y'] = out_shape[3] + else: + self.operatorRepresentation['ch_im_out'] = out_shape[3] + self.operatorRepresentation['dim_im_out_x'] = out_shape[1] + self.operatorRepresentation['dim_im_out_y'] = out_shape[2] stride_x, stride_y = 1, 1 if "strides" in node.attrs: stride_y = node.attrs["strides"][0] stride_x = node.attrs["strides"][1] if len(node.attrs["strides"]) > 1 else stride_y self.operatorRepresentation["stride_y"] = stride_y self.operatorRepresentation["stride_x"] = stride_x - if "kernel_shape" in node.attrs: - kernel_shape = node.attrs["kernel_shape"] - kernel_shape_x = kernel_shape[0] - # For 2D, kernel_shape may have two elements - kernel_shape_y = kernel_shape[1] if len(kernel_shape) > 1 else kernel_shape_x - else: - kernel_shape_x = 1 - kernel_shape_y = 1 - - data_in = ctxt.lookup(node.inputs[0].name) - data_out = ctxt.lookup(node.outputs[0].name) - in_shape = data_in.shape - out_shape = data_out.shape - - self.operatorRepresentation['ch_im_in'] = in_shape[1] - self.operatorRepresentation['dim_im_in_y'] = in_shape[2] - self.operatorRepresentation['ch_im_out'] = out_shape[1] - self.operatorRepresentation['dim_im_out_y'] = out_shape[2] + # Extract input dimensions respecting channels_first + if len(in_shape) == 3: + if channels_first: + self.operatorRepresentation['ch_im_in'] = in_shape[1] + self.operatorRepresentation['dim_im_in_y'] = in_shape[2] + else: + self.operatorRepresentation['ch_im_in'] = in_shape[2] + self.operatorRepresentation['dim_im_in_y'] = in_shape[1] + elif len(in_shape) == 4: + if channels_first: + self.operatorRepresentation['ch_im_in'] = in_shape[1] + else: + self.operatorRepresentation['ch_im_in'] = in_shape[3] self.operatorRepresentation[ 'batchOffsetIn'] = self.operatorRepresentation['ch_im_in'] * self.operatorRepresentation['dim_im_in_y'] self.operatorRepresentation[ 'batchOffsetOut'] = self.operatorRepresentation['ch_im_out'] * self.operatorRepresentation['dim_im_out_y'] return ctxt, True
2744-2790
: Critical: Fix incorrect return values in parseNode and parseNodeCtxt.Two critical bugs will cause invalid configurations to pass and lose context updates:
- Line 2768:
parseNode
returnswellFormed
instead ofret
, allowing invalid 1D configurations to pass even when the 1D constraint checks fail.- Line 2790:
parseNodeCtxt
unconditionally returns(ctxt, True)
, discarding the updatednewCtxt
and ignoring theret
flag. This loses context updates and allows invalid nodes to pass.Based on learnings
Apply this diff to fix both bugs:
def parseNode(self, node: gs.Node) -> bool: # 1D ConvTranspose expects 3D input/output and 3D weight wellFormed = super().parseNode(node) ret = False if wellFormed: ret = all([ # Make sure strides are 2D len(node.attrs['strides']) == 1, len(node.attrs['pads']) == 2, len(node.attrs['dilations']) == 1, ]) if ret: self.operatorRepresentation['kernel_shape'] = node.attrs['kernel_shape'] self.operatorRepresentation['dim_kernel_y'] = int(self.operatorRepresentation['kernel_shape'][0]) self.operatorRepresentation['dilation_y'] = int(self.operatorRepresentation['dilations'][0]) self.operatorRepresentation['padding_y'] = int(self.operatorRepresentation['pads'][0]) self.operatorRepresentation['stride_y'] = int(self.operatorRepresentation['strides'][0]) - return wellFormed + return ret def parseNodeCtxt(self, ctxt: NetworkContext, node: gs.Node, channels_first: bool = True) -> Tuple[NetworkContext, bool]: newCtxt, ret = super().parseNodeCtxt(ctxt, node, channels_first) if ret: data_in = newCtxt.lookup(node.inputs[0].name) data_out = newCtxt.lookup(node.outputs[0].name) in_shape = data_in.shape out_shape = data_out.shape + + # Validate 3D tensors + if len(in_shape) != 3 or len(out_shape) != 3: + return ctxt, False + self.operatorRepresentation['ch_im_in'] = in_shape[1] self.operatorRepresentation['dim_im_in_y'] = in_shape[2] self.operatorRepresentation['ch_im_out'] = out_shape[1] self.operatorRepresentation['dim_im_out_y'] = out_shape[2] self.operatorRepresentation[ "batchOffsetIn"] = self.operatorRepresentation["ch_im_in"] * self.operatorRepresentation["dim_im_in_y"] self.operatorRepresentation["batchOffsetOut"] = self.operatorRepresentation[ "ch_im_out"] * self.operatorRepresentation["dim_im_out_y"] - return ctxt, True + return newCtxt, True + return ctxt, False
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
Deeploy/Targets/Generic/Parsers.py
(6 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
Deeploy/Targets/Generic/Parsers.py (1)
Deeploy/DeeployTypes.py (7)
parseNode
(1089-1104)parseNodeCtxt
(1107-1132)lookup
(776-808)inputs
(2559-2576)NodeParser
(1079-1254)outputs
(2578-2595)NetworkContext
(564-1076)
🪛 Ruff (0.14.1)
Deeploy/Targets/Generic/Parsers.py
346-346: Do not catch blind exception: Exception
(BLE001)
346-346: Local variable e is assigned to but never used. Remove assignment to unused variable e (F841)
2204-2205: Avoid specifying long messages outside the exception class
(TRY003)
2647-2647: Unused method argument: channels_first
(ARG002)
2692-2692: Unused method argument: channels_first
(ARG002)
2725-2725: Local variable kernel_shape_y is assigned to but never used. Remove assignment to unused variable kernel_shape_y (F841)
🔇 Additional comments (2)
Deeploy/Targets/Generic/Parsers.py (2)
224-264
: LGTM: MaxPool1DParser correctly implements 1D pooling. The parser properly validates the 1D constraints (pads length 2, kernel/strides length 1), respects the channels_first parameter when extracting dimensions, and correctly returns the updated context.
1390-1393
: LGTM: Batch offset calculations are correctly added. The batch offset calculations for 1D convolution are properly computed as the product of the channel and spatial dimensions, which is necessary for batch processing; a small illustration follows.
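As a small worked example of why these offsets matter, the sketch below shows how a per-batch offset equal to channels times spatial length advances the input and output pointers between batch iterations; the sizes, names, and signature are made up for illustration and are not taken from the PR:

#include <stdint.h>

// With ch_in=16, w_in=128, ch_out=8, w_out=64 (example values),
// batchOffsetIn = 16 * 128 = 2048 and batchOffsetOut = 8 * 64 = 512 elements.
void run_batched_sketch(const float *in, float *out, uint32_t batch,
                        uint32_t ch_in, uint32_t w_in, uint32_t ch_out, uint32_t w_out) {
  const uint32_t batchOffsetIn = ch_in * w_in;
  const uint32_t batchOffsetOut = ch_out * w_out;
  for (uint32_t n = 0; n < batch; ++n) {
    const float *in_ptr = in + n * batchOffsetIn;   // start of batch n in the input
    float *out_ptr = out + n * batchOffsetOut;      // start of batch n in the output
    // a 1D kernel would be invoked here with in_ptr / out_ptr
    (void)in_ptr;
    (void)out_ptr;
  }
}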
try:
    self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
except Exception as e:
    self.operatorRepresentation['pads'] = node.attrs['pads']
Catch specific exception types instead of blind Exception.
The try-except block catches a generic Exception and assigns it to an unused variable. This can mask unexpected errors and makes debugging harder.
Apply this diff to improve exception handling:
- try:
- self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
- except Exception as e:
- self.operatorRepresentation['pads'] = node.attrs['pads']
+ try:
+ self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
+ except (ValueError, TypeError):
+ self.operatorRepresentation['pads'] = node.attrs['pads']
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try:
    self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
except Exception as e:
    self.operatorRepresentation['pads'] = node.attrs['pads']

try:
    self.operatorRepresentation['pads'] = [int(p) for p in node.attrs['pads']]
except (ValueError, TypeError):
    self.operatorRepresentation['pads'] = node.attrs['pads']
🧰 Tools
🪛 Ruff (0.14.1)
346-346: Do not catch blind exception: Exception
(BLE001)
346-346: Local variable e is assigned to but never used. Remove assignment to unused variable e (F841)
🤖 Prompt for AI Agents
In Deeploy/Targets/Generic/Parsers.py around lines 343 to 347, the try/except
currently catches a broad Exception and binds it to an unused variable; change
it to catch only the expected errors (e.g., KeyError, ValueError, TypeError)
when converting node.attrs['pads'] to ints, or pre-validate that 'pads' exists
and is iterable before conversion; in the except branch, do not swallow other
exceptions—assign the original node.attrs['pads'] value as the fallback and log
or re-raise unexpected errors as appropriate.
… Autoencoder1D test to the CI Pipeline
Adding kernels to support a 1D autoencoder running on a generic platform. For each kernel added, the following files were created or modified: Parser, Binding, Platform, C files, Layer.
Added
Changed
Fixed
PR Merge Checklist
- The PR is rebased on the latest devel commit and pointing to devel.
- The CHANGELOG.md file has been updated.