
Stride validation fails for empty tensor input #398

Open
parthchadha opened this issue Nov 21, 2024 · 2 comments
Labels: mlir-tensorrt (Pull request for the mlir-tensorrt project)

Comments

@parthchadha
Collaborator

For the IR below, we get the following error at runtime:

summary = 'InvalidArgument: InvalidArgument: Input argument 0 validation failed against corresponding function signature arg 0. Reason: InvalidArgument: Runtime stride mismatch. Expected [-9223372036854775808, 1] but received [0, 0]'
module @ins_t_outs_t251_t252_t253_t254_t255_20 {
  func.func @main(%arg0: tensor<3x0xf32> {tensorrt.shape_profile = #tensorrt.shape_profile<min = [3, 0], opt = [3, 0], max = [3, 0]>}) -> (tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>) {
    %c = stablehlo.constant dense<0> : tensor<1xi32>
    %c_0 = stablehlo.constant dense<3> : tensor<i32>
    %c_1 = stablehlo.constant dense<1> : tensor<1xi32>
    %c_2 = stablehlo.constant dense<3> : tensor<1xi32>
    %c_3 = stablehlo.constant dense<0> : tensor<i32>
    %c_4 = stablehlo.constant dense<0> : tensor<1xi32>
    %0 = stablehlo.concatenate %c_2, %c_4, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %c_5 = stablehlo.constant dense<2> : tensor<1xi32>
    %1 = stablehlo.real_dynamic_slice %0, %c_1, %c_5, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %c_6 = stablehlo.constant dense<5> : tensor<1xi32>
    %2 = stablehlo.divide %1, %c_6 : (tensor<?xi32>, tensor<1xi32>) -> tensor<1xi32>
    %3 = stablehlo.multiply %2, %c : tensor<1xi32>
    %4 = stablehlo.concatenate %c, %3, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %5 = stablehlo.real_dynamic_slice %0, %c, %c_1, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %6 = stablehlo.multiply %2, %c_1 : tensor<1xi32>
    %7 = stablehlo.concatenate %5, %6, dim = 0 : (tensor<?xi32>, tensor<1xi32>) -> tensor<2xi32>
    %8 = stablehlo.concatenate %c_1, %c_1, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %9 = stablehlo.real_dynamic_slice %arg0, %4, %7, %8 : (tensor<3x0xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>) -> tensor<?x?xf32>
    %10 = stablehlo.multiply %2, %c_1 : tensor<1xi32>
    %11 = stablehlo.concatenate %c, %10, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %12 = stablehlo.real_dynamic_slice %0, %c, %c_1, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %13 = stablehlo.multiply %2, %c_5 : tensor<1xi32>
    %14 = stablehlo.concatenate %12, %13, dim = 0 : (tensor<?xi32>, tensor<1xi32>) -> tensor<2xi32>
    %15 = stablehlo.concatenate %c_1, %c_1, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %16 = stablehlo.real_dynamic_slice %arg0, %11, %14, %15 : (tensor<3x0xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>) -> tensor<?x?xf32>
    %17 = stablehlo.multiply %2, %c_5 : tensor<1xi32>
    %18 = stablehlo.concatenate %c, %17, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %19 = stablehlo.real_dynamic_slice %0, %c, %c_1, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %c_7 = stablehlo.constant dense<3> : tensor<1xi32>
    %20 = stablehlo.multiply %2, %c_7 : tensor<1xi32>
    %21 = stablehlo.concatenate %19, %20, dim = 0 : (tensor<?xi32>, tensor<1xi32>) -> tensor<2xi32>
    %22 = stablehlo.concatenate %c_1, %c_1, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %23 = stablehlo.real_dynamic_slice %arg0, %18, %21, %22 : (tensor<3x0xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>) -> tensor<?x?xf32>
    %24 = stablehlo.multiply %2, %c_7 : tensor<1xi32>
    %25 = stablehlo.concatenate %c, %24, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %26 = stablehlo.real_dynamic_slice %0, %c, %c_1, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %c_8 = stablehlo.constant dense<4> : tensor<1xi32>
    %27 = stablehlo.multiply %2, %c_8 : tensor<1xi32>
    %28 = stablehlo.concatenate %26, %27, dim = 0 : (tensor<?xi32>, tensor<1xi32>) -> tensor<2xi32>
    %29 = stablehlo.concatenate %c_1, %c_1, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %30 = stablehlo.real_dynamic_slice %arg0, %25, %28, %29 : (tensor<3x0xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>) -> tensor<?x?xf32>
    %31 = stablehlo.multiply %2, %c_8 : tensor<1xi32>
    %32 = stablehlo.concatenate %c, %31, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %33 = stablehlo.real_dynamic_slice %0, %c, %c_1, %c_1 : (tensor<2xi32>, tensor<1xi32>, tensor<1xi32>, tensor<1xi32>) -> tensor<?xi32>
    %34 = stablehlo.multiply %2, %c_6 : tensor<1xi32>
    %35 = stablehlo.concatenate %33, %34, dim = 0 : (tensor<?xi32>, tensor<1xi32>) -> tensor<2xi32>
    %36 = stablehlo.concatenate %c_1, %c_1, dim = 0 : (tensor<1xi32>, tensor<1xi32>) -> tensor<2xi32>
    %37 = stablehlo.real_dynamic_slice %arg0, %32, %35, %36 : (tensor<3x0xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>) -> tensor<?x?xf32>
    return %9, %16, %23, %30, %37 : tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>, tensor<?x?xf32>
  }
}
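For context, the expected stride -9223372036854775808 is INT64_MIN, which is presumably the runtime's sentinel for a dynamic/unknown stride, while the received strides [0, 0] come from the empty (3x0) input. Since an empty tensor has no elements, its strides cannot affect addressing, so a validator should accept any strides once some dimension has extent 0. The sketch below is hypothetical (not the actual mlir-tensorrt validation code) and illustrates the behavior the report implies is missing; `strides_match`, `INT64_MIN`, and the sentinel interpretation are all assumptions:

```python
# Hypothetical sketch of stride validation that tolerates empty tensors.
# Assumption: INT64_MIN is the "dynamic stride" sentinel, matching the
# -9223372036854775808 in the error message above.
INT64_MIN = -(2**63)

def strides_match(expected, received, shape):
    # An empty tensor (any dimension of extent 0) contains no elements,
    # so its strides are irrelevant; accept whatever the caller passed.
    if 0 in shape:
        return True
    for e, r in zip(expected, received):
        if e == INT64_MIN:
            continue  # dynamic: any concrete stride is acceptable
        if e != r:
            return False
    return True

# The failing case from this report: shape (3, 0),
# expected strides [INT64_MIN, 1], received strides [0, 0].
print(strides_match([INT64_MIN, 1], [0, 0], (3, 0)))  # True
```

Under this interpretation, the reported failure suggests the empty-tensor (or zero-extent dimension) case is checked before, or instead of, the early-accept above.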
@parthchadha parthchadha added the mlir-tensorrt Pull request for the mlir-tensorrt project label Nov 21, 2024
@shelkesagar29 shelkesagar29 self-assigned this Nov 22, 2024
@pranavm-nvidia pranavm-nvidia changed the title Bug: Stride validation fails for empty tensor input Stride validation fails for empty tensor input Nov 26, 2024
@akhilg-nv
Collaborator

Fixed last week

@pranavm-nvidia
Collaborator

This seems to be failing again with the TRT dialect. Re-opening so we can investigate.

@pranavm-nvidia pranavm-nvidia reopened this Mar 4, 2025
4 participants