
IGNITE-23192 Sql. Arithmetic operations failed with "out of range" exception #4422

Open
wants to merge 27 commits into base: main

Conversation

@zstan zstan (Contributor) commented Sep 19, 2024

Thank you for submitting the pull request.

To streamline the review process of the patch and ensure better code quality
we ask both an author and a reviewer to verify the following:

The Review Checklist

  • Formal criteria: TC status, codestyle, mandatory documentation. Also make sure to complete the following:
    - There is a single JIRA ticket related to the pull request.
    - The web-link to the pull request is attached to the JIRA ticket.
    - The JIRA ticket has the Patch Available state.
    - The description of the JIRA ticket explains WHAT was made, WHY and HOW.
    - The pull request title is treated as the final commit message. The following pattern must be used: IGNITE-XXXX Change summary, where XXXX is the number of the JIRA issue.
  • Design: new code conforms with the design principles of the components it is added to.
  • Patch quality: patch cannot be split into smaller pieces, its size must be reasonable.
  • Code quality: code is clean and readable, necessary developer documentation is added if needed.
  • Tests code quality: test set covers positive/negative scenarios, happy/edge cases. Tests are effective in terms of execution time and resources.

Notes

@@ -519,7 +519,7 @@ public async Task TestCustomDecimalScale()
await using var resultSet = await Client.Sql.ExecuteAsync(null, "select cast((10 / ?) as decimal(20, 5))", 3m);
IIgniteTuple res = await resultSet.SingleAsync();

- Assert.AreEqual(3.33333m, res[0]);
+ Assert.AreEqual(3.3m, res[0]);

Contributor:

It should be either 3.00000 or 3.33333, but not 3.3.

Contributor Author:

fixed

@@ -217,7 +217,7 @@ void testExecuteAsyncDdlDml() {
assertEquals(10, rows.size());
assertEquals("hello 1", rows.get(1).stringValue(0));
assertEquals(1, rows.get(1).intValue(1));
- assertEquals(2, rows.get(1).intValue(2));
+ assertEquals(2, rows.get(1).longValue(2));

Contributor:

Suggested change:
- assertEquals(2, rows.get(1).longValue(2));
+ assertEquals(2L, rows.get(1).longValue(2));

?

Contributor Author:

done

@@ -519,7 +519,7 @@ public async Task TestCustomDecimalScale()
await using var resultSet = await Client.Sql.ExecuteAsync(null, "select cast((10 / ?) as decimal(20, 5))", 3m);
IIgniteTuple res = await resultSet.SingleAsync();

- Assert.AreEqual(3.33333m, res[0]);
+ Assert.AreEqual(3.3m, res[0]);

Contributor Author:

will be fixed soon

}

private static Stream<Arguments> decimalOverflows() {
return Stream.of(
// BIGINT
arguments(SqlTypeName.BIGINT, "SELECT 9223372036854775807 + 1", EMPTY_PARAM),

Contributor:

Why did you decide to remove such cases? They look like valid test cases.

// arguments("SELECT -32768::SMALLINT/-1::SMALLINT", "32768"),
// arguments("SELECT -128::TINYINT/-1::TINYINT", "128")

arguments("SELECT CAST(-? AS BIGINT)/-1", "9223372036854775808", "9223372036854775808"),

Contributor:

Why doesn't the test fail with overflow?

Contributor Author:

I don't understand why it needs to fail. Overflow has to be checked against a type, and I don't see any type here; did I miss something?

Contributor:

Most of the cases here should fail due to overflow; as it is now, it looks incorrect.
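
For context, a plain-Java illustration (not engine code) of the silent wrap-around behind these cases; it shows why CAST(-? AS BIGINT)/-1 with the parameter "9223372036854775808" is expected to raise an out-of-range error rather than return a value:

public class Int64OverflowDemo {
    public static void main(String[] args) {
        // CAST(-? AS BIGINT) with ? = "9223372036854775808" yields the minimum 64-bit value.
        long min = Long.MIN_VALUE; // -9223372036854775808

        // Plain Java division wraps around silently: 9223372036854775808 does not fit
        // into a signed 64-bit value, so the result stays at Long.MIN_VALUE.
        System.out.println(min / -1); // -9223372036854775808

        // An exact operation surfaces the overflow, which is the behaviour the reviewer
        // expects from the SQL engine for these cases ("out of range" error).
        System.out.println(Math.negateExact(min)); // throws ArithmeticException: long overflow
    }
}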


/** Returns the multiply of its arguments, extending result type for overflow avoidance. */
public static BigDecimal multiply(BigDecimal x, BigDecimal y) {
int maxPrecision = Commons.cluster().getTypeFactory().getTypeSystem().getMaxPrecision(SqlTypeName.DECIMAL);

Contributor:

The standard (see 6.29) says the following for this case:
The precision of the result of multiplication is implementation-defined, and the scale is S1 + S2.
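
For reference, a minimal sketch of a result-type derivation that follows that rule; the helper name and the precision-capping choice are illustrative assumptions, not the patch's actual code:

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.type.SqlTypeName;

final class DecimalMultiplyTypeSketch {
    /** Result scale follows the standard (s1 + s2); the precision cap is implementation-defined. */
    static RelDataType deriveMultiplyType(RelDataTypeFactory typeFactory, RelDataType t1, RelDataType t2) {
        int maxPrecision = typeFactory.getTypeSystem().getMaxPrecision(SqlTypeName.DECIMAL);

        int scale = t1.getScale() + t2.getScale();
        int precision = Math.min(t1.getPrecision() + t2.getPrecision(), maxPrecision);

        return typeFactory.createSqlType(SqlTypeName.DECIMAL, precision, scale);
    }
}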

@korlov42 korlov42 (Contributor) left a comment:

Let's add more tests to the patch:

  1. planning tests to make sure the derived result type for math operations is as expected for every type pair (you may extend NumericBinaryOperationsTypeCoercionTest for this)
  2. tests for the changes in TypeSystem
  3. execution tests covering all math operations for all type pairs

if (isIntType(type1) && isIntType(type2)) {
boolean typesNullability = type1.isNullable() || type2.isNullable();

switch (type1.getSqlTypeName()) {

Contributor:

Instead of copy-pasting all these switch cases, it's better to do the following (a rough sketch appears after the list):

  1. get the least restrictive type among the arguments (typeFactory.leastRestrictive(List.of(type1, type2)))
  2. derive the result type as the next wider type from the type obtained in step 1
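
A rough sketch of that approach; nextWiderType is a hypothetical helper shown only to illustrate the idea, not the patch's actual code:

import java.util.List;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.sql.type.SqlTypeName;

final class IntMathResultTypeSketch {
    static RelDataType deriveIntMathType(RelDataTypeFactory typeFactory, RelDataType type1, RelDataType type2) {
        boolean nullable = type1.isNullable() || type2.isNullable();

        // 1. take the least restrictive type among the arguments
        RelDataType least = typeFactory.leastRestrictive(List.of(type1, type2));

        // 2. bump it to the next wider type so the result cannot overflow
        //    (real code would create DECIMAL with max precision and scale 0 for the BIGINT case)
        RelDataType widened = typeFactory.createSqlType(nextWiderType(least.getSqlTypeName()));

        return typeFactory.createTypeWithNullability(widened, nullable);
    }

    /** Hypothetical helper: TINYINT -> SMALLINT -> INTEGER -> BIGINT -> DECIMAL. */
    private static SqlTypeName nextWiderType(SqlTypeName typeName) {
        switch (typeName) {
            case TINYINT: return SqlTypeName.SMALLINT;
            case SMALLINT: return SqlTypeName.INTEGER;
            case INTEGER: return SqlTypeName.BIGINT;
            default: return SqlTypeName.DECIMAL;
        }
    }
}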

Contributor Author:

done

Contributor:

Changes in the type system must be covered by the IgniteTypeSystemTest test.

Contributor Author:

Added a test: IgniteTypeSystemTest#deriveType.

ptupitsyn and others added 7 commits September 23, 2024 17:39
Pass the original address to `NettyClientConnection` to avoid using `channel.remoteAddress()`, which can return null on disconnect. We want to have the address for logging and debugging purposes even after disconnect.
…pirationTime may unbox a null value and cause NPE (apache#4425)

@zstan zstan (Contributor Author) commented Sep 23, 2024

@korlov42

  1. I added tests to IgniteTypeSystemTest
  2. Fixed the IgniteTypeSystem issue
  3. Execution tests are already implemented in IGNITE-23141 and are just waiting for this issue to be resolved

@korlov42 korlov42 (Contributor) left a comment:

  1. I added tests to IgniteTypeSystemTest
  2. Fixed the IgniteTypeSystem issue

Tests of IgniteTypeSystem were point 2 of my list. We need planner tests as well.

@@ -519,7 +519,7 @@ public async Task TestCustomDecimalScale()
await using var resultSet = await Client.Sql.ExecuteAsync(null, "select cast((10 / ?) as decimal(20, 5))", 3m);
IIgniteTuple res = await resultSet.SingleAsync();

- Assert.AreEqual(3.33333m, res[0]);
+ Assert.AreEqual(3.00000m, res[0]);

Contributor:

can you explain why the result has changed?

Contributor Author:

Yes, I can. Previously we called SqlFunctions#divide(), and its implementation differs a bit from the current one:
calcite: b0.divide(b1, MathContext.DECIMAL64)
current: x.divide(y, RoundingMode.HALF_DOWN)
I could change it to match Calcite, but the test "test_correlated_any_all.test" would then fail, and I don't consider this a hack on my side. Check: we create a MinMaxAccumulator with a RelDataType as a parameter, but even if the scale of that type is 0, the input data can still contain fractional values, which brings errors.
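
For illustration, a plain BigDecimal sketch of the two rounding strategies above, assuming scale-0 operands; it shows why the cast to DECIMAL(20, 5) now yields 3.00000 instead of 3.33333:

import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class DivideSemanticsDemo {
    public static void main(String[] args) {
        BigDecimal ten = new BigDecimal("10"); // scale 0
        BigDecimal three = new BigDecimal("3");

        // Calcite's SqlFunctions#divide style: 16 significant digits (DECIMAL64).
        System.out.println(ten.divide(three, MathContext.DECIMAL64)); // 3.333333333333333

        // Current implementation style: the result keeps the dividend's scale (0 here),
        // so the quotient is rounded to 3 before the outer CAST appends the five zeros.
        System.out.println(ten.divide(three, RoundingMode.HALF_DOWN)); // 3
    }
}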

arguments(SqlTypeName.INTEGER, "SELECT -2147483648/-1", EMPTY_PARAM),
arguments(SqlTypeName.INTEGER, "select CAST(9223372036854775807.5 + 9223372036854775807.5 AS INTEGER)",
EMPTY_PARAM),
arguments(SqlTypeName.INTEGER, "SELECT -CAST(? AS INTEGER)", -2147483648),

// SMALLINT

Contributor:

Does it make sense to have similar test cases for all 4 types?

Contributor Author:

The derived type for the SMALLINT and TINYINT tests without explicit casts is INTEGER, so no overflow will be raised here.
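
A Java analogy of the same effect (illustration only, not engine code): small integer operands are widened before the arithmetic, so the would-be overflow value simply fits into the wider result type.

public class SmallTypePromotionDemo {
    public static void main(String[] args) {
        short minShort = Short.MIN_VALUE; // -32768

        // Java promotes short operands to int before the division, just like the planner
        // derives INTEGER for the uncasted SMALLINT/TINYINT expressions above,
        // so 32768 fits into the result type and no overflow occurs.
        System.out.println(minShort / (short) -1); // 32768
    }
}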


@ParameterizedTest
@MethodSource("decimalOpTypeExtension")
public void testCalcOpDynParamOverflow(String expr, String expect, Object param) {

Contributor:

why did you name this test 'overflow' when no overflow is expected?

Contributor Author:

fixed

arguments("SELECT CAST(-? AS BIGINT) * -1", "9223372036854775808", "9223372036854775808"),
arguments("SELECT CAST(-? AS INTEGER) * -1", "2147483648", "2147483648"),
arguments("SELECT CAST(-? AS SMALLINT) * -1", "32768", "32768"),
arguments("SELECT CAST(-? AS TINYINT) * -1", "128", "128")

Contributor:

Does it make sense to also add test cases that don't cast from a string?

Contributor Author:

Other tests were moved into "cast_to_integer.test".

return typeFactory.createTypeWithNullability(
typeFactory.createSqlType(
SqlTypeName.DECIMAL,
typeFactory.getTypeSystem().getMaxPrecision(SqlTypeName.DECIMAL),

Contributor:

Does it make sense to use a minimal yet sufficient precision rather than the maximum possible value?

Contributor Author:

I tried changing it and some tests fail, for example "select1.test": if it is not MaxPrecision(SqlTypeName.DECIMAL), results come back with a non-zero scale ("177.00000" vs "177"), which is not expected. I couldn't find the cause in a short time; probably we need an additional issue here, wdyt?

Contributor:

results come back with a non-zero scale ("177.00000" vs "177"), which is not expected

Why do you say "it's not expected"? It's literally what you've changed in this patch :) Before this patch, a chain of math operations resulted in the least restrictive type among the operands; now the result type is bumped every time. Given an expression like (a+b+c+d+e)/5 (the first failed statement from the mentioned script), before this patch the final operation was (int) / (int), which results in another int. Now it's (decimal) / (int), and to know the resulting type you should check org.apache.calcite.rel.type.RelDataTypeSystem#deriveDecimalDivideType.
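
To see what that method yields for a (decimal) / (int) pair, here is a standalone sketch against Calcite's default type system; Ignite plugs in its own RelDataTypeSystem, so the exact precision and scale there may differ:

import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;

public class DeriveDivideTypeDemo {
    public static void main(String[] args) {
        SqlTypeFactoryImpl typeFactory = new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);

        // After the patch the sum (a+b+c+d+e) is already a DECIMAL, so the final
        // division is (decimal) / (int) and goes through deriveDecimalDivideType.
        RelDataType decimalSum = typeFactory.createSqlType(SqlTypeName.DECIMAL, 13, 0);
        RelDataType intDivisor = typeFactory.createSqlType(SqlTypeName.INTEGER);

        RelDataType result = typeFactory.getTypeSystem()
                .deriveDecimalDivideType(typeFactory, decimalSum, intDivisor);

        // Prints a DECIMAL with a non-zero scale under the default rules,
        // which is where results like "177.00000" instead of "177" come from.
        System.out.println(result);
    }
}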

@zstan zstan (Contributor Author) commented Sep 25, 2024

@korlov42 planner tests extended, check NumericBinaryOperationsTypeCoercionTest -> mathResultMatcher

Arguments.of(NativeTypes.INT32, NativeTypes.INT64, NativeTypes.INT64),

Arguments.of(NativeTypes.INT64, NativeTypes.INT8,
NativeTypes.decimalOf(typeSystem.getMaxPrecision(SqlTypeName.DECIMAL), 0)),

Contributor:

Let's introduce a constant for DECIMAL_MAX_0.

}


private static Stream<Arguments> deriveMultTypeArguments() {

Contributor:

The arguments for multiplication and addition seem to be identical. Does it make sense to leave only one set of arguments?

