LLVM/Test: Add vectorizing testcases for fmaximumnum and fminimumnum #133690

Merged: 2 commits merged into llvm:main on Apr 1, 2025

Conversation

@wzssyqa (Contributor) commented Mar 31, 2025

Vectorization of fmaximumnum and fminimumnum is not supported yet. Let's add test cases for them now; we will update the tests once support is added.
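For reference, every loop in the test below currently keeps one scalar intrinsic call per iteration; once the loop vectorizer handles these intrinsics, the checks should move to the vector overloads, roughly as sketched here (an illustrative sketch only; the operand names, vectorization factor, and any masking depend on the target):

  ; current per-iteration scalar form
  %r = tail call float @llvm.minimumnum.f32(float %a, float %b)
  ; hypothetical vectorized form once supported (VF = 4 assumed)
  %vr = call <4 x float> @llvm.minimumnum.v4f32(<4 x float> %va, <4 x float> %vb)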

@llvmbot (Member) commented Mar 31, 2025

@llvm/pr-subscribers-llvm-transforms

Author: YunQiang Su (wzssyqa)

Changes

Vectorization of fmaximumnum and fminimumnum is not supported yet. Let's add test cases for them now; we will update the tests once support is added.


Patch is 29.32 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/133690.diff

1 file affected:

  • (added) llvm/test/Transforms/LoopVectorize/fminimumnum.ll (+489)
diff --git a/llvm/test/Transforms/LoopVectorize/fminimumnum.ll b/llvm/test/Transforms/LoopVectorize/fminimumnum.ll
new file mode 100644
index 0000000000000..66ad9e9a0e5dd
--- /dev/null
+++ b/llvm/test/Transforms/LoopVectorize/fminimumnum.ll
@@ -0,0 +1,489 @@
+; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 5
+; FIXME: fmaximumnum/fminimumnum have no vectorizing support yet.
+; RUN: opt --passes=loop-vectorize --mtriple=riscv64 -mattr="+zvfh,+v" -S < %s | FileCheck %s --check-prefix=RV64
+; RUN: opt --passes=loop-vectorize --mtriple=aarch64 -mattr="+neon" -S < %s | FileCheck %s --check-prefix=ARM64
+; RUN: opt --passes=loop-vectorize --mtriple=x86_64 -S < %s | FileCheck %s --check-prefix=X64
+
+@input1_f32 = global [4096 x float] zeroinitializer, align 4
+@input2_f32 = global [4096 x float] zeroinitializer, align 4
+@output_f32 = global [4096 x float] zeroinitializer, align 4
+@input1_f64 = global [4096 x double] zeroinitializer, align 8
+@input2_f64 = global [4096 x double] zeroinitializer, align 8
+@output_f64 = global [4096 x double] zeroinitializer, align 8
+@input1_f16 = global [4096 x half] zeroinitializer, align 2
+@input2_f16 = global [4096 x half] zeroinitializer, align 2
+@output_f16 = global [4096 x half] zeroinitializer, align 2
+
+define void @f32min()  {
+; RV64-LABEL: define void @f32min(
+; RV64-SAME: ) #[[ATTR0:[0-9]+]] {
+; RV64-NEXT:  [[ENTRY:.*]]:
+; RV64-NEXT:    br label %[[FOR_BODY:.*]]
+; RV64:       [[FOR_COND_CLEANUP:.*]]:
+; RV64-NEXT:    ret void
+; RV64:       [[FOR_BODY]]:
+; RV64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; RV64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP14:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; RV64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP15:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; RV64-NEXT:    [[TMP16:%.*]] = tail call float @llvm.minimumnum.f32(float [[TMP14]], float [[TMP15]])
+; RV64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    store float [[TMP16]], ptr [[ARRAYIDX4]], align 4
+; RV64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; RV64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; RV64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; ARM64-LABEL: define void @f32min(
+; ARM64-SAME: ) #[[ATTR0:[0-9]+]] {
+; ARM64-NEXT:  [[ENTRY:.*]]:
+; ARM64-NEXT:    br label %[[FOR_BODY:.*]]
+; ARM64:       [[FOR_COND_CLEANUP:.*]]:
+; ARM64-NEXT:    ret void
+; ARM64:       [[FOR_BODY]]:
+; ARM64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; ARM64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP12:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; ARM64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP13:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; ARM64-NEXT:    [[TMP14:%.*]] = tail call float @llvm.minimumnum.f32(float [[TMP12]], float [[TMP13]])
+; ARM64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    store float [[TMP14]], ptr [[ARRAYIDX4]], align 4
+; ARM64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; ARM64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; ARM64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; X64-LABEL: define void @f32min() {
+; X64-NEXT:  [[ENTRY:.*]]:
+; X64-NEXT:    br label %[[FOR_BODY:.*]]
+; X64:       [[FOR_COND_CLEANUP:.*]]:
+; X64-NEXT:    ret void
+; X64:       [[FOR_BODY]]:
+; X64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; X64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP12:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; X64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP13:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; X64-NEXT:    [[TMP14:%.*]] = tail call float @llvm.minimumnum.f32(float [[TMP12]], float [[TMP13]])
+; X64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    store float [[TMP14]], ptr [[ARRAYIDX4]], align 4
+; X64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; X64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; X64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+entry:
+  br label %for.body
+
+for.cond.cleanup:                                 ; preds = %for.body
+  ret void
+
+for.body:                                         ; preds = %entry, %for.body
+  %indvars.iv = phi i64 [ 0, %entry ], [ %indvars.iv.next, %for.body ]
+  %arrayidx = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 %indvars.iv
+  %0 = load float, ptr %arrayidx, align 4
+  %arrayidx2 = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 %indvars.iv
+  %1 = load float, ptr %arrayidx2, align 4
+  %2 = tail call float @llvm.minimumnum.f32(float %0, float %1)
+  %arrayidx4 = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 %indvars.iv
+  store float %2, ptr %arrayidx4, align 4
+  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+  %exitcond.not = icmp eq i64 %indvars.iv.next, 4096
+  br i1 %exitcond.not, label %for.cond.cleanup, label %for.body
+}
+
+declare float @llvm.minimumnum.f32(float, float)
+
+define void @f32max()  {
+; RV64-LABEL: define void @f32max(
+; RV64-SAME: ) #[[ATTR0]] {
+; RV64-NEXT:  [[ENTRY:.*]]:
+; RV64-NEXT:    br label %[[FOR_BODY:.*]]
+; RV64:       [[FOR_COND_CLEANUP:.*]]:
+; RV64-NEXT:    ret void
+; RV64:       [[FOR_BODY]]:
+; RV64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; RV64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP14:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; RV64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP15:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; RV64-NEXT:    [[TMP16:%.*]] = tail call float @llvm.maximumnum.f32(float [[TMP14]], float [[TMP15]])
+; RV64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    store float [[TMP16]], ptr [[ARRAYIDX4]], align 4
+; RV64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; RV64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; RV64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; ARM64-LABEL: define void @f32max(
+; ARM64-SAME: ) #[[ATTR0]] {
+; ARM64-NEXT:  [[ENTRY:.*]]:
+; ARM64-NEXT:    br label %[[FOR_BODY:.*]]
+; ARM64:       [[FOR_COND_CLEANUP:.*]]:
+; ARM64-NEXT:    ret void
+; ARM64:       [[FOR_BODY]]:
+; ARM64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; ARM64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP12:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; ARM64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP13:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; ARM64-NEXT:    [[TMP14:%.*]] = tail call float @llvm.maximumnum.f32(float [[TMP12]], float [[TMP13]])
+; ARM64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    store float [[TMP14]], ptr [[ARRAYIDX4]], align 4
+; ARM64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; ARM64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; ARM64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; X64-LABEL: define void @f32max() {
+; X64-NEXT:  [[ENTRY:.*]]:
+; X64-NEXT:    br label %[[FOR_BODY:.*]]
+; X64:       [[FOR_COND_CLEANUP:.*]]:
+; X64-NEXT:    ret void
+; X64:       [[FOR_BODY]]:
+; X64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; X64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP12:%.*]] = load float, ptr [[ARRAYIDX]], align 4
+; X64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP13:%.*]] = load float, ptr [[ARRAYIDX2]], align 4
+; X64-NEXT:    [[TMP14:%.*]] = tail call float @llvm.maximumnum.f32(float [[TMP12]], float [[TMP13]])
+; X64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    store float [[TMP14]], ptr [[ARRAYIDX4]], align 4
+; X64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; X64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; X64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+entry:
+  br label %for.body
+
+for.cond.cleanup:                                 ; preds = %for.body
+  ret void
+
+for.body:                                         ; preds = %entry, %for.body
+  %indvars.iv = phi i64 [ 0, %entry ], [ %indvars.iv.next, %for.body ]
+  %arrayidx = getelementptr inbounds nuw [4096 x float], ptr @input1_f32, i64 0, i64 %indvars.iv
+  %0 = load float, ptr %arrayidx, align 4
+  %arrayidx2 = getelementptr inbounds nuw [4096 x float], ptr @input2_f32, i64 0, i64 %indvars.iv
+  %1 = load float, ptr %arrayidx2, align 4
+  %2 = tail call float @llvm.maximumnum.f32(float %0, float %1)
+  %arrayidx4 = getelementptr inbounds nuw [4096 x float], ptr @output_f32, i64 0, i64 %indvars.iv
+  store float %2, ptr %arrayidx4, align 4
+  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+  %exitcond.not = icmp eq i64 %indvars.iv.next, 4096
+  br i1 %exitcond.not, label %for.cond.cleanup, label %for.body
+}
+
+declare float @llvm.maximumnum.f32(float, float)
+
+define void @f64min()  {
+; RV64-LABEL: define void @f64min(
+; RV64-SAME: ) #[[ATTR0]] {
+; RV64-NEXT:  [[ENTRY:.*]]:
+; RV64-NEXT:    br label %[[FOR_BODY:.*]]
+; RV64:       [[FOR_COND_CLEANUP:.*]]:
+; RV64-NEXT:    ret void
+; RV64:       [[FOR_BODY]]:
+; RV64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; RV64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP14:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; RV64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP15:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; RV64-NEXT:    [[TMP16:%.*]] = tail call double @llvm.minimumnum.f64(double [[TMP14]], double [[TMP15]])
+; RV64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    store double [[TMP16]], ptr [[ARRAYIDX4]], align 8
+; RV64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; RV64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; RV64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; ARM64-LABEL: define void @f64min(
+; ARM64-SAME: ) #[[ATTR0]] {
+; ARM64-NEXT:  [[ENTRY:.*]]:
+; ARM64-NEXT:    br label %[[FOR_BODY:.*]]
+; ARM64:       [[FOR_COND_CLEANUP:.*]]:
+; ARM64-NEXT:    ret void
+; ARM64:       [[FOR_BODY]]:
+; ARM64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; ARM64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP12:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; ARM64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP13:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; ARM64-NEXT:    [[TMP14:%.*]] = tail call double @llvm.minimumnum.f64(double [[TMP12]], double [[TMP13]])
+; ARM64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    store double [[TMP14]], ptr [[ARRAYIDX4]], align 8
+; ARM64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; ARM64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; ARM64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; X64-LABEL: define void @f64min() {
+; X64-NEXT:  [[ENTRY:.*]]:
+; X64-NEXT:    br label %[[FOR_BODY:.*]]
+; X64:       [[FOR_COND_CLEANUP:.*]]:
+; X64-NEXT:    ret void
+; X64:       [[FOR_BODY]]:
+; X64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; X64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP12:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; X64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP13:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; X64-NEXT:    [[TMP14:%.*]] = tail call double @llvm.minimumnum.f64(double [[TMP12]], double [[TMP13]])
+; X64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    store double [[TMP14]], ptr [[ARRAYIDX4]], align 8
+; X64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; X64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; X64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+entry:
+  br label %for.body
+
+for.cond.cleanup:                                 ; preds = %for.body
+  ret void
+
+for.body:                                         ; preds = %entry, %for.body
+  %indvars.iv = phi i64 [ 0, %entry ], [ %indvars.iv.next, %for.body ]
+  %arrayidx = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 %indvars.iv
+  %0 = load double, ptr %arrayidx, align 8
+  %arrayidx2 = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 %indvars.iv
+  %1 = load double, ptr %arrayidx2, align 8
+  %2 = tail call double @llvm.minimumnum.f64(double %0, double %1)
+  %arrayidx4 = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 %indvars.iv
+  store double %2, ptr %arrayidx4, align 8
+  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+  %exitcond.not = icmp eq i64 %indvars.iv.next, 4096
+  br i1 %exitcond.not, label %for.cond.cleanup, label %for.body
+}
+
+declare double @llvm.minimumnum.f64(double, double)
+
+define void @f64max()  {
+; RV64-LABEL: define void @f64max(
+; RV64-SAME: ) #[[ATTR0]] {
+; RV64-NEXT:  [[ENTRY:.*]]:
+; RV64-NEXT:    br label %[[FOR_BODY:.*]]
+; RV64:       [[FOR_COND_CLEANUP:.*]]:
+; RV64-NEXT:    ret void
+; RV64:       [[FOR_BODY]]:
+; RV64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; RV64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP14:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; RV64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    [[TMP15:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; RV64-NEXT:    [[TMP16:%.*]] = tail call double @llvm.maximumnum.f64(double [[TMP14]], double [[TMP15]])
+; RV64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; RV64-NEXT:    store double [[TMP16]], ptr [[ARRAYIDX4]], align 8
+; RV64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; RV64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; RV64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; ARM64-LABEL: define void @f64max(
+; ARM64-SAME: ) #[[ATTR0]] {
+; ARM64-NEXT:  [[ENTRY:.*]]:
+; ARM64-NEXT:    br label %[[FOR_BODY:.*]]
+; ARM64:       [[FOR_COND_CLEANUP:.*]]:
+; ARM64-NEXT:    ret void
+; ARM64:       [[FOR_BODY]]:
+; ARM64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; ARM64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP12:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; ARM64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    [[TMP13:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; ARM64-NEXT:    [[TMP14:%.*]] = tail call double @llvm.maximumnum.f64(double [[TMP12]], double [[TMP13]])
+; ARM64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; ARM64-NEXT:    store double [[TMP14]], ptr [[ARRAYIDX4]], align 8
+; ARM64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; ARM64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; ARM64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+; X64-LABEL: define void @f64max() {
+; X64-NEXT:  [[ENTRY:.*]]:
+; X64-NEXT:    br label %[[FOR_BODY:.*]]
+; X64:       [[FOR_COND_CLEANUP:.*]]:
+; X64-NEXT:    ret void
+; X64:       [[FOR_BODY]]:
+; X64-NEXT:    [[INDVARS_IV:%.*]] = phi i64 [ 0, %[[ENTRY]] ], [ [[INDVARS_IV_NEXT:%.*]], %[[FOR_BODY]] ]
+; X64-NEXT:    [[ARRAYIDX:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP12:%.*]] = load double, ptr [[ARRAYIDX]], align 8
+; X64-NEXT:    [[ARRAYIDX2:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    [[TMP13:%.*]] = load double, ptr [[ARRAYIDX2]], align 8
+; X64-NEXT:    [[TMP14:%.*]] = tail call double @llvm.maximumnum.f64(double [[TMP12]], double [[TMP13]])
+; X64-NEXT:    [[ARRAYIDX4:%.*]] = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 [[INDVARS_IV]]
+; X64-NEXT:    store double [[TMP14]], ptr [[ARRAYIDX4]], align 8
+; X64-NEXT:    [[INDVARS_IV_NEXT]] = add nuw nsw i64 [[INDVARS_IV]], 1
+; X64-NEXT:    [[EXITCOND_NOT:%.*]] = icmp eq i64 [[INDVARS_IV_NEXT]], 4096
+; X64-NEXT:    br i1 [[EXITCOND_NOT]], label %[[FOR_COND_CLEANUP]], label %[[FOR_BODY]]
+;
+entry:
+  br label %for.body
+
+for.cond.cleanup:                                 ; preds = %for.body
+  ret void
+
+for.body:                                         ; preds = %entry, %for.body
+  %indvars.iv = phi i64 [ 0, %entry ], [ %indvars.iv.next, %for.body ]
+  %arrayidx = getelementptr inbounds nuw [4096 x double], ptr @input1_f64, i64 0, i64 %indvars.iv
+  %0 = load double, ptr %arrayidx, align 8
+  %arrayidx2 = getelementptr inbounds nuw [4096 x double], ptr @input2_f64, i64 0, i64 %indvars.iv
+  %1 = load double, ptr %arrayidx2, align 8
+  %2 = tail call double @llvm.maximumnum.f64(double %0, double %1)
+  %arrayidx4 = getelementptr inbounds nuw [4096 x double], ptr @output_f64, i64 0, i64 %indvars.iv
+  store double %2, ptr %arrayidx4, align 8
+  %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+  %exitcond.not = icmp eq i64 %indvars.iv.next, 4096
+  br i1 %exitcond.not, label %for.cond.cleanup, label %f...
[truncated]

@wzssyqa merged commit de053bb into llvm:main on Apr 1, 2025
12 checks passed
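Once support lands and the FIXME in the test header is resolved, the autogenerated checks can be refreshed with the UTC script the test names (a typical invocation, assuming an in-tree build under build/):

  python3 llvm/utils/update_test_checks.py --opt-binary=build/bin/opt \
      llvm/test/Transforms/LoopVectorize/fminimumnum.ll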
@arsenm (Contributor) left a comment


Also test the SLP vectorizer.
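A straight-line kernel in roughly this shape would seed the SLP vectorizer. This is a hypothetical sketch (the function name, RUN line, and operand layout are illustrative), not the test actually added in the follow-up referenced below:

; RUN: opt --passes=slp-vectorizer --mtriple=riscv64 -mattr=+v -S < %s | FileCheck %s
define void @fmin32x2(ptr %in1, ptr %in2, ptr %out) {
entry:
  ; two independent minimumnum ops on adjacent elements: an SLP seed
  %a0 = load float, ptr %in1, align 4
  %b0 = load float, ptr %in2, align 4
  %r0 = tail call float @llvm.minimumnum.f32(float %a0, float %b0)
  store float %r0, ptr %out, align 4
  %in1.1 = getelementptr inbounds float, ptr %in1, i64 1
  %in2.1 = getelementptr inbounds float, ptr %in2, i64 1
  %out.1 = getelementptr inbounds float, ptr %out, i64 1
  %a1 = load float, ptr %in1.1, align 4
  %b1 = load float, ptr %in2.1, align 4
  %r1 = tail call float @llvm.minimumnum.f32(float %a1, float %b1)
  store float %r1, ptr %out.1, align 4
  ret void
}

declare float @llvm.minimumnum.f32(float, float)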

@wzssyqa added a commit that referenced this pull request on Apr 1, 2025
@wzssyqa (Contributor, Author) commented Apr 1, 2025

#133843

Comment on lines +7 to +15
@input1_f32 = global [4096 x float] zeroinitializer, align 4
@input2_f32 = global [4096 x float] zeroinitializer, align 4
@output_f32 = global [4096 x float] zeroinitializer, align 4
@input1_f64 = global [4096 x double] zeroinitializer, align 8
@input2_f64 = global [4096 x double] zeroinitializer, align 8
@output_f64 = global [4096 x double] zeroinitializer, align 8
@input1_f16 = global [4096 x half] zeroinitializer, align 2
@input2_f16 = global [4096 x half] zeroinitializer, align 2
@output_f16 = global [4096 x half] zeroinitializer, align 2

Simpler to just pass pointers to the functions?
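For comparison, a pointer-argument variant could look like this (a sketch; the name @f32min_ptr is hypothetical). Note that noalias, or runtime alias checks, would be needed so the vectorizer can prove the stores cannot clobber the loads, whereas the distinct globals in the patch are trivially non-aliasing:

define void @f32min_ptr(ptr noalias %in1, ptr noalias %in2, ptr noalias %out) {
entry:
  br label %for.body

for.body:
  %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
  %p1 = getelementptr inbounds float, ptr %in1, i64 %iv
  %a = load float, ptr %p1, align 4
  %p2 = getelementptr inbounds float, ptr %in2, i64 %iv
  %b = load float, ptr %p2, align 4
  %r = tail call float @llvm.minimumnum.f32(float %a, float %b)
  %po = getelementptr inbounds float, ptr %out, i64 %iv
  store float %r, ptr %po, align 4
  %iv.next = add nuw nsw i64 %iv, 1
  %ec = icmp eq i64 %iv.next, 4096
  br i1 %ec, label %exit, label %for.body

exit:
  ret void
}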

ret void

for.body: ; preds = %entry, %for.body
%indvars.iv = phi i64 [ 0, %entry ], [ %indvars.iv.next, %for.body ]

Strip the indvars. prefix for more compact names.
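That is, roughly (a minimal before/after sketch of the suggested rename; names hypothetical):

  %iv = phi i64 [ 0, %entry ], [ %iv.next, %for.body ]
  ...
  %iv.next = add nuw nsw i64 %iv, 1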

Comment on lines +77 to +78
for.cond.cleanup: ; preds = %for.body
ret void

Name it something more compact, like exit, and move it to the end.
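That is, roughly (a sketch of the suggested block layout; %ec follows the compact naming from the previous comment):

for.body:
  ...
  br i1 %ec, label %exit, label %for.body

exit:                                             ; preds = %for.body
  ret void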
