[CI][Bench] Use new version of bench dashboard #1212

Merged: 1 commit, Mar 21, 2025
18 changes: 11 additions & 7 deletions .github/workflows/benchmarks.yml
@@ -7,18 +7,22 @@ on:
       description: PR number (if 0, it'll run on the main)
       type: number
     bench_script_params:
+      # If you want to save the results of the manual run in 'benchmark-results' branch,
+      # you have to pass '--save XXX', where XXX is the label of your results.
       description: Parameters passed to script executing benchmark
       type: string
       required: false
       default: ''
-    upload_report:
-      description: 'Upload HTML report'
-      type: boolean
-      required: false
-      default: false
+    runner:
+      description: Runner
+      type: choice
+      required: true
+      default: 'L0_PERF'
+      options:
+        - L0_PERF
 
 permissions:
-  contents: read
+  contents: write
   pull-requests: write
 
 jobs:
@@ -28,4 +32,4 @@ jobs:
     with:
       pr_no: ${{ inputs.pr_no }}
       bench_script_params: ${{ inputs.bench_script_params }}
-      upload_report: ${{ inputs.upload_report }}
+      runner: ${{ inputs.runner }}
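
For reference, the new inputs can be exercised via a manual dispatch. A minimal sketch with the GitHub CLI, assuming the workflow file name shown above (`MyLabel` is a placeholder label):

    # Dispatch the benchmark workflow on main, saving results under a custom
    # label in the 'benchmark-results' branch, on the default L0_PERF runner.
    gh workflow run benchmarks.yml \
      -f pr_no=0 \
      -f bench_script_params='--save MyLabel' \
      -f runner=L0_PERF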
7 changes: 3 additions & 4 deletions .github/workflows/nightly.yml
@@ -248,9 +248,9 @@ jobs:
         call "C:\Program Files (x86)\Intel\oneAPI\setvars-vcvarsall.bat"
         ctest -C ${{matrix.build_type}} --output-on-failure --test-dir test
 
-  hwloc-fallback:
   # Scenarios where UMF_LINK_HWLOC_STATICALLY is set to OFF and hwloc is not installed in the system
   # The hwloc library is fetched implicitly
+  hwloc-fallback:
     name: "Fallback to static hwloc build"
     strategy:
       matrix:
@@ -317,9 +317,8 @@ jobs:
   Benchmarks:
     uses: ./.github/workflows/reusable_benchmarks.yml
     permissions:
-      contents: read
+      contents: write
       pull-requests: write
     with:
       pr_no: '0'
-      bench_script_params: '--save baseline'
-      upload_report: true
+      bench_script_params: '--save Baseline_PVC'
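
Note that `Baseline_PVC` is the same name the docs dashboard preselects via `defaultCompareNames` in `config.js` (see the `reusable_docs_build.yml` diff below), so the two must stay in sync. Roughly, the nightly call flattens to an invocation along these lines (a sketch with assumed paths and variables, not the literal CI command):

    # Hypothetical flattened form of the nightly benchmark run.
    ./sc/devops/scripts/benchmarks/main.py ~/bench_workdir_umf \
      --umf "$BUILD_DIR" \
      --timeout 3000 \
      --output-html remote \
      --results-dir "$GITHUB_WORKSPACE/results-repo" \
      --output-markdown \
      --save Baseline_PVC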
113 changes: 70 additions & 43 deletions .github/workflows/reusable_benchmarks.yml
@@ -1,5 +1,5 @@
 # Executes benchmarks implemented in this repository using scripts
-# for results visualization from intel/llvm (unified-runtime dir).
+# for results visualization from intel/llvm.
 name: Benchmarks
 
 on:
@@ -14,13 +14,13 @@ on:
       required: false
       type: string
       default: ''
-    upload_report:
+    runner:
       required: false
-      type: boolean
-      default: false
+      type: string
+      default: 'L0_PERF'
 
 permissions:
-  contents: read
+  contents: write
   pull-requests: write
 
 env:
@@ -32,17 +32,9 @@ jobs:
     name: Benchmarks
     # run only on upstream; forks will not have the HW
     if: github.repository == 'oneapi-src/unified-memory-framework'
-    runs-on: L0_PERF
+    runs-on: ${{ inputs.runner }}
 
     steps:
-      # Workspace on self-hosted runners is not cleaned automatically.
-      # We have to delete the files created outside of using actions.
-      - name: Cleanup self-hosted workspace
-        if: always()
-        run: |
-          ls -la ./
-          rm -rf ./* || true
-
       - name: Add comment to PR
         uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
         if: ${{ always() && inputs.pr_no != 0 }}
@@ -97,23 +89,32 @@ jobs:
       - name: Build UMF
         run: cmake --build ${{env.BUILD_DIR}} -j $(nproc)
 
-      # Get scripts for benchmark data visualization.
-      # Use specific tag, as the scripts or files' location may change.
-      - name: Checkout SYCL
+      - name: Checkout UMF results branch
         uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
+          ref: benchmark-results
+          path: results-repo
+
+      # Get scripts for benchmark data visualization (from SYCL repo).
+      # Use specific ref, as the scripts or files' location may change.
+      - name: Checkout benchmark scripts
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
+        with:
           repository: intel/llvm
-          # [BENCHMARK] fix default timeout parameter
-          # https://github.com/intel/llvm/pull/17412
-          ref: 357e9e0b253b7eba105d044e38452b3c09169f8a
-          path: sycl-repo
-          fetch-depth: 1
+          # Note: The same ref is used in docs build (for dashboard generation)!
+          #
+          # 20.03.2025
+          # branch: unify-benchmark-ci
+          ref: cae7049c78c697b3ac94f931716d9efb53addcd8

[Review comment from a Contributor, on the pinned ref above]: I made a few updates today, so you could update if you want.

+          path: sc
+          sparse-checkout: |
+            devops/scripts/benchmarks
 
       - name: Install benchmarking scripts deps
         run: |
           python -m venv .venv
           source .venv/bin/activate
-          pip install -r ${{github.workspace}}/sycl-repo/unified-runtime/third_party/benchmark_requirements.txt
+          pip install -r ${{github.workspace}}/sc/devops/scripts/benchmarks/requirements.txt
 
       - name: Set core range and GPU mask
         run: |
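
The `sparse-checkout` input keeps the intel/llvm clone down to just the benchmark scripts. Outside of Actions, a roughly equivalent sequence with plain git (a sketch, not what actions/checkout literally runs) is:

    # Blobless clone, then materialize only the benchmark scripts at the pinned ref.
    # Requires git >= 2.25 for the sparse-checkout command.
    git clone --no-checkout --filter=blob:none https://github.com/intel/llvm sc
    cd sc
    git sparse-checkout set devops/scripts/benchmarks
    git checkout cae7049c78c697b3ac94f931716d9efb53addcd8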
@@ -135,22 +136,21 @@
 
       - name: Run UMF benchmarks
         id: benchmarks
-        working-directory: ${{env.BUILD_DIR}}
         run: >
-          source ${{github.workspace}}/.venv/bin/activate &&
-          taskset -c ${{ env.CORES }} ${{ github.workspace }}/sycl-repo/unified-runtime/scripts/benchmarks/main.py
+          source .venv/bin/activate &&
+          taskset -c ${{ env.CORES }} ./sc/devops/scripts/benchmarks/main.py
           ~/bench_workdir_umf
           --umf ${{env.BUILD_DIR}}
-          --compare baseline
           --timeout 3000
-          ${{ inputs.upload_report && '--output-html' || '' }}
-          ${{ inputs.pr_no != 0 && '--output-markdown' || '' }}
+          --output-html remote
+          --results-dir ${{ github.workspace }}/results-repo
+          --output-markdown
           ${{ inputs.bench_script_params }}
 
       # In case it failed to add a comment, we can still print the results.
       - name: Print benchmark results
-        if: ${{ always() && inputs.pr_no != 0 }}
-        run: cat ${{env.BUILD_DIR}}/benchmark_results.md
+        if: ${{ always() }}
+        run: cat ${{ github.workspace }}/benchmark_results.md || true
 
       - name: Add comment to PR
         uses: actions/github-script@60a0d83039c74a4aee543508d2ffcb1c3799cdea # v7.0.1
@@ -160,7 +160,7 @@ jobs:
           let markdown = ""
           try {
             const fs = require('fs');
-            markdown = fs.readFileSync('${{env.BUILD_DIR}}/benchmark_results.md', 'utf8');
+            markdown = fs.readFileSync('${{ github.workspace }}/benchmark_results.md', 'utf8');
           } catch(err) {
           }
@@ -177,15 +177,42 @@ jobs:
             repo: context.repo.repo,
             body: body
           })
-
-      - name: Upload HTML report
-        if: ${{ always() && inputs.upload_report }}
-        uses: actions/cache/save@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
-        with:
-          path: umf-repo/build/benchmark_results.html
-          key: benchmark-results-${{ github.run_id }}
 
       - name: Get information about platform
         if: ${{ always() }}
         working-directory: ${{env.UMF_DIR}}
         run: .github/scripts/get_system_info.sh
+
+      - name: Commit data.json and results directory
+        working-directory: results-repo
+        run: |
+          git config --global user.name "GitHub Actions Bot"
+          git config --global user.email "[email protected]"
+
+          for attempt in {1..5}; do
+            echo "Attempt #$attempt to push changes"
+
+            rm -f data.json
+            cp ${{ github.workspace }}/sc/devops/scripts/benchmarks/html/data.json .
+
+            git add data.json results/
+            git commit -m "Add benchmark results and data.json"
+
+            results_file=$(git diff HEAD~1 --name-only -- results/ | head -n 1)
+
+            if git push origin benchmark-results; then
+              echo "Push succeeded"
+              break
+            fi
+
+            echo "Push failed, retrying..."
+
+            if [ -n "$results_file" ]; then
+              mv $results_file ${{ github.workspace }}/temp_$(basename $results_file)
+
+              git reset --hard origin/benchmark-results
+              git pull origin benchmark-results
+
+              new_file="results/$(basename "$results_file")"
+              mv ${{ github.workspace }}/temp_$(basename $results_file) $new_file
+            fi
+
+            echo "Regenerating data.json"
+            (cd ${{ github.workspace }} && ${{ github.workspace }}/sc/devops/scripts/benchmarks/main.py ~/bench_workdir_umf --dry-run --results-dir ${{ github.workspace }}/results-repo --output-html remote)
+          done
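
The loop above exists because several benchmark runs may publish to `benchmark-results` concurrently: a rejected push means another run updated the branch first, so this run sets its new results file aside, resets to the remote tip, restores the file, regenerates `data.json`, and tries again. Distilled, the pattern is (a generic sketch; the file name and helper are placeholders, not repo code):

    # Retry-on-rejected-push pattern for a shared results branch.
    for attempt in 1 2 3 4 5; do
      git add data.json results/
      git commit -m "Add benchmark results and data.json"
      git push origin benchmark-results && break    # success: stop retrying
      # Lost the race: keep our new result, sync to the remote tip, retry.
      cp results/our_new_result.json /tmp/          # placeholder file name
      git reset --hard origin/benchmark-results
      cp /tmp/our_new_result.json results/
      regenerate_data_json                          # placeholder for the data.json rebuild
    done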
36 changes: 27 additions & 9 deletions .github/workflows/reusable_docs_build.yml
@@ -45,19 +45,37 @@ jobs:
           -DUMF_DISABLE_HWLOC=ON
         cmake --build build --target docs
 
-      # If we upload HTML docs, we want to include benchmark results as well
-      - name: Download benchmark HTML before uploading docs
+      #
+      # Documentation is built. Now we want to add benchmark dashboard.
+      # We only do it if inputs.upload is set, as this job is also used for testing docs build.
+      #
+      - name: Checkout benchmark scripts
         if: ${{ inputs.upload == true }}
-        id: download-bench-html
-        uses: actions/cache/restore@1bd1e32a3bdc45362d1e726936510720a7c30a57 # v4.2.0
+        uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
         with:
-          path: umf-repo/build/benchmark_results.html
-          key: benchmark-results-
+          repository: intel/llvm
+          # 20.03.2025
+          # branch: unify-benchmark-ci
+          ref: cae7049c78c697b3ac94f931716d9efb53addcd8
+          path: sc
+          sparse-checkout: |
+            devops/scripts/benchmarks
 
-      - name: Move benchmark HTML
-        if: ${{ inputs.upload == true && steps.download-bench-html.outputs.cache-hit != '' }}
+      - name: Move benchmark HTML files
+        if: ${{ inputs.upload == true }}
+        working-directory: ${{ github.workspace }}/build/docs_build/generated/html
         run: |
-          mv umf-repo/build/benchmark_results.html ${{github.workspace}}/build/docs_build/generated/html
+          mkdir performance
+          mv ${{ github.workspace }}/sc/devops/scripts/benchmarks/html/* performance/
+
+      - name: Replace config.js
+        if: ${{ inputs.upload == true }}
+        working-directory: ${{ github.workspace }}/build/docs_build/generated/html
+        run: |
+          cat << 'EOF' > ./performance/config.js
+          remoteDataUrl = 'https://raw.githubusercontent.com/oneapi-src/unified-memory-framework/refs/heads/benchmark-results/data.json';
+          defaultCompareNames = ["Baseline_PVC"];
+          EOF
 
       - name: Upload artifact
         if: ${{ inputs.upload == true }}
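
After a nightly run, the wiring can be sanity-checked by confirming that the URL hard-coded into `config.js` actually serves the freshly pushed data (a plain curl sketch; no project-specific tooling implied):

    # HEAD-request the raw data.json from the benchmark-results branch.
    curl -sfI 'https://raw.githubusercontent.com/oneapi-src/unified-memory-framework/refs/heads/benchmark-results/data.json' \
      && echo 'dashboard data reachable'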