ci: Add GHA workflow executing benchmarks #422
Conversation
```yaml
cargo bench --package resolc --bench compile -- --save-baseline main_resolc
cargo bench --package revive-yul --bench parse -- --save-baseline main_yul_parse
cargo bench --package revive-yul --bench lower -- --save-baseline main_yul_lower
timeout-minutes: 20
```
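For context on the `--save-baseline` flags above: Criterion stores the `main` results under those names so a later run on the PR branch can compare against them via `--baseline`. A sketch of what the comparison step could look like (step name and indentation are illustrative):

```yaml
- name: Benchmark PR branch against saved main baselines
  run: |
    cargo bench --package resolc --bench compile -- --baseline main_resolc
    cargo bench --package revive-yul --bench parse -- --baseline main_yul_parse
    cargo bench --package revive-yul --bench lower -- --baseline main_yul_lower
  timeout-minutes: 20
```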
We could also remove the `timeout-minutes` and add it only if we notice unusually long runs. (This took about 6-7 minutes on an M4 Pro.)
kvpanch
left a comment
Is it known whether the CI machine is suitable for this kind of perf testing? For example, can the CI machine run several tasks in parallel? If so, the perf results might not be stable, and it might be necessary to run such perf testing on a dedicated machine.
```yaml
git fetch origin main
git checkout -f main
```
IMO, it makes sense to parameterize this by allowing a branch or tag to test against to be specified. That would help to nicely measure improvements/regressions over a given release.
Obviously, that can be done later, not in this PR.
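As a sketch of that parameterization, the workflow could accept a `workflow_dispatch` input (the input name `baseline-ref` is hypothetical):

```yaml
on:
  workflow_dispatch:
    inputs:
      baseline-ref:
        description: Branch or tag to benchmark against
        required: false
        default: main
```

The fetch/checkout step would then use `${{ inputs.baseline-ref }}` instead of the hard-coded `main`.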
That can be a neat and valuable feature, thanks. I can add it in a follow-up 👍
Regarding your other comment about concurrency: jobs and workflows (not steps within jobs) can run concurrently by default. Thanks for the reminder; it prompted me to add a concurrency group so that this workflow only runs one at a time for a given PR branch (and cancels in-progress runs if it's triggered again while one is running).
I'd still have to look up whether another CI machine is more optimal for perf testing, though.
xermicus
left a comment
lgtm overall
```yaml
jobs:
  benchmark:
    runs-on: ubuntu-24.04
```
I'd like to use the same host we benchmark the Polkadot SDK with.
Seems like different ones are being used for benches in the Polkadot SDK, but two common ones are e.g. `parity-large` and `parity-benchmark` (both enabled in our revive repo). The differences can be seen here.
```yaml
- name: Checkout PR Branch
  uses: actions/checkout@v4
  with:
    ref: ${{ github.event.workflow_run.head_sha }}
```
Why are we checking out the PR branch here only to check out the main branch first and then switch back to the PR branch again?
Essentially due to the get-solc and get-llvm actions used first. I figured we'd want to use those action files from the PR branch in case they were modified as part of the PR. An alternative would be to run those steps twice (once on main and again on the PR branch after switching).
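Under that reasoning, the step order might look roughly like this (the local action paths are assumptions for illustration):

```yaml
steps:
  # Check out the PR branch first so the get-solc and get-llvm
  # action files reflect any changes made in the PR.
  - uses: actions/checkout@v4
    with:
      ref: ${{ github.event.workflow_run.head_sha }}
  - uses: ./.github/actions/get-solc
  - uses: ./.github/actions/get-llvm
  # Benchmark main first, then switch back to the PR branch.
  - run: |
      git fetch origin main
      git checkout -f main
```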
```yaml
types: [completed]

concurrency:
  group: ${{ github.workflow }}-${{ github.event.workflow_run.head_branch }}
```
I'd appreciate a comment here why this is needed and what it does.
I can add a comment that it cancels in-progress runs (for runs on the same branch as the PR) if the workflow is re-triggered while a run is in progress (e.g. by another push to the PR and a subsequent successful Test workflow run).
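A minimal sketch of such a comment in the workflow file (assuming `cancel-in-progress` is enabled, as described):

```yaml
# Run at most one benchmark workflow per PR branch at a time; if the
# workflow is re-triggered (e.g. by another push and a subsequent
# successful Test run), cancel the in-progress run instead of queueing
# a stale one.
concurrency:
  group: ${{ github.workflow }}-${{ github.event.workflow_run.head_branch }}
  cancel-in-progress: true
```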
```markdown
## Parse Yul

<details>
<summary>Results</summary>
```
Hiding the results is definitely what we want here in order to keep it from getting too noisy. Can we also have a summary on top that quickly tells PR authors whether the benchmarks regressed, improved, or showed no significant change?
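One possible shape for such a comment, with a one-line verdict above the collapsed sections (all wording and numbers are purely illustrative):

```markdown
## Benchmarks

**Summary:** no significant change detected (0 regressed, 1 improved, 11 unchanged)

<details>
<summary>Compile E2E results</summary>

...full Criterion output...

</details>
```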
Description

Adds a workflow executing the benchmarks in `resolc` and `revive-yul`.

- Triggered by a completed `Test` workflow run for a `pull_request` event
- Benchmarks are run on the `main` branch and the PR branch

Notes

**File must exist on the default branch**
GitHub triggers the `workflow_run` event only if the workflow file exists on the default branch. Therefore, the workflow will not run for this PR. (See the section further below on how it has still been tested.)

**The default branch workflow will be used**
The `Benchmark` workflow will start without any indication in the PR that the workflow is running.

Testing the Workflow

I've tested a similar enough workflow in a personal repo in order to push it to `main` and have it be triggered. Will verify again once this PR is merged.

- `workflow_run` is triggered by a PR event
- `workflow_run` is not triggered by a push event

Example PR Comment
Expand to see an example PR comment
Benchmarks
Compile E2E
Results
Parse Yul
Results
Lower Yul
Results
Resolved Issues
Closes #405