[testing, CI] fix coverage statistics issue caused by test_common.py tracer patching #2237
base: main
Conversation
This reverts commit 5c602da.
Codecov Report: All modified and coverable lines are covered by tests ✅
Flags with carried-forward coverage won't be shown.
/intelci: run
/intelci: run
Is a 1.2% increase what you would expect? In general, where does the other missing coverage come from?
@ethanglaser good question! It's mainly two aspects: 1) coverage of the utils/validation.py tests was not being recorded, and 2) centralized testing (especially test_patching.py), which covers all other methods of estimators, was not being recorded (especially
Description
Python code coverage was introduced in #2222 and #2225. Some code, while properly covered by tests, was showing incorrect coverage. It was discovered that the tracing used in test_common.py interferes with the tracing collected by pytest-cov: any tests run alphabetically after test_common.py were being missed in codecov and by coverage.py. This is corrected by tracing the estimators in a separate process and passing the data back to the pytest-cov parent process. The separate process is implemented with Python multiprocessing because of the time overhead of loading sklearnex for each estimator and method; it persists as a daemon process for the duration of all test_common.py tests. The test pass/xfail/etc. rates have been verified to match main. No performance metrics are generated, since this change is purely for testing and CI.
The effectiveness of this change can be observed in the ~1.2% improvement in code coverage (as indirect changes).
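The isolation pattern described above can be sketched roughly as follows. This is a hypothetical minimal example, not the PR's actual implementation: it assumes the worker installs its own `sys.settrace` tracer so it cannot clobber the trace function that pytest-cov installs in the parent process, and the names `run_isolated` and `_traced_call` are made up for illustration.

```python
# Sketch: run sys.settrace-based tracing inside a daemon worker process
# so it does not interfere with the parent's coverage tracer.
import multiprocessing as mp
import sys


def _traced_call(func, args, result_queue):
    """Worker: install a local tracer, call func, send results back."""
    calls = []

    def tracer(frame, event, arg):
        if event == "call":
            calls.append(frame.f_code.co_name)
        return None

    sys.settrace(tracer)  # affects only this worker process
    try:
        result = func(*args)
    finally:
        sys.settrace(None)
    result_queue.put((result, calls))


def run_isolated(func, *args):
    """Run func under tracing in a separate daemon process."""
    queue = mp.Queue()
    proc = mp.Process(target=_traced_call, args=(func, args, queue), daemon=True)
    proc.start()
    result, calls = queue.get()  # parent's trace function is untouched
    proc.join()
    return result, calls


if __name__ == "__main__":
    result, calls = run_isolated(pow, 2, 10)
    print(result)  # 1024
```

In the PR itself, a single persistent daemon process reportedly serves all of test_common.py (to amortize the cost of importing sklearnex), whereas this sketch spawns one worker per call for simplicity.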
The PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are closed.
This approach ensures that reviewers don't spend extra time asking for regular requirements.
You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way.
For example, a PR with a docs update doesn't require checkboxes for performance, while a PR with any change to actual code should have checkboxes and justify how the change is expected to affect performance (or the justification should be self-evident).
Checklist to comply with before moving PR from draft:
PR completeness and readability
Testing
Performance