Optimize decompression planning #7568
Conversation
The EquivalenceMember lookup is the most costly part, so share it between different uses. Switch batch sorted merge to use the generic pathkey matching code. Also cache some intermediate data in the CompressionInfo struct.
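To illustrate the caching idea in this description, here is a minimal hypothetical sketch (the struct fields and function names are invented for illustration, not the actual TimescaleDB or Postgres definitions): the costly linear lookup is performed once, its result is stored in the planning-time struct, and every later use shares the cached result instead of repeating the scan.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for a Postgres EquivalenceMember. */
typedef struct EquivalenceMember
{
	const char *column_name;
	int sort_group_ref;
} EquivalenceMember;

/* Hypothetical stand-in for the planning-time CompressionInfo struct.
 * cached_member_index == -1 means "not computed yet". */
typedef struct CompressionInfo
{
	int cached_member_index;
} CompressionInfo;

/* The costly part: a linear scan over all equivalence members. */
static int
find_member_index(const EquivalenceMember *members, int nmembers, const char *column)
{
	for (int i = 0; i < nmembers; i++)
	{
		if (strcmp(members[i].column_name, column) == 0)
			return i;
	}
	return -1;
}

/* Shared entry point: do the scan on first use, then reuse the cached
 * result for all subsequent callers. */
static int
get_member_index(CompressionInfo *info, const EquivalenceMember *members,
				 int nmembers, const char *column)
{
	if (info->cached_member_index < 0)
		info->cached_member_index = find_member_index(members, nmembers, column);
	return info->cached_member_index;
}
```

The point of the pattern is that the second and later calls to `get_member_index` cost O(1) instead of repeating the O(n) scan.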
@@ -212,7 +212,7 @@ decompress_chunk_begin(CustomScanState *node, EState *estate, int eflags)
 				node->ss.ss_ScanTupleSlot->tts_tupleDescriptor);
 		}
 	}
-	/* Sort keys should only be present when sorted_merge_append is used */
+	/* Sort keys should only be present when batch sorted merge is used. */
Even without batch sorted merge we might still want to push down the ordering and skip the sort after decompression.
Yes, we do that, but these are the keys for the sorting performed inside the DecompressChunk node itself. The only case where it sorts by itself is batch sorted merge; otherwise the sorting is performed by the underlying compressed scan.
Codecov Report — Attention: Patch coverage is

@@            Coverage Diff             @@
##             main    #7568      +/-   ##
==========================================
+ Coverage   80.06%   82.30%   +2.24%
==========================================
  Files         190      238      +48
  Lines       37181    43706    +6525
  Branches     9450    10963    +1513
==========================================
+ Hits        29770    35974    +6204
- Misses       2997     3402     +405
+ Partials     4414     4330      -84
==========================================

☔ View full report in Codecov by Sentry.
Tsbench has some 10% speedups on planning queries: https://grafana.ops.savannah-dev.timescale.com/d/uP2MnQk4z/query-run-times?orgId=1&var-suite=lazy_decompression&var-query=8e50d2074a29289e8ec3280d4ee535bc

This also uncovers a major planning regression which is caused by the notorious quadratic equivalence member search in the Postgres sorted plan creation. I think we'll have to live with it for now and fix this upstream. It happens on queries like

Another regression is in

The regression in
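For context on why that upstream search is quadratic, here is a toy cost model (hypothetical names, not the actual Postgres code): when building the sorted plan, each pathkey is matched against the members of its equivalence class by a linear scan, so with roughly n sort columns over roughly n members the total work grows as O(n²).

```c
#include <assert.h>

/* Toy model of the quadratic equivalence member search: for every pathkey
 * of the sorted plan, the planner linearly scans the candidate members,
 * so the total work is npathkeys * nmembers equality checks. */
static long
naive_search_comparisons(int npathkeys, int nmembers)
{
	long comparisons = 0;
	for (int pk = 0; pk < npathkeys; pk++)
	{
		for (int m = 0; m < nmembers; m++)
			comparisons++; /* one equality check per candidate member */
	}
	return comparisons;
}
```

Doubling the number of sort columns (and with it the member list) quadruples the comparison count, which is why the effect only shows up on plans with many equivalence members.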
I initially didn't want to introduce any plan changes with this PR, but it's split out of #6879, so I had to import one small part from there: we can now sort above decompression not only by plain columns but also by expressions (e.g. time_bucket), which gives rise to these (arguably more efficient) MergeAppend over per-chunk Sort plans.
Disable-check: force-changelog-file