Convert MARC Export to use Celery (PP-1472) #2017
Conversation
Codecov Report

Attention: Patch coverage is …

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2017      +/-   ##
==========================================
+ Coverage   90.59%   90.67%   +0.08%
==========================================
  Files         338      342       +4
  Lines       40135    40502     +367
  Branches     8681     8777      +96
==========================================
+ Hits        36360    36726     +366
- Misses       2509     2510       +1
  Partials     1266     1266

☔ View full report in Codecov by Sentry.
That is an absolute beast. I like your batch + re-queueing approach. I think that pattern will likely come in handy in other places.
Description
Convert the MARC export script to use Celery.
Motivation and Context
This ended up being rather involved, since the export can take quite a long time. Even getting a large enough chunk of data for a single S3 multipart upload can take longer than I was comfortable with.
This PR takes the approach of processing `batch_size` (default: 500) records in one task, then saving the output to redis and re-queuing the task to process the next `batch_size` of records. Once the data in redis is large enough, a multipart upload is started in S3, and the multipart data is cached in redis. This continues until the file is completely generated. A rough sketch of the pattern is below.
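For illustration only, here is a minimal sketch of what such a self-re-queuing Celery task could look like. This is not the code from this PR: the task name, the `fetch_marc_records` helper, the redis key names, and the size constants are all hypothetical, and error handling and locking are omitted.

```python
import boto3
import redis
from celery import Celery

app = Celery("marc_export", broker="redis://localhost:6379/0")
redis_client = redis.Redis()
s3 = boto3.client("s3")

BATCH_SIZE = 500  # hypothetical: records serialized per task invocation
MIN_PART_SIZE = 5 * 1024 * 1024  # S3 parts (except the last) must be >= 5 MiB


def fetch_marc_records(offset: int, limit: int) -> bytes:
    """Hypothetical stand-in for serializing the next chunk of MARC records."""
    raise NotImplementedError


@app.task(bind=True)
def export_marc_batch(
    self, bucket: str, key: str, offset: int = 0,
    upload_id: str | None = None, part_number: int = 1,
) -> None:
    buffer_key = f"marc-export:{key}:buffer"
    parts_key = f"marc-export:{key}:parts"

    # Process one batch and accumulate its output in redis.
    data = fetch_marc_records(offset, BATCH_SIZE)
    if data:
        redis_client.append(buffer_key, data)
    finished = not data

    # Once enough data has accumulated, upload it as one multipart part
    # and cache the part metadata (needed later to complete the upload).
    buffered = redis_client.strlen(buffer_key)
    if buffered >= MIN_PART_SIZE or (finished and buffered > 0):
        if upload_id is None:
            upload_id = s3.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
        part = s3.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=redis_client.get(buffer_key),
        )
        redis_client.rpush(parts_key, f"{part_number}:{part['ETag']}")
        redis_client.delete(buffer_key)
        part_number += 1

    if finished:
        if upload_id is not None:
            parts = [
                {"PartNumber": int(num), "ETag": etag}
                for num, etag in (
                    entry.decode().split(":", 1)
                    for entry in redis_client.lrange(parts_key, 0, -1)
                )
            ]
            s3.complete_multipart_upload(
                Bucket=bucket, Key=key, UploadId=upload_id,
                MultipartUpload={"Parts": parts},
            )
        redis_client.delete(parts_key)
    else:
        # Re-queue: hand the next batch to a fresh task so that no single
        # task invocation runs longer than one batch's worth of work.
        self.apply_async(
            args=(bucket, key),
            kwargs={
                "offset": offset + BATCH_SIZE,
                "upload_id": upload_id,
                "part_number": part_number,
            },
        )
```

An export would be kicked off with a single call like `export_marc_batch.delay("some-bucket", "export.mrc")`, after which the task chains itself to completion. The 5 MiB floor in the sketch comes from S3's multipart upload rules (every part except the last must be at least 5 MiB), which is why output has to be buffered in redis rather than uploaded batch by batch.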
How Has This Been Tested?
Checklist