Related to #2039
If we assume that Phase 2 saturates the CPU, then there is no speed-up necessary beyond the existing mechanism of firing a speculative retrieval request for blocks.
Phase 1 can be sped up differently for different versions:
For V1, we can split the metadata into N macro-segments and fire a request for one micro-segment to each of the N peers. Eventually, the responses should sync up to either the history or some other segment. Effectively, this splits the entire history into N macro-segments and gives each one the same Phase 1 treatment. It speeds things up by exploiting more concurrency while keeping payload sizes the same. Requires at least N peers before starting.
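The V1 scheme above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the function names (`split_into_macro_segments`, `fetch_segment`, `phase1_v1`) are hypothetical, and `fetch_segment` stands in for the real network request to a peer.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_macro_segments(start, end, n):
    """Split the half-open range [start, end) into n contiguous macro-segments."""
    size = end - start
    bounds = [start + (size * i) // n for i in range(n + 1)]
    return [(bounds[i], bounds[i + 1]) for i in range(n)]

def fetch_segment(peer, segment):
    # Placeholder for the real network request; here it just returns the
    # metadata indices the segment covers.
    lo, hi = segment
    return list(range(lo, hi))

def phase1_v1(peers, start, end):
    """Give each of N peers one macro-segment and retrieve them concurrently."""
    segments = split_into_macro_segments(start, end, len(peers))
    with ThreadPoolExecutor(max_workers=len(peers)) as pool:
        results = pool.map(fetch_segment, peers, segments)
    merged = []
    for chunk in results:
        merged.extend(chunk)
    return merged
```

Because each macro-segment goes to a different peer, the N requests are in flight at the same time while each individual payload stays the same size as before.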
For V2, we retrieve a larger segment, e.g. N * batch_size, and then split the retrieved metadata into N micro-segments (from the same peer). Effectively, this requests a much larger macro-segment at a time and then splits it into several micro-segments. It speeds things up by retrieving more metadata per request.
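The V2 scheme can be sketched in the same spirit. Again, this is a hypothetical illustration: `fetch_from_peer` stands in for the real network request, and `phase1_v2` is an assumed name for the driver.

```python
def fetch_from_peer(peer, start, count):
    # Placeholder for the real network request returning `count` metadata
    # entries starting at `start` from a single peer.
    return list(range(start, start + count))

def phase1_v2(peer, start, batch_size, n):
    """Request one macro-segment of n * batch_size entries from a single peer,
    then split it into n micro-segments for downstream Phase 1 processing."""
    macro = fetch_from_peer(peer, start, n * batch_size)
    return [macro[i:i + batch_size] for i in range(0, len(macro), batch_size)]
```

The trade-off relative to V1: one peer serves a payload N times larger per round trip, so throughput improves without needing N peers up front.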