Truncation of results in the presence of very large queries/high number of concurrent queries #11
This most recent came up with the
I added the
Thanks for looking into this Chris! (Cc'ing @emilyburke who found the issue to begin with)
thanks @lcolladotor!
Seen on both Stingray and the cluster mirror.
Typically this only happens when a query returns a large number of rows (tens of thousands), when the rows themselves are extremely large (e.g. coverage queries with tens of thousands of samples per row), or when there is a high amount of concurrency (100 queries all started at the same time).
A form of the error is reported by either Python or curl (error code 18) as:
transfer closed with outstanding read data remaining
However, it's unclear whether the server is failing to transfer the full payload, the client is failing to keep up, or both.
The server doesn't always report an error. On the other hand, it's not clear why the client couldn't keep up when running on a server with a large number of cores and plenty of memory, though raising the read buffer on the client has alleviated the problem in some specific cases (it doesn't always work).
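One way to make this failure mode visible on the client side is to compare the number of bytes actually received against the length the server advertised, which is the same check that makes curl report error 18. A minimal Python sketch (the function name, buffer size, and simulated stream are illustrative, not part of the actual client):

```python
import io

def read_full_body(stream, expected_length, chunk_size=1 << 16):
    """Read up to expected_length bytes from stream in chunks.

    Returns (data, truncated), where truncated is True if the stream
    ended before expected_length bytes arrived -- the client-side
    analogue of curl's "transfer closed with outstanding read data
    remaining" (error 18).
    """
    chunks = []
    remaining = expected_length
    while remaining > 0:
        chunk = stream.read(min(chunk_size, remaining))
        if not chunk:  # stream closed before the full payload arrived
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks), remaining > 0

# Simulate a server that closes the connection after sending only
# part of a 100 KB payload.
full = b"x" * 100_000
short_stream = io.BytesIO(full[:60_000])
data, truncated = read_full_body(short_stream, len(full))
```

Logging `expected_length` alongside the bytes actually received on every failed query would at least show whether the shortfall originates on the server or the client side.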