For the server, `core::iter::traits::iterator::Iterator::fold`, called from `neqo_transport::connection::Connection::input_path`, takes ~15% of cycles. My guess is that this is related to the `SentPackets` data structure. We should work on that.
neqo-neqo-reno-pacing.server.svg
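
Not neqo's actual code, but a minimal sketch of the pattern I suspect: if acknowledgements are resolved by iterating (folding) over every outstanding entry in `SentPackets`, each ACK range costs O(outstanding packets), which would fit `Iterator::fold` showing up this prominently. The `LinearSentPackets`, `KeyedSentPackets`, and `take_acked` names below are hypothetical, purely for illustration.

```rust
use std::collections::BTreeMap;
use std::ops::RangeInclusive;

/// Illustrative stand-in; the real `SentPacket` in neqo carries much more
/// state (send time, frames, size, ...).
#[derive(Debug)]
struct SentPacket {
    pn: u64,
}

/// Hypothetical "linear" store: every ACK range is resolved by scanning the
/// whole list of outstanding packets, the kind of per-packet iteration that
/// could surface as `Iterator::fold` in a profile.
struct LinearSentPackets {
    packets: Vec<SentPacket>,
}

impl LinearSentPackets {
    /// O(outstanding packets) per acknowledged range.
    fn take_acked(&mut self, range: RangeInclusive<u64>) -> Vec<SentPacket> {
        let (acked, rest): (Vec<_>, Vec<_>) =
            self.packets.drain(..).partition(|p| range.contains(&p.pn));
        self.packets = rest;
        acked
    }
}

/// Alternative keyed by packet number: an acknowledged range becomes a
/// bounded range lookup plus removals instead of a full scan.
struct KeyedSentPackets {
    packets: BTreeMap<u64, SentPacket>,
}

impl KeyedSentPackets {
    /// O(acked packets * log(outstanding packets)) per acknowledged range.
    fn take_acked(&mut self, range: RangeInclusive<u64>) -> Vec<SentPacket> {
        let pns: Vec<u64> = self.packets.range(range).map(|(&pn, _)| pn).collect();
        pns.into_iter()
            .filter_map(|pn| self.packets.remove(&pn))
            .collect()
    }
}

fn main() {
    let mut linear = LinearSentPackets {
        packets: (0..10).map(|pn| SentPacket { pn }).collect(),
    };
    let mut keyed = KeyedSentPackets {
        packets: (0..10).map(|pn| (pn, SentPacket { pn })).collect(),
    };
    println!("linear acked: {:?}", linear.take_acked(3..=5));
    println!("keyed acked:  {:?}", keyed.take_acked(3..=5));
}
```

Keying the outstanding packets by packet number (or something along those lines) would turn each ACK range into a bounded lookup rather than a pass over everything in flight; whether that is the right fix depends on what the flamegraph actually attributes the fold to.
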
For the client, no clear new bottleneck emerges. That's not really surprising, because our server is the bottleneck: the client only uses 50-66% of a core while the server maxes out its core.
neqo-neqo-reno-pacing.client.svg
Running the neqo client against the msquic server (which makes our client the bottleneck) shows `input_path` taking quite a bit more time than above. More surprisingly, the graphs look quite different overall.
neqo-msquic-cubic-nopacing.client.svg