playback list is very slow and not all files shown #3637
Comments
Bumping because same. Lots of files in the actual directories, but the files returned don't match. They also get stuck often, which makes me think there's a blocking operation going on. Have you had any luck with this? |
Same on version 1.9.2. Not all files are visible. |
The playback API works with timespans, not files. Adjacent files are grouped into a single timespan. So this is not a bug in any way. The speed aspect can be improved though. |
@aler9 Thank you for clarification. Have some questions because couldn't find any description of timespans in documentation. I have 24/7 recording from webcam and duration of segments are big, how I can specify to split timespans to 1 hour or how to download 1 hour duration from segment with 4123.987184 duration? I have recordSegmentDuration: 1h in config |
@redbaron-gt you can specify the start time you want and specify the duration for it. So http://localhost:9996/get?path=[pathToStream]&start=[start_date]&duration=3600 You dont need to have mediamtx split the timespans for you, you can specify any range in that ^ api call |
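For reference, the call described above can be built programmatically. A minimal Python sketch, where the base URL, stream path, and start time are placeholders and the query format follows the URL quoted in the previous comment:

```python
from urllib.parse import urlencode

def playback_get_url(base, path, start_iso, duration_sec):
    # Build a mediamtx playback /get URL in the shape quoted above:
    # /get?path=...&start=...&duration=...
    query = urlencode({"path": path, "start": start_iso, "duration": duration_sec})
    return f"{base}/get?{query}"

# Placeholder values for illustration only.
print(playback_get_url("http://localhost:9996", "to_outside.1",
                       "2024-11-11T15:00:00Z", 3600))
```

Note that `urlencode` percent-escapes the colons in the timestamp, which is valid in a query string.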
@aler9 how can you speed it up? I record in 60-second chunks and have a custom recording path. After about 2 days of recordings, it times out before it returns. I worked around this by writing a custom function, but it would be nice if I could get it working natively. Any idea how to speed it up? |
My idea is to improve the server in two different ways:
I don't have a deadline for these points and external contributions are welcome. In the meantime, in order to decrease the delay, you can use a longer segment duration to reduce the segment count (opening a file on disk causes delay, so just reduce the file count), or move the segments to an SSD. |
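Segment length is controlled by the `recordSegmentDuration` option mentioned earlier in the thread. An illustrative `mediamtx.yml` excerpt (values are examples, not a recommendation):

```yaml
# Longer segments mean fewer files for /list to open.
recordSegmentDuration: 1h
```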
Ohhh, I see, you actually have to read the duration of the segments... I was thinking you could just calculate the distances between the timestamps of the segments (since there has to be a timestamp field in there). |
Any update on the caching answer? The list?path=[path] call takes over 30 seconds to respond (pegging the CPU that whole time) for a path that contains 72h of video in 90s clips. |
I ended up making a custom function that traverses the recordings folder. Instead of calculating the duration of each segment using ffmpeg, I just calculate it from the datestamp and assume it's actually as long as the timestamp diff says it is.
I am looking into adding a PR for this function, but it's technically not "safe", since a segment *could* be a different duration than the timestamps imply. So no... no real update at the moment.
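The timestamp-diff approach described above can be sketched roughly like this. The filenames and the timestamp pattern are hypothetical (mediamtx's actual recording filenames depend on the configured record path), and the fallback duration stands in for the configured segment length:

```python
from datetime import datetime

def estimated_durations(names, fallback=60.0):
    # Estimate each segment's duration as the gap to the next segment's
    # start time, instead of opening the file. The last segment has no
    # successor, so it gets the fallback (the configured segment length).
    starts = [datetime.strptime(n.rsplit(".", 1)[0], "%Y-%m-%d_%H-%M-%S")
              for n in names]
    durations = [(b - a).total_seconds() for a, b in zip(starts, starts[1:])]
    durations.append(fallback)
    return durations

# Hypothetical filenames following a strftime-style recording path.
files = [
    "2024-11-11_15-00-00.mp4",
    "2024-11-11_15-01-00.mp4",
    "2024-11-11_15-02-00.mp4",
]
print(estimated_durations(files))
```

As the comment above notes, this is an estimate: a segment that was cut short (e.g. by a stream interruption) will be reported as longer than it really is.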
Thanks for this hint. I had somehow completely missed that I could hit the 9997 API to get a list of recordings with start times. I've written my own parser that, given a known expected segment length, uses these start times to produce output similar to the playback API on 9996. |
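Such a parser might be sketched as follows. This is an illustrative reimplementation of the timespan grouping described earlier in the thread (adjacent segments merged into one timespan), not mediamtx's actual code; the output field names and the tolerance are assumptions:

```python
from datetime import datetime, timedelta

def group_timespans(starts, segment, tolerance=1.0):
    # Group adjacent segment start times into contiguous timespans,
    # similar to what the playback /list endpoint returns. Two segments
    # are considered adjacent when one starts where the previous one
    # ends, within `tolerance` seconds.
    spans = []
    for s in sorted(starts):
        if spans and abs((s - spans[-1]["end"]).total_seconds()) <= tolerance:
            spans[-1]["end"] = s + segment
        else:
            spans.append({"start": s, "end": s + segment})
    return [{"start": sp["start"].isoformat(),
             "duration": (sp["end"] - sp["start"]).total_seconds()}
            for sp in spans]

# Example: two 90s clips back to back, then a gap, then one more clip.
starts = [datetime(2024, 11, 11, 15, 0, 0),
          datetime(2024, 11, 11, 15, 1, 30),
          datetime(2024, 11, 11, 16, 0, 0)]
print(group_timespans(starts, timedelta(seconds=90)))
```

This avoids opening any file on disk, which is exactly the per-segment cost the earlier comments identify as the bottleneck.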
Response times of the /list endpoint were slow because the duration of each segment was computed from scratch by summing the duration of each of its parts. This is improved by storing the duration of the overall segment in the header and using that, if available. |
This issue is mentioned in release v1.11.0 🚀 |
Which version are you using?
v1.8.3
Which operating system are you using?
Describe the issue
The http://server:9996/list?path=to_outside.1 request gets stuck for a long time. Then (if the browser has not timed out) it returns one record for this path, sometimes 3 records out of 102 files. The last request took 7 minutes.
Describe how to replicate the issue
My configuration is
But also had problems with default config.
Disk speed is about 123 MB/s.
Did you attach the server logs?
Did you attach a network dump?
not useful