playback list is very slow and not all files shown #3637

Closed · alex-eri opened this issue Aug 8, 2024 · 14 comments
Labels: enhancement (New feature or request), record, playback

Comments

alex-eri commented Aug 8, 2024

Which version are you using?

v1.8.3

Which operating system are you using?

  • Linux amd64 standard
  • Linux amd64 Docker
  • Linux arm64 standard
  • Linux arm64 Docker
  • Linux arm7 standard
  • Linux arm7 Docker
  • Linux arm6 standard
  • Linux arm6 Docker
  • Windows amd64 standard
  • Windows amd64 Docker (WSL backend)
  • macOS amd64 standard
  • macOS amd64 Docker
  • Other (please describe)

Describe the issue

The request http://server:9996/list?path=to_outside.1 gets stuck for a long time. Then (if the browser has not timed out) it returns a single record:

[{"start":"2024-08-07T11:11:52.62923Z","duration":72479.305043111}]

for this path:

./recordings/to_outside.1
├── 20240807
│   ├── 111152_629230.mp4
│   ├── 112652_765854.mp4
│   ├── 114152_914281.mp4
│   ├── 115653_036756.mp4
│   ├── 121153_221278.mp4
│   ├── 122653_411640.mp4
│   ├── 124153_575433.mp4
│   ├── 125653_674950.mp4
│   ├── 131153_834464.mp4
│   ├── 132653_965197.mp4
│   ├── 134154_162221.mp4
│   ├── 135654_387268.mp4
│   ├── 141154_496057.mp4
│   ├── 142654_610316.mp4
│   ├── 144154_768511.mp4
│   ├── 145654_881170.mp4
│   ├── 151155_053120.mp4
│   ├── 152655_205675.mp4
│   ├── 154155_335251.mp4
│   ├── 155655_496740.mp4
│   ├── 161155_660633.mp4
│   ├── 162655_820243.mp4
│   ├── 164155_925789.mp4
│   ├── 165656_101943.mp4
│   ├── 171156_267563.mp4
│   ├── 172656_432281.mp4
│   ├── 174156_536632.mp4
│   ├── 175656_713071.mp4
│   ├── 181156_882640.mp4
│   ├── 182657_098366.mp4
│   ├── 184157_223408.mp4
│   ├── 185657_348706.mp4
│   ├── 191157_516616.mp4
│   ├── 192657_701791.mp4
│   ├── 194157_814351.mp4
│   ├── 195657_981305.mp4
│   ├── 201158_126547.mp4
│   ├── 202658_291990.mp4
│   ├── 204158_405639.mp4
│   ├── 205658_571189.mp4
│   ├── 211158_714285.mp4
│   ├── 212658_901024.mp4
│   ├── 214159_015332.mp4
│   ├── 215659_170284.mp4
│   ├── 221159_347544.mp4
│   ├── 222659_507063.mp4
│   ├── 224159_674498.mp4
│   ├── 225659_791715.mp4
│   ├── 231159_953290.mp4
│   ├── 232700_043492.mp4
│   ├── 234200_225604.mp4
│   └── 235700_436868.mp4
└── 20240808
    ├── 001200_585937.mp4
    ├── 002700_691500.mp4
    ├── 004200_843033.mp4
    ├── 005701_020647.mp4
    ├── 011201_172174.mp4
    ├── 012701_293427.mp4
    ├── 014201_444663.mp4
    ├── 015701_630024.mp4
    ├── 021201_768303.mp4
    ├── 022701_904968.mp4
    ├── 024202_029968.mp4
    ├── 025702_234597.mp4
    ├── 031202_381124.mp4
    ├── 032702_522035.mp4
    ├── 034202_670731.mp4
    ├── 035702_846987.mp4
    ├── 041203_025881.mp4
    ├── 042703_111752.mp4
    ├── 044203_279583.mp4
    ├── 045703_441168.mp4
    ├── 051203_623591.mp4
    ├── 052703_748220.mp4
    ├── 054203_927473.mp4
    ├── 055704_089837.mp4
    ├── 061204_218188.mp4
    ├── 062704_371738.mp4
    ├── 064204_510545.mp4
    ├── 065704_670233.mp4
    ├── 071204_844362.mp4
    └── 072704_968446.mp4

Sometimes it returns 3 records for the 102 files:

[{"start":"2024-08-07T11:11:52.62923Z","duration":80487.961487666},{"start":"2024-08-08T09:33:28.962707Z","duration":9297.306431444},{"start":"2024-08-08T12:08:31.318402Z","duration":431.201}]

That last request took 7 minutes.

Describe how to replicate the issue

My configuration is:

playback: yes
playbackTrustedProxies: ['172.16.0.1/12']

pathDefaults:
  recordPath: ./recordings/%path/%Y%m%d/%H%M%S_%f
  rtspTransport: tcp
  recordFormat: fmp4
  recordSegmentDuration: 15m
  recordDeleteAfter: 0s

paths:
  to_inside.1:
    source: 'rtsp://secret:secret@secret/ISAPI/Streaming/Channels/101'
    record: yes
  to_inside.2:
    source: 'rtsp://secret:secret@secret/ISAPI/Streaming/Channels/102'
  to_outside.1:
    source: 'rtsp://secret:secret@secret/ISAPI/Streaming/Channels/101'
    record: yes
  to_outside.2:
    source: 'rtsp://secret:secret@secret/ISAPI/Streaming/Channels/102'

But I also had problems with the default config.

Disk speed is about 123 MB/s.

Did you attach the server logs?

gate-stack-videoserver-1  | 
gate-stack-videoserver-1  | 2024/08/08 12:08:32 DEB [path to_inside.1] [record] creating segment ./recordings/to_inside.1/20240808/120831_396037.mp4
gate-stack-videoserver-1  | 2024/08/08 12:08:32 DEB [path to_outside.1] [record] creating segment ./recordings/to_outside.1/20240808/120831_318402.mp4
gate-stack-videoserver-1  | 2024/08/08 12:08:34 DEB [playback] [conn 10.10.6.167:50030] [c->s] GET /list?path=to_outside.1 HTTP/1.1
gate-stack-videoserver-1  | Host: cams-msz.esc.ivjh.ru:9996
gate-stack-videoserver-1  | Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
gate-stack-videoserver-1  | Accept-Encoding: gzip, deflate
gate-stack-videoserver-1  | Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7
gate-stack-videoserver-1  | Cache-Control: max-age=0
gate-stack-videoserver-1  | Connection: keep-alive
gate-stack-videoserver-1  | Dnt: 1
gate-stack-videoserver-1  | Upgrade-Insecure-Requests: 1
gate-stack-videoserver-1  | User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/126.0.0.0 Safari/537.36
gate-stack-videoserver-1  | 
....

gate-stack-videoserver-1  | 2024/08/08 12:15:43 DEB [playback] [conn 10.10.6.167:50030] [s->c] HTTP/1.1 200 OK
gate-stack-videoserver-1  | Access-Control-Allow-Credentials: true
gate-stack-videoserver-1  | Access-Control-Allow-Origin: *
gate-stack-videoserver-1  | Content-Type: application/json; charset=utf-8
gate-stack-videoserver-1  | Server: mediamtx
gate-stack-videoserver-1  | 
gate-stack-videoserver-1  | (body of 192 bytes)

Did you attach a network dump?

Not useful.

@cooperbrown9

Bumping because I have the same issue: lots of files in the actual directories, but the files returned don't match. Requests also get stuck often, which makes me think there's a blocking operation going on. Have you had any luck with this?

@redbaron-gt

Same on version 1.9.2. Not all files are visible.

@aler9
Member

aler9 commented Oct 12, 2024

The playback API works with timespans, not files. Adjacent files are grouped into a single timespan. So this is not a bug in any way.

The speed aspect can be improved though.

@aler9 added the enhancement (New feature or request), record and playback labels on Oct 12, 2024
@redbaron-gt

redbaron-gt commented Oct 12, 2024

@aler9 Thank you for the clarification. I have some questions, because I couldn't find any description of timespans in the documentation. I have a 24/7 recording from a webcam and the segments are long. How can I split the timespans into 1-hour chunks, or download 1 hour from a segment with a duration of 4123.987184? I have recordSegmentDuration: 1h in my config.

@cooperbrown9

@redbaron-gt you can specify the start time you want and the duration for it:

http://localhost:9996/get?path=[pathToStream]&start=[start_date]&duration=3600

You don't need mediamtx to split the timespans for you; you can specify any range in that API call.
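
For example (hypothetical host; the start value is the second timespan returned above, truncated to whole seconds, in the RFC3339 format that /list uses), this would download one hour starting there:

http://localhost:9996/get?path=to_outside.1&start=2024-08-08T09:33:28Z&duration=3600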

@cooperbrown9

@aler9 how can you speed it up? I record in 60-second chunks and have a custom recording path. After about 2 days of recordings, it times out before it returns. I solved this by writing a custom function for it, but it would be nice if I could get it working natively. Any idea how to speed it up?

@aler9
Member

aler9 commented Oct 12, 2024

My idea is to improve the server in two different ways:

  • add a cache that stores the duration of recording segments in RAM, avoiding the computationally expensive process of opening every segment to read its duration, which is what causes the delay (see the sketch after this comment).

  • add a start date and an end date to the /list endpoint in order to force pagination, so that only a certain day (or week, or month) is loaded instead of the entire lifespan of recordings.

I don't have a deadline for these points and external contributions are welcome.

In the meantime, to decrease the delay, you can use a longer segment duration in order to reduce the segment count (opening a file on disk causes delay, so just reduce the file count), or move the segments to an SSD.
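
For illustration, a minimal sketch of such an in-memory duration cache, written in Go since mediamtx is a Go project (hypothetical names; this is not mediamtx's actual code):

package durcache

import (
	"sync"
	"time"
)

// DurationCache memoizes per-segment durations so each file is
// opened and parsed at most once.
type DurationCache struct {
	mu    sync.RWMutex
	cache map[string]time.Duration
}

func NewDurationCache() *DurationCache {
	return &DurationCache{cache: make(map[string]time.Duration)}
}

// Duration returns the cached duration of the segment at path,
// computing and storing it on first access. compute stands in for the
// expensive step of opening the file and summing its part durations.
func (c *DurationCache) Duration(path string, compute func(string) (time.Duration, error)) (time.Duration, error) {
	c.mu.RLock()
	d, ok := c.cache[path]
	c.mu.RUnlock()
	if ok {
		return d, nil
	}
	d, err := compute(path)
	if err != nil {
		return 0, err
	}
	c.mu.Lock()
	c.cache[path] = d
	c.mu.Unlock()
	return d, nil
}

A real implementation would also need to evict entries when segments are deleted (e.g. by recordDeleteAfter).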

@cooperbrown9

Ohhh I see, you actually have to read the duration of the segments... I was thinking you could just calculate the distances between the timestamps of the segments (since there has to be a timestamp field in there).
Pagination would be great... honestly, just a way to further query that, like where startDate >= [date], would be awesome.
I might take a look later and try to add a way to query /list better (i.e. add query params to only look between start and end dates).

@bfeist

bfeist commented Nov 11, 2024

Any update on the caching answer? The list?path=[path] call takes over 30 seconds to respond (pegging the CPU that whole time) for a path that contains 72 h of video in 90 s clips, i.e. about 2,880 segments.

@cooperbrown9

cooperbrown9 commented Nov 11, 2024 via email

@bfeist

bfeist commented Nov 12, 2024

Thanks for this hint. I had somehow completely missed that I could hit the 9997 API to get a list of recordings with start times. I've written my own parser that, given a known expected segment length, uses these start times to produce an output similar to the playback API on 9996.
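
A minimal sketch of that approach, assuming segment start times obtained from the control API on 9997 and a fixed, known segment length (hypothetical code, not bfeist's actual parser):

package spans

import "time"

// Timespan mirrors the playback API's /list entries: a start time and
// a total duration.
type Timespan struct {
	Start    time.Time
	Duration time.Duration
}

// Group merges segment start times (assumed sorted ascending) into
// contiguous timespans, treating two segments as adjacent when the
// gap between the end of one and the start of the next is at most
// tolerance.
func Group(starts []time.Time, segLen, tolerance time.Duration) []Timespan {
	var out []Timespan
	for _, s := range starts {
		if n := len(out); n > 0 && s.Sub(out[n-1].Start.Add(out[n-1].Duration)) <= tolerance {
			// adjacent: extend the last span to cover this segment
			out[n-1].Duration = s.Add(segLen).Sub(out[n-1].Start)
		} else {
			out = append(out, Timespan{Start: s, Duration: segLen})
		}
	}
	return out
}

Back-to-back segments merge into one span, mirroring how the playback API groups adjacent files into timespans.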

aler9 added a commit that referenced this issue Jan 1, 2025
Response times of the /list endpoint were slow due to the need of
computing the duration of each segment, that was obtained by summing
the duration of each of its parts.

This is improved by storing the duration of the overall segment in the
header and using that, if available.
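
The idea behind the fix, sketched in Go (hypothetical types; not mediamtx's actual segment format): prefer a total duration stored in the segment header, and fall back to summing part durations only for segments written before the change.

package headerdur

import "time"

// Part is one piece of a recording segment.
type Part struct {
	Duration time.Duration
}

// SegmentHeader is a stand-in for a segment's metadata. New segments
// carry the precomputed total duration; old ones do not.
type SegmentHeader struct {
	Duration *time.Duration // total duration, if written by a newer server
	Parts    []Part
}

// segmentDuration prefers the stored total (fast path) and falls back
// to summing every part (the slow path that made /list slow).
func segmentDuration(h SegmentHeader) time.Duration {
	if h.Duration != nil {
		return *h.Duration
	}
	var sum time.Duration
	for _, p := range h.Parts {
		sum += p.Duration
	}
	return sum
}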
aler9 added two more commits that referenced this issue on Jan 2, 2025, with the same message.
@aler9
Member

aler9 commented Jan 2, 2025

Summing up:

aler9 added several more commits that referenced this issue on Jan 2 and Jan 3, 2025, all with the same message.
@aler9
Member

aler9 commented Jan 3, 2025

This is fixed by #4085, #4096 and #4102. If the /list endpoint is still slow, first wait for new segments to be generated (the performance improvement applies to new segments only); if the problem persists, open a new bug report.

@aler9 aler9 closed this as completed Jan 3, 2025
Contributor

github-actions bot commented Jan 3, 2025

This issue is mentioned in release v1.11.0 🚀
Check out the entire changelog in the release notes.
