Looking to optimize precision of 'frame.off_start' #59
While it would be possible to get sub-sample accuracy to align each frame, it is not really useful for the case at hand. The absolute position information of frames is set by the sample clock (either the sample count in a file or the tick from the soundcard). That is an integer count at the given sample rate and the only valid time domain in this case. I suggest using a DLL if you need higher precision, or filtering time to align different clock domains. To answer your other question: the start offset is calculated relative to the end of the previous frame.
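For reference, a minimal sketch of how those integer offsets relate to the sample clock when driving libltc's decoder (the 48 kHz / 30 fps figures and the read_audio_block() helper are assumptions for illustration):

```c
#include <stdio.h>
#include <ltc.h>

extern int read_audio_block(ltcsnd_sample_t *buf, size_t n); /* hypothetical input helper */

/* Sketch: pass a running sample count as `posinfo` so that
 * frame.off_start / frame.off_end come back as absolute positions on the
 * soundcard's sample clock. */
int main(void) {
    const int sample_rate = 48000;  /* assumed */
    const int fps = 30;             /* assumed */
    const size_t blocksize = 1024;
    ltcsnd_sample_t buf[1024];
    ltc_off_t posinfo = 0;

    LTCDecoder *decoder = ltc_decoder_create(sample_rate / fps, 32);
    LTCFrameExt frame;

    while (read_audio_block(buf, blocksize)) {
        ltc_decoder_write(decoder, buf, blocksize, posinfo);
        while (ltc_decoder_read(decoder, &frame)) {
            /* off_start is an integer sample index; one count = 1/48000 s */
            printf("frame start: sample %lld (%.6f s)\n",
                   (long long)frame.off_start,
                   frame.off_start / (double)sample_rate);
        }
        posinfo += blocksize;
    }
    ltc_decoder_free(decoder);
    return 0;
}
```

Passing the running sample count as posinfo keeps off_start in the soundcard's time domain, which is the only valid one here.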
|
Would you mind putting labels on the axes of your graph? What exactly am I looking at?
|
Sorry, I should have been clearer / given more detail in my post... The question arises from the work I have been doing here. My objective is to reduce the variance/jitter in the reported 'off_start', which I use to calculate the 'received timestamp' that is sent to Chrony. The plot shows data from Chrony's 'refclock.log' file: the Y axis is 'Cooked Time', plotted against sample number (effectively time). 'LTC' is the name of the RefClock input and is being tracked by Chrony as the data is logged, so there should be no drift in the value. For example:
|
One thought I had in the early hours: this question is specific to a continuous LTC stream, which should be of consistent speed/frequency. Perhaps LTCLib could have a mode where it knows the speed is constant and tries hard to reduce jitter in the timing measurements. My hardware (DigiDesign SyncIO) does not set the 'date' bit within the stream, but others may, and this might be a way to enable such behaviour.
|
This is not in the scope of the library. The purpose of libltc is en/decoding of the biphase-encoded signal. The library itself does not concern itself with timing per se.
|
I have been digging into some test data. One thing I note is that the reported volumes for packets change quite a bit (ranging from -0.454496 to -3.171338 in the course of a minute), which for a 'constant setup' should not really be true. Looking at a previous recording of the 30fps (48kHz) output of my SyncIO, I see that the levels are mostly constant and (as per the SMPTE spec) the transitions are slewed appropriately. Looking at the code for the decoder, it does a few notable things: the hysteresis is good, but it delays the reference point/sample, and because of the (deliberate) slew in the waveform the trigger timing will be affected by the chosen trigger thresholds. Given my observation of the changing volume, could it be that this is causing 'wobble' in the timing? I am away from test hardware at the moment, but I would suggest that the min/max samples be assessed on a rolling average, so that a single glitch does not disrupt the decoding, and that the decay be given a longer time constant (perhaps frequency dependent); a rough sketch follows below. It may also be worthwhile to ensure that the signal is AC balanced.
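A minimal sketch of one possible rolling min/max envelope scheme along those lines (the decay coefficient is an arbitrary assumption, and this is not how libltc's decoder.c currently works):

```c
/* Slow-decay envelope follower for deriving trigger thresholds.
 * The 0.999-per-sample decay is an arbitrary assumption; a real
 * implementation would derive it from the sample rate and the LTC
 * bit rate. Not libltc's actual behaviour. */
typedef struct {
    float env_min, env_max;
    float decay;            /* e.g. 0.999f: long time constant */
} Envelope;

static void envelope_update(Envelope *e, float s) {
    /* decay both envelopes slowly towards the midpoint ... */
    float mid = 0.5f * (e->env_min + e->env_max);
    e->env_max += (1.0f - e->decay) * (mid - e->env_max);
    e->env_min += (1.0f - e->decay) * (mid - e->env_min);
    /* ... while letting genuine extremes pull the envelope outward */
    if (s > e->env_max) e->env_max = s;
    if (s < e->env_min) e->env_min = s;
}

/* Hysteresis thresholds placed symmetrically around the midpoint, so the
 * slewed edges cross them at a consistent phase regardless of level. */
static void envelope_thresholds(const Envelope *e, float *hi, float *lo) {
    float mid  = 0.5f * (e->env_min + e->env_max);
    float span = 0.5f * (e->env_max - e->env_min);
    *hi = mid + 0.1f * span;
    *lo = mid - 0.1f * span;
}
```
|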
Manually interpreting a band-limited signal by looking at individual spaced samples is not very useful in general. I don't know how Chrony comes into play, but it seems you have two different time domains (as opposed to just one, as during A/V post-production). What is your use case, and how is the current precision not sufficient?
|
Ack that LibLTC probably meets most application uses; to align video frames you need to be within roughly ±15 ms. My interest is a DIY 'Tentacle Sync': a box which can align to an LTC stream and then maintain a frequency reference when disconnected. I set myself a target of < ±100 µs whilst connected to the reference, and drift of < ±15 ms for 8 hours after disconnect, which would require on the order of 1 ppm clock accuracy (15 ms over 8 h is about 0.5 ppm). [Plus I am nerding out over stuff that I find interesting.] I am not suggesting that you change code specifically for my particular stream/needs, but it does seem that there's something wrong in the code when the reported volume varies this much for a constant signal. I also understand concerns/fears over changing code and having it break somebody else's application...
|
Since it's a self-clocking signal, the accuracy is ~1/2kHz. With a 2nd-order PLL to track the phase it can be even more accurate; if you use 2 different clock domains you have to do that anyway. We've tested synchronizing with Ardour, using a sound card that is not word-clock synced, recovering the clock using a DLL in software, then re-generating the signal and comparing on an analog scope. The accuracy is around 25 µs (yellow is the original analog signal, blue the re-generated one, with the Gibbs effect visible). Long-term jitter measurements show a difference smaller than 2 audio samples.
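For anyone wanting to try this, a minimal sketch of such a 2nd-order delay-locked loop (following the structure of Adriaensen's "Using a DLL to filter time"; the loop bandwidth here is an arbitrary assumption):

```c
#include <math.h>

/* 2nd-order DLL that filters noisy per-frame timestamps into a smooth
 * local clock estimate. The bandwidth choice is an assumption; tune it
 * for the application. */
typedef struct {
    double t0, t1;   /* filtered time of the current / next frame */
    double nper;     /* filtered frame-period estimate */
    double b, c;     /* loop coefficients */
} DLL;

static void dll_init(DLL *d, double t_now, double period, double bw_hz) {
    double omega = 2.0 * M_PI * bw_hz * period;
    d->b = sqrt(2.0) * omega;   /* Butterworth-style damping, zeta ~ 0.707 */
    d->c = omega * omega;
    d->t0 = t_now;
    d->t1 = t_now + period;
    d->nper = period;
}

/* Call once per decoded LTC frame with the raw measured start time;
 * returns the filtered timestamp for that frame. */
static double dll_update(DLL *d, double t_measured) {
    double e = t_measured - d->t1;   /* phase error of this frame */
    d->t0 = d->t1;
    d->t1 += d->nper + d->b * e;     /* correct phase, advance one period */
    d->nper += d->c * e;             /* correct frequency */
    return d->t0;
}
```
|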
I tried multiple hardware combinations; nothing shows improvement. Your comment about clock domains 'hit a nerve', so now my SyncIO is driving LTC and a 48k super-clock to a DigiDesign 888, which digitizes to SPDIF, fed to a USB sound card. Again the "Refclocks" plot shows multiple (6-ish) bands of points which are 1 sample clock apart... Taking a step back, I wanted to confirm that this is not an artifact of some code I added. The (temporary) patch below grabs each audio block from Jack and, for each timecode packet sent to NTP/Chrony, dumps the block to a file with the 'timecode' and 'off_start' encoded in the filename. The zero-length files are because 'off_start' is larger than 'off_end', meaning that the original sample block has been replaced... so not correct to plot.
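The gist of the dump is something like the following (a sketch only, not the actual patch; the filename scheme and float JACK buffers are assumptions):

```c
#include <stdio.h>
#include <jack/jack.h>
#include <ltc.h>

/* Sketch: write the audio block in which an LTC frame was decoded to a
 * raw file named after the decoded timecode and off_start. Illustrative
 * only; not the actual patch referred to above. */
static void dump_block(const LTCFrameExt *frame,
                       const jack_default_audio_sample_t *buf,
                       jack_nframes_t nframes)
{
    SMPTETimecode stime;
    ltc_frame_to_time(&stime, (LTCFrame *)&frame->ltc, 0);

    char name[128];
    snprintf(name, sizeof(name), "%02d-%02d-%02d-%02d_%lld.raw",
             stime.hours, stime.mins, stime.secs, stime.frame,
             (long long)frame->off_start);

    FILE *f = fopen(name, "wb");
    if (f) {
        fwrite(buf, sizeof(*buf), nframes, f);
        fclose(f);
    }
}
```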
These can be plotted; I use gnuplot as follows:
This plot clearly confirms that the waveforms from the SyncIO are being interpreted slightly offset; the zero on the X axis is/should be the start of the first bit. The value of 'off_start' jitters around by a few samples... I don't yet have an explanation :-( You might also notice that I inverted one of those; it seems that the SyncIO does not drive the "polarity correction bit". I don't know if that affects anything.
|
Improved the debug tools to plot as individual 'strips', as this is easier to read. Usage:
And this gives plots like the one below, where the "off_start" for "03-17-49-49" is several samples too early. Again I am seeing a ~7 sample spread with LTC from my SyncIO. This is WITHOUT the changes made in the patch for Bug 17.
|
I have been working with the sister project LTC-tools to feed an externally generated LTC signal into NTP/Chrony, with pretty good success... Here's some 30NDF analysed by Chrony:
What is clear in the image is the horizontal banding, with the bands ~20 µs apart; that is one period of the 48 kHz sample clock (1/48000 ≈ 20.8 µs). It would seem that the integer nature of 'frame.off_start' is causing some jitter in the reported timing. Even though the timing is precise enough to record the variance, I am getting a swing of ~10 samples in the detected 'off_start'.
I have looked at 'decoder.c' and it's not really clear to me how it 'finds' the start of the frame; my rough mental model is sketched below.
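From what I understand, LTC decoders generally locate frame boundaries by matching the fixed 16-bit sync word (0011 1111 1111 1101, bits 64-79 of every frame). A sketch of that general technique (not libltc's actual decoder.c logic):

```c
#include <stdint.h>
#include <stdbool.h>

/* Bits arrive LSB-first: bit 64 of the sync word first, bit 79 last.
 * Pushing each new bit in at the MSB of a 16-bit shift register, the
 * sync word 0011 1111 1111 1101 appears as 1011 1111 1111 1100. */
#define LTC_SYNC_PATTERN 0xBFFCu

static uint16_t shiftreg = 0;

/* Feed one biphase-decoded bit; returns true when the sync word has just
 * completed, i.e. the next bit is the start of a new frame. */
static bool push_bit(int bit) {
    shiftreg = (uint16_t)((shiftreg >> 1) | (bit ? 0x8000u : 0u));
    return shiftreg == LTC_SYNC_PATTERN;
}
```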
Dumping the biphase info I get:
The last line is 'off_start', 'off_end' and the biphase bit timings. Typically I see all '20.0000'; some frames show different values.
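For reference, those per-bit timings are exposed in libltc's LTCFrameExt; a sketch of dumping them (assumes a decoded 'frame' of type LTCFrameExt is in scope):

```c
/* Each biphase_tics[] entry is the measured duration of one of the 80
 * LTC bits, in audio samples. At 48 kHz and 30 fps each bit should be
 * 48000 / 30 / 80 = 20.0 samples, hence the expected '20.0000' values. */
printf("%lld %lld", (long long)frame.off_start, (long long)frame.off_end);
for (int i = 0; i < LTC_FRAME_BIT_COUNT; i++) {
    printf(" %.4f", frame.biphase_tics[i]);
}
printf("\n");
```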
Does anyone have suggestions on how this might be improved (less variance), or where to look in the code?