Calibrating frequency of SDR radios


Dirk
 

I have some SDRs and always do a frequency calibration once with GPSDO for getting a PPM deviation value.
Some devices can be calibrated directly, for others I can use the PPM deviation in the used SDR player program (as Shift value or so).

Now that my "SDR fleet" has grown and more devices want to be calibrated, I have found a general problem (?) with frequency calibration:
If I use different SDR players (SDR Console, SDR#, HDSDR ...) to calibrate a single SDR, I get different results.
Say I do the 1st calibration with SDR# and end up confident that the PPM deviation is, let's say, 0.12 (done correctly, i.e. warming up, waiting for a steady state, using the same mode (CW) and bandwidth ...).
Then I check the result with, let's say, HDSDR, and I get a clearly different PPM deviation (e.g. 0.29), even though I do it directly after the 1st calibration and without touching the hardware setup. And possibly I would get a 3rd value from the next SDR player.

Am I doing/expecting something wrong?
I thought before that ONE PPM deviation value would be enough for each SDR. But now: do I need PPM values for each SDR, and also for each SDR player I use with that SDR?


jdow
 

The ideal answer is that each front end requires one PPM value, which each piece of SDR software must explicitly load into the front end each time it runs. This PPM value should then be valid at all frequencies the device handles.
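To make the "one PPM value per front end" idea concrete, here is a minimal sketch (my own illustration, not code from any of the programs mentioned) of how a PPM correction scales with frequency. Because the error comes from one reference oscillator, it is a fixed *fraction* of the tuned frequency, so a single value covers the whole tuning range:

```python
# Hypothetical sketch: applying a single PPM correction to a front end.
# A PPM offset scales linearly with frequency, so one value suffices for
# the whole range if the device uses a single reference oscillator.

def corrected_frequency(nominal_hz: float, ppm: float) -> float:
    """Frequency the hardware actually produces for a nominal setting."""
    return nominal_hz * (1.0 + ppm / 1e6)

def tune_request(target_hz: float, ppm: float) -> float:
    """Nominal value to request so the hardware lands on target_hz."""
    return target_hz / (1.0 + ppm / 1e6)

# Example: the +0.29 ppm deviation from the question, at 145 MHz,
# is an offset of roughly 42 Hz.
error_hz = corrected_frequency(145e6, 0.29) - 145e6
print(round(error_hz))  # 42
```

This also shows why two programs reporting 0.12 vs. 0.29 ppm is a real discrepancy: at VHF it is a difference of tens of hertz.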

I don't have time to unwind all the ways this can go wrong. Here are some of them.

Dropped USB packets are a big reason. (Recent Windows 10 updates seem to have slightly borked USB, so now I get a modest number of dropped packets at 128 Msps.)

Multiple reference oscillators within the system make calibration a pain or a sad compromise. (e.g. IC756 Pro-II with a separate clock oscillator for the DSP portion of the receiver.)

Frequency synthesizer step size can make ideal calibration impossible. I think I am seeing that on the RX888 MKII here. Front ends MUST learn to report the actual sample rate and actual tuned frequency of their synthesizer(s). A PPM change of 0.01 ppm with a synthesizer whose step size is 0.158 ppm (to pick a number at random) will not produce the expected results.

The SDR software can work this out itself, but for calibration the user will have to work a little. The SDR needs to be able to adjust its internal notion of the real (rather than the fictional official) sample rate and know the PPB value of this correction. This correction is then applied to the frequency settings inside the system. Then an offset value may be needed due to the front end synthesizer. This requires thinking to make it feasible for a user to make it all ridiculously accurate. I suspect "good enough" is what the usual result will be.

Computer value properties rather obviate the concept of a "1 PPB" correction when a typical "float" is only good to maybe 100 PPB. And switching over to the perhaps 1/10^14-ish "double" would kill the efficiency of the receiver's DSP work. Expect all this work to be done only if some SDR developer suddenly takes a mental hit and decides that 1 PPB or better tuning accuracy and precision would be a challenge more fun than all the other tantalizing challenges still to explore.
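The step-size and float-precision points above can be checked with a little arithmetic. The sketch below (my own illustration, reusing the made-up 0.158 ppm step from the post) shows a 0.01 ppm request rounding away to nothing, and single-precision floats swallowing a 1 PPB nudge at 100 MHz:

```python
import numpy as np

def quantize_ppm(requested_ppm: float, step_ppm: float) -> float:
    """A synthesizer can only realize corrections in multiples of its step."""
    return round(requested_ppm / step_ppm) * step_ppm

# A 0.01 ppm tweak against a 0.158 ppm step rounds to zero: nothing changes.
print(quantize_ppm(0.01, 0.158))  # 0.0
print(quantize_ppm(0.20, 0.158))  # one step, ~0.158

# Single precision cannot even represent a 1 PPB nudge at 100 MHz:
# the float32 spacing near 1e8 is 8 Hz, so the 0.1 Hz change is lost.
f = np.float32(100e6)
print(np.float32(100e6 * (1 + 1e-9)) - f)  # 0.0
```

This is the "float is only good to maybe 100 PPB" point in numbers: a 24-bit mantissa gives a relative resolution of about 1.2e-7, i.e. roughly 100 PPB.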

TANSTAAFL

{o.o}


On 20220924 06:42:48, Dirk wrote:

Am I doing/expecting something wrong?
I thought before that ONE PPM deviation value would be enough for each SDR. But now: do I need PPM values for each SDR, and also for each SDR player I use with that SDR?


Dirk
 

Thanks for explaining!
Dropped USB packets should not make a difference between SDR players when I do the calibration on the same PC and the same OS. Correct?


jdow
 

Some SDR programs may load the system differently than others, and it's REALLY difficult to sort that out. SDRs are not really well suited to multi-threading, a usual technique for offloading processors. SDRC, with its multiple radios working off one front end, leverages this to some extent. But, really, a simple SDR is a straight-through process with maybe two meaningful threads: input through FFT to display, and input through DSP to audio. Moving data from one CPU thread to another means moving the contents of one processor's cache to another, which is very expensive in time, so the multiple threads in an SDR tend to process one after another sequentially on one thread. I guess I am saying program architecture can make a big difference.
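The two-thread shape described above can be sketched in miniature. This is a toy illustration, not real SDR code: one thread stands in for USB acquisition producing sample buffers, the other for the DSP path consuming them, with a bounded queue as the hand-off point whose cost is the subject of the paragraph above:

```python
# Toy sketch of the straight-through SDR pipeline: an input thread feeds
# buffers to a processing thread through a bounded queue. The queue hand-off
# is exactly where the cache-transfer cost mentioned above is paid.
import queue
import threading

buf_q: "queue.Queue" = queue.Queue(maxsize=4)
results = []

def acquire():
    """Stand-in for the USB input thread producing sample buffers."""
    for i in range(3):
        buf_q.put([i] * 4)   # pretend IQ buffer
    buf_q.put(None)          # sentinel: end of stream

def process():
    """Stand-in for the DSP/FFT thread consuming buffers."""
    while (buf := buf_q.get()) is not None:
        results.append(sum(buf))  # pretend demodulation work

t1 = threading.Thread(target=acquire)
t2 = threading.Thread(target=process)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [0, 4, 8]
```

In a real receiver the buffers are large IQ blocks, and whether the two stages actually run in parallel (or serialize on one core, as jdow describes) depends on buffer sizes, cache behaviour, and the rest of the system load.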

In my case, I bet it's the fact that I have a LOT going on at the same time on this machine. (I have an SDRSharp instance running continuously, listening to KUSC for background music. That would be www.kusc.org. I like my music as full meals rather than short 45 RPM record sides. {^_-} And I keep a poo-pot load of browser tabs and windows open, and often they eat a chunk of processor, too. I tell myself this is stupid. But - it's just me.)

{^_^}

On 20220924 23:46:26, Dirk wrote:

Thanks for explaining!
Dropped USB packets should not make a difference between SDR players when I do the calibration on the same PC and the same OS. Correct?


la1rq@...
 

Hi, those of us who have coverage from T-DAB (and possibly DVB-T) have an excellent frequency standard available.

The SDR software qirx has implemented automatic calibration by locking to the centre carrier of T-DAB.

It should be possible to implement this in SDRC as well, shouldn't it?


73 de
Espen
LA1RQ


Simon Brown
 

Hi,

DAB carriers are spaced 1 kHz apart.
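A quick back-of-the-envelope (my own arithmetic, not from the thread) shows why the 1 kHz subcarrier spacing matters for calibration: at DAB Band III frequencies, locking to a neighbouring carrier by mistake would be a multi-ppm error, far larger than the sub-ppm deviations being calibrated. Any auto-calibration scheme therefore has to resolve which carrier it has locked to:

```python
# What a 1 kHz carrier ambiguity means in PPM terms at a DAB Band III
# frequency (~220 MHz used here as an assumed, illustrative value).

def hz_to_ppm(error_hz: float, carrier_hz: float) -> float:
    """Convert an absolute frequency error to parts per million."""
    return error_hz / carrier_hz * 1e6

print(round(hz_to_ppm(1e3, 220e6), 2))  # 4.55 ppm per 1 kHz carrier step
```

So a receiver that is already within roughly +/-2 ppm can lock unambiguously, but the carrier spacing itself sets that limit.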

From: main@SDR-Radio.groups.io <main@SDR-Radio.groups.io> on behalf of la1rq via groups.io <la1rq@...>
Sent: 26 September 2022 07:07
To: main@SDR-Radio.groups.io <main@SDR-Radio.groups.io>
Subject: Re: [SDR-Radio] Calibrating frequency of SDR radios
 

Hi, those of us who have coverage from T-DAB (and possibly DVB-T) have an excellent frequency standard available.

The SDR software qirx has implemented automatic calibration by locking to the centre carrier of T-DAB.

It should be possible to implement this in SDRC as well, shouldn't it?


73 de
Espen
LA1RQ


--
- + - + -
Please use https://forum.sdr-radio.com:4499/ when posting questions or problems.