RTL-SDR sample bit depth

John Ackermann N8UR jra at febo.com
Sun Mar 4 15:15:05 UTC 2018


As a follow-up to this, I've been doing testing with an RTL-SDR.com v3 
dongle and a GNU Radio flowgraph.  I set the sample rate to 1.536 Msps and 
the flowgraph decimates by 8 to yield 192 ksps, which is displayed on an 
FFT without further processing.  Gain control is set to manual, and I step 
the RF gain between 0 and 50 dB for testing.  I use a precision signal 
generator (HP 8642A calibrated with an HP power meter) as a signal 
source and test at 144 MHz.
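
For reference, here's a rough Python sketch of that receive chain using the 
gr-osmosdr source under GNU Radio 3.8+.  The filter parameters and the file 
sink are placeholders of my own choosing, not the exact blocks I used (my 
real flowgraph feeds an FFT display instead):

#!/usr/bin/env python3
# Rough sketch: RTL-SDR at 1.536 Msps, decimate by 8 to 192 ksps.
from gnuradio import gr, blocks, filter
from gnuradio.filter import firdes
import osmosdr

class RtlDecimate(gr.top_block):
    def __init__(self):
        gr.top_block.__init__(self, "RTL-SDR decimate-by-8")
        samp_rate = 1.536e6          # rate requested from the RTL2832U
        decim = 8                    # 1.536 Msps -> 192 ksps

        src = osmosdr.source(args="rtl=0")
        src.set_sample_rate(samp_rate)
        src.set_center_freq(144e6)   # test frequency
        src.set_gain_mode(False, 0)  # manual gain, AGC off
        src.set_gain(20, 0)          # RF gain under test, dB

        taps = firdes.low_pass(1.0, samp_rate, 80e3, 20e3)
        decimator = filter.fir_filter_ccf(decim, taps)

        # Capture ~5 s of decimated samples instead of driving an FFT sink.
        head = blocks.head(gr.sizeof_gr_complex, int(192e3 * 5))
        sink = blocks.file_sink(gr.sizeof_gr_complex, "rtl_192k.cf32")

        self.connect(src, decimator, head, sink)

if __name__ == "__main__":
    RtlDecimate().run()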

For an approximation of the minimum discernible signal (MDS), I adjust 
the signal generator amplitude until I see a noticeable signal that is 
consistently just above the noise.  To find the overload point, I 
increase the amplitude until I see the first spur appear -- it's a very 
sudden transition, with a 1 dB amplitude increase triggering spurs many 
dB above the noise.

Based on the assumptions in my earlier message, I would expect to see a 
dynamic range of about 59 dB (~50 dB from 8 bits at 1.536 Msps, plus ~9 
dB of processing gain from the decimate-by-8).
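
For what it's worth, the arithmetic behind that estimate is just the ideal-ADC 
rule of thumb (6.02 dB per bit + 1.76 dB) plus 10*log10 of the decimation 
factor:

import math

bits = 8                               # wire sample depth
decim = 8                              # 1.536 Msps -> 192 ksps

adc_snr_db = 6.02 * bits + 1.76        # ~49.9 dB for an ideal 8-bit ADC
proc_gain_db = 10 * math.log10(decim)  # ~9.0 dB from decimate-by-8

print(adc_snr_db + proc_gain_db)       # ~58.9 dB expected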

However, I'm seeing closer to 100 dB dynamic range -- for example, with 
the RF gain set to 20 dB, the MDS is -124 dBm and the overload level is 
-25 dBm.  This tracks for various settings of the RF gain, though there 
seems to be a few dB of compression with gains above 30 dB.

I'm trying to understand this discrepancy, which could be the result of:

1.  Some AGC operation or gain compression in the R820T2 tuner chip;

2.  My assumptions about the internal sample rate, bit depth, or 
decimation being wrong; or

3.  My math being wrong (for example, is there a 10*log10 vs. 20*log10 
error in my dB calculations, or is the dB scaling in the FFT showing 
voltage rather than power?).
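
On point 3, the kind of factor-of-two error I have in mind would look like 
this -- treating a power ratio as if it were a voltage ratio doubles every 
dB figure:

import math

power_ratio = 1e6                     # e.g. overload power / MDS power
volt_ratio = math.sqrt(power_ratio)

print(10 * math.log10(power_ratio))   # 60.0 dB, correct for power
print(20 * math.log10(volt_ratio))    # 60.0 dB, same thing via voltage
print(20 * math.log10(power_ratio))   # 120.0 dB, the doubling error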

Any thoughts would be appreciated.

Thanks,
John
----
On 03/02/2018 09:46 AM, John Ackermann N8UR wrote:
> Hi --
> 
> I'm trying to understand the sampling and decimation structure of the 
> RTL-SDR dongle, to work out the effective number of bits after decimation.
> 
> From Google and looking at the librtlsdr code (which is beyond my 
> programming depth), I think I've figured out the following.  I'd much 
> appreciate verification/correction/amplification.
> 
> 1.  The actual ADC in the RTL-2832U is a single-bit sigma-delta running at 
> some very high rate.
> 
> 2.  This is converted to 28.8 Msps at 8-bit depth.
> 
> 3.  The 28.8 Msps stream is downsampled to the rate requested by the user 
> and sent over the USB bus as 8-bit unsigned IQ pairs.
> 
> Based on that, I *think*:
> 
> a.  Any processing gain in the downsample from 28.8 Msps/8 bits within 
> the chip is lost because the wire samples are limited to 8 bits.  The 
> output has 8 bits of dynamic range regardless of the sample rate set.
> 
> b.  THEREFORE... for best dynamic range one wants to set the RTL-2832U 
> to the highest sample rate that avoids lost samples, and do further 
> decimation in the host processor, where the added bits aren't lost on 
> the wire.
> 
> I'd appreciate any verification or correction of that analysis.
> 
> Thanks,
> John
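
P.S.  To make point (b) above concrete, here's a small numpy experiment I 
put together.  The signal levels and dither are made-up illustrative values, 
not measurements of the dongle, but it shows the idea: decimating the 8-bit 
wire samples in the host keeps roughly the 10*log10(8) ~ 9 dB of processing 
gain, while decimating first and then re-quantizing to 8 bits (as would 
happen if the chip decimated ahead of the USB link) re-imposes the 8-bit 
quantization floor:

import numpy as np

rng = np.random.default_rng(0)
fs, decim, f_tone = 1.536e6, 8, 10e3
n = decim * 200000

t = np.arange(n) / fs
tone = 0.01 * np.exp(2j * np.pi * f_tone * t)     # weak test tone
noise = rng.normal(0, 0.004, n) + 1j * rng.normal(0, 0.004, n)  # front-end noise, acts as dither
x = tone + noise

def quantize8(sig):
    # round to a signed 8-bit grid spanning +/-1 full scale
    q = lambda v: np.round(np.clip(v, -1, 1) * 127) / 127
    return q(sig.real) + 1j * q(sig.imag)

def boxcar_decimate(sig, d):
    # crude decimator: average non-overlapping blocks of d samples
    return sig[: len(sig) // d * d].reshape(-1, d).mean(axis=1)

def snr_db(measured, reference):
    err = measured - reference
    return 10 * np.log10(np.mean(np.abs(reference) ** 2) / np.mean(np.abs(err) ** 2))

wire = quantize8(x)                          # 8-bit samples at 1.536 Msps (what the USB link carries)
host = boxcar_decimate(wire, decim)          # decimate in the host, keeping float precision
chip = quantize8(boxcar_decimate(x, decim))  # decimate first, then squeeze back to 8 bits

ref = boxcar_decimate(tone, decim)
print("8-bit at full rate   :", round(snr_db(wire, tone), 1), "dB")
print("decimated in host    :", round(snr_db(host, ref), 1), "dB  (roughly 9 dB better)")
print("decimated, then 8-bit:", round(snr_db(chip, ref), 1), "dB  (quantization floor back)")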


