Hi there!
I'm using a dongle on a Raspberry Pi with rtl_tcp, and sdrsharp on another
machine (quad core, 3 GB RAM... a good machine!)
The problem occurs at 00:57 in this video:
http://www.youtube.com/watch?v=4E2MPfEzEi8
and at 1:34 in this video:
http://www.youtube.com/watch?v=8snz1wQSRpw
If I stop and restart sdrsharp, it works fine for a few minutes... then the
problem comes back!
No errors appear on the rtl_tcp console on the Raspberry Pi.
I can't tell whether the problem is in rtl_tcp, the Raspberry Pi, or
something else...
Things I've tried:
1) Changed the samplerate
2) Changed the Raspberry Pi (tested TP1/TP2 too... getting 4.98 V!)
3) Changed the dongle
4) Updated sdrsharp
The dongles I tried work fine on a PC.
I'm out of ideas...
Can anyone help me?
PS: Does anyone know of a program that runs on Linux, is similar to
sdrsharp, AND CAN INTERFACE WITH A REMOTE RTL_TCP...
Hi guys,
I'm new to this list (and to radio) so I hope you will please indulge me
if I ask something that is a FAQ. Also, some of these questions are
about the dongle, not the library.
I am working on a personal project to use SDR techniques to decode
aviation navigation signals (VOR). I've got the signal processing mostly
working from recorded signals, but am now trying to integrate my SW with
the radio in real time.
I have a few questions:
- What exactly is offset tuning? How is offset tuning different from
tuning to an offset?
Is this a feature that mostly benefits people who are not going to put
their IF through another mixer? In my application I am already tuning to
an offset, and pulling down an IF wide enough to hold many
channels of interest (VOR channels have 50 kHz spacing). I then use a
software mixer/channelizer to choose the channel I want. Am I correct in
assuming that offset tuning is of no use to me?
- regarding AGC, what is the difference between AGC and auto gain?
That is, the library API has
RTLSDR_API int rtlsdr_set_tuner_gain_mode(rtlsdr_dev_t *dev, int manual);
RTLSDR_API int rtlsdr_set_agc_mode(rtlsdr_dev_t *dev, int on);
I'm guessing that these affect different AGCs: one for the tuner and one
for the RTL device.
What are the benefits and costs of having either or both on?
- regarding rtlsdr_read_async(...) and related functions.
I take it that the library is setting up a ring buffer and calling me
back when it has a new buffer of data for me.
How long do I have to work with this buffer? Obviously, if I want to
work in real-time I need to keep up with the sample rate. But my
application can afford to throw away buffers since it can decode a few
ms of data from one station and then revisit it much later. However, I'd
like to know how long I have until the buffer gets clobbered. I'm
presuming it's stable until all the n-1 other buffers have been hit.
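To make that question concrete: the defensive pattern would be to copy anything needed later out of the buffer before the callback returns. A minimal sketch (`struct grab` and `on_samples` are names I made up; only the callback signature matches librtlsdr's rtlsdr_read_async_cb_t):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Treat the buffer handed to the async callback as valid only for the
 * duration of the call; copy out anything needed later. */
struct grab {
    unsigned char *data;
    uint32_t len;
};

/* Signature matches librtlsdr's rtlsdr_read_async_cb_t. */
static void on_samples(unsigned char *buf, uint32_t len, void *ctx)
{
    struct grab *g = ctx;
    g->data = malloc(len);          /* private copy, outlives the callback */
    if (g->data) {
        memcpy(g->data, buf, len);
        g->len = len;
    }
}
```

With a copy like this, it doesn't matter when the driver recycles its ring buffer; the cost is one memcpy per buffer.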
- generally how fast can the RTL devices tune? I know, this is not an
rtlsdr question per se, but I'm curious. I noticed that when I tune, I
get a delay.
This is a great library and I'm so glad it's out there! I was not
looking forward to plumbing the depths of USB drivers to understand how
to pull data from the RTL dongle! I think rtl-sdr.h could use perhaps a
smidge more documentation. I'd be happy to submit a comments-only patch
if there's interest. :-)
Regards,
Dave Jacobowitz
Hi everyone,
although there are some comparisons between the R820T and the E4000
already [1, 2], I also did some tests with another use-case in mind.
I'm working on a thing similar to RTLSDR-Scanner [3]. I want to
monitor a large part of the spectrum continuously.
So I compared the R820T with the E4000 using RTLSDR-Scanner w/ and w/o
an antenna.
My results are here:
https://docs.google.com/folder/d/0ByDAKwyEiyx_XzZ5ZnpRV1VZWDQ/edit?usp=shar…
There are many more spurs with the E4000 than with the R820T. According to
[1, 2] one would also expect better overall sensitivity from the R820T
compared to the E4000.
However, the GSM900 signals for example seem to be way better with the
E4000 according to the RTLSDR-Scanner. Tuning to a certain channel w/
OsmoSDR Source in GNUradio gives about the same SNR - contrary to the
RTLSDR-Scanner output. Can anyone explain this?
Also, the DVB-T channel at 502MHz is quite weak in the R820T
RTLSDR-Scanner output when compared to the E4000. I had a closer look
at the lower limit of the channel in gnuradio. This can be seen in the
502MHz_*.png pictures. The E4000 produces a nice +20dB step while one
can hardly see the channel in the R820T spectrum. I don't understand
this either. Is this AGC-related? Manually setting a fixed gain
didn't really help, though...
Any explanations?
Thank you!
Best regards,
Hunz
[1] http://steve-m.de/projects/rtl-sdr/tuner_comparison/
[2] http://www.hamradioscience.com/rtl2832u-r820t-vs-rtl2832u-e4000/#more-1852
[3] https://github.com/EarToEarOak/RTLSDR-Scanner
This is with a version of rtl-sdr I got from git last night, on OpenBSD 5.2 (the current release). 5.2 has some pthreads fixes, so I waited until I bought another computer and installed it. Are the crashes related to threads? I don't know, but possibly. It didn't work with OpenBSD 5.0 either.
rtl_fm crashes and uses threads
rtl_adsb crashes and uses threads
rtl_tcp doesn't crash, uses threads, actually stops on ctrl-c
rtl_test doesn't crash, doesn't use threads, won't stop
rtl_eeprom doesn't crash, doesn't use threads, ends normally
I'm not very practiced with gdb, but I tried looking at a couple of core files; here's a run of rtl_fm:
rtl_fm -f 162550000 -N - | play -t raw -r 24k -e signed-integer -b 16 -c 1
-V1 -
-: (raw)
Encoding: Signed PCM
Channels: 1 @ 16-bit
Samplerate: 24000Hz
Replaygain: off
Duration: unknown
In:0.00% 00:00:00.00 [00:00:00.00] Out:0 [ | ] Clip:0
Found 1 device(s):
0: Realtek, RTL2838UHIDIR, SN: 00000013
Using device 0: ezcap USB 2.0 DVB-T/DAB/FM dongle
Found Rafael Micro R820T tuner
Oversampling input by: 42x.
Oversampling output by: 1x.
Buffer size: 8.13ms
Tuned to 162802000 Hz.
Sampling at 1008000 Hz.
Output at 24000 Hz.
Exact sample rate is: 1008000.009613 Hz
Tuner gain set to automatic.
In:0.00% 00:00:00.00 [00:00:00.00] Out:0 [ | ] Clip:0
Done.
Abort (core dumped)
d530# gdb -c rtl_fm.core /usr/local/bin/rtl_fm
GNU gdb 6.3
Copyright 2004 Free Software Foundation, Inc.
GDB is free software, covered by the GNU General Public License, and you are
welcome to change it and/or distribute copies of it under certain
conditions.
Type "show copying" to see the conditions.
There is absolutely no warranty for GDB. Type "show warranty" for details.
This GDB was configured as "i386-unknown-openbsd5.2"...
Core was generated by `rtl_fm'.
Program terminated with signal 6, Aborted.
Reading symbols from /usr/lib/libpthread.so.16.0...done.
Loaded symbols for /usr/lib/libpthread.so.16.0
Reading symbols from /usr/lib/libm.so.7.0...done.
Loaded symbols for /usr/lib/libm.so.7.0
Reading symbols from /usr/local/lib/librtlsdr.so.0.0...done.
Loaded symbols for /usr/local/lib/librtlsdr.so.0.0
Reading symbols from /usr/local/lib/libusb-1.0.so.1.0...done.
Loaded symbols for /usr/local/lib/libusb-1.0.so.1.0
Symbols already loaded for /usr/lib/libpthread.so.16.0
Reading symbols from /usr/lib/libc.so.65.0...done.
Loaded symbols for /usr/lib/libc.so.65.0
Loaded symbols for /usr/libexec/ld.so
#0 0x0abbd98d in kill () from /usr/lib/libc.so.65.0
(gdb) bt
#0 0x0abbd98d in kill () from /usr/lib/libc.so.65.0
#1 0x0ac29545 in abort () at /usr/src/lib/libc/stdlib/abort.c:68
#2 0x005e9298 in pthread_mutex_unlock (mutexp=0x3c003d8c)
at /usr/src/lib/librthread/rthread_sync.c:218
#3 0x1c00266e in full_demod (fm=0xcfa2de5c)
at /usr/src/misc/osmocom/2013-04-15/rtl-sdr/src/rtl_fm.c:583
#4 0x1c0028ff in demod_thread_fn (arg=0xcfa2de5c)
at /usr/src/misc/osmocom/2013-04-15/rtl-sdr/src/rtl_fm.c:641
#5 0x005ebc2e in _rthread_start (v=0x84da4c00)
at /usr/src/lib/librthread/rthread.c:111
#6 0x0aba62e9 in __tfork_thread () from /usr/lib/libc.so.65.0
The backtrace (bt) shows that it dies trying to do a mutex unlock (I think). rtl_tcp also does a mutex unlock and it doesn't crash, so I may be reading it wrong for all I know. I don't know what's causing the signal 6 (SIGABRT) either.
I'd also like to get the -lrt out of the CMake files. OpenBSD doesn't have or use librt; the build works without it. I can edit it out and compile, but every time I run cmake again, I have to edit the files again.
Alan
-----
Radio Astronomy - the ultimate DX
Some of you are probably using multiple dongles with alternating
applications as well.
Regarding the frequency correction, I put a sticker on each dongle and
wrote the ppm value on it.
However, manually setting the value in each application is still annoying.
So I used rtl_eeprom to write the value into the product ID with the
following format: "RTL%+dppm", resulting in something like this: RTL+87ppm
I'm not sure if this is the best way to do it, but this approach works
without changing librtlsdr.
Another option might be storing the value in a non-used eeprom
location and adding get/set functions to librtlsdr.
Is it possible to write arbitrary values to unused EEPROM locations
without breaking the Realtek EEPROM handling? Do you think this
would be a better way?
I'd like to hear your comments on this.
It would be awesome if we could find a solution that many existing
applications could incorporate.
(Of course the ppm value still changes with temperature, but having a
base value is still closer to the truth than 0 ppm, I'd say.
I measured the offset directly after grabbing samples for 20 minutes
at room temperature, and the jitter is <1 ppm.)
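In case anyone wants to experiment with the scheme, reading the value back out of the product string could look like this (a quick sketch; `parse_ppm` is just an illustrative name, not part of librtlsdr):

```c
#include <stdio.h>
#include <string.h>

/* Parse a correction value out of a product string written in the
 * proposed "RTL%+dppm" format, e.g. "RTL+87ppm".
 * Returns 1 on success, 0 if the string doesn't follow the format. */
static int parse_ppm(const char *product, int *ppm)
{
    int end = 0;
    if (sscanf(product, "RTL%d%n", ppm, &end) != 1)
        return 0;
    return strcmp(product + end, "ppm") == 0;   /* require exact suffix */
}
```

Strings that don't match the format (e.g. a stock product ID like "RTL2838UHIDIR") are rejected, so applications could fall back to 0 ppm in that case.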
Best regards,
Hunz
Hi,
I modified my copy of rtl-sdr to be able to store frequency calibration
on the sticks themselves. Maybe this change is generally useful?
Thanks,
andreas
Andreas Seltenreich (2):
rtl_eeprom: Add -x to store exact xtal frequency in Hz.
lib: Try reading xtal calibration from EEPROM.
src/librtlsdr.c | 21 +++++++++++++++++++--
src/rtl_eeprom.c | 17 ++++++++++++++++-
2 files changed, 35 insertions(+), 3 deletions(-)
--
1.7.10.4
Not sure if anyone else has worked on this, but I couldn't find
anything. I updated gr-osmosdr to work with GNU Radio 3.7 (currently
the next branch). You can find it here:
https://github.com/trondeau/gr-osmosdr
I didn't redo the code to be in the 3.7 API style, just made sure that
it could build off gnuradio and run properly.
Tom
Hi all,
I recently got a Raspberry Pi and an RTL2832 USB card and have been
successful in compiling rtl-sdr and running the associated examples, piping
the data up into LabVIEW for analysis with their Demodulation Toolkit. I'm
a little weak on the signal processing math knowledge here, so I was
wondering if someone could explain the math behind the offset tuning code
in rtl_fm's optimal_settings() function.
It looks like the function takes the desired center tuning frequency and
calculates an integer decimation ratio based on a "capture rate" near 1 MHz
and the desired final sampling rate. Then the center frequency is offset by
a quarter of the capture rate, and this and the capture rate are used to
program the radio. Then in the receive callback, the I/Q data is shifted 90
degrees in rotate_90(). At this point, the I/Q data represents (I presume!)
the spectrum around the original desired center frequency and at the
original sample rate.
How do I get, mathematically speaking, from the oversampled I/Q data
centered at the offset frequency to *my* desired center frequency and
sample rate? I'm especially curious as to the effect of the frequency
offset using that particular value (capture rate/4) and what role the
rotation plays in the transformation.
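One identity that may help: multiplying sample n by j^n (the repeating cycle 1, j, -1, -j) shifts the whole spectrum up by fs/4, so a signal captured at an offset of -fs/4 lands back at DC; rotate_90() appears to implement exactly this with swaps and negations on the raw interleaved bytes. A minimal floating-point sketch of the identity (`shift_up_fs4` is an illustrative name, not from rtl_fm, and the sign convention should be checked against the actual code):

```c
#include <complex.h>

/* Multiply sample n by j^n, i.e. by the cycle 1, j, -1, -j.  This
 * shifts the spectrum up by fs/4: a tone sitting at -fs/4 in the
 * captured baseband ends up at DC.  rtl_fm's rotate_90() does the
 * equivalent directly on 8-bit I/Q with swaps and negations, which
 * needs no multiplications at all. */
static void shift_up_fs4(double complex *x, int n)
{
    static const double complex rot[4] = {1, I, -1, -I};
    for (int i = 0; i < n; i++)
        x[i] *= rot[i & 3];
}
```

My understanding of the point of the fs/4 offset capture is that the tuner's DC spike and low-frequency noise end up a quarter of the sample rate away from the wanted signal, and this trick then moves the wanted signal back to DC before the decimation to the final rate.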
Thanks,
Aaron