It appears that both of my problems come down to nearly the same thing: the "worker cond timeout" happens because the first thing my program does is set the sample rate, which takes a whopping 2.58 seconds. This apparently delays rtlsdr_callback long enough that the timeout in pthread_cond_timedwait is hit, causing tcp_worker to bail.
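For reference, here is a rough sketch (mine, not the actual rtl_tcp source; the function name, variable names, and timeout value are assumptions) of the kind of condition wait I believe tcp_worker is doing, and where the "worker cond timeout" bail-out would come from:

    #include <pthread.h>
    #include <time.h>
    #include <errno.h>
    #include <stdio.h>

    static pthread_mutex_t mtx  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;  /* signaled from rtlsdr_callback */

    static int wait_for_samples(int timeout_sec)
    {
        struct timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);   /* timedwait takes an absolute deadline */
        ts.tv_sec += timeout_sec;

        pthread_mutex_lock(&mtx);
        int r = pthread_cond_timedwait(&cond, &mtx, &ts);
        pthread_mutex_unlock(&mtx);

        if (r == ETIMEDOUT) {
            /* the callback never signaled in time, e.g. because the main
               thread was still stuck inside rtlsdr_set_sample_rate */
            fprintf(stderr, "worker cond timeout\n");
            return -1;                        /* worker exits here */
        }
        return 0;
    }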
rtlsdr_set_center_freq takes 1.09 seconds and is unrelated to the select() call, which returns immediately. It has something to do with the rtlsdr library itself (or one of its dependencies). Time to investigate more deeply.
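In case anyone wants to reproduce the measurement, a minimal sketch of how the call can be timed looks something like this (device index 0 and the target frequency are arbitrary choices, not anything specific to my setup):

    #include <rtl-sdr.h>
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        rtlsdr_dev_t *dev = NULL;
        if (rtlsdr_open(&dev, 0) < 0)
            return 1;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        rtlsdr_set_center_freq(dev, 100000000);   /* 100 MHz */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("rtlsdr_set_center_freq took %.3f s\n",
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);

        rtlsdr_close(dev);
        return 0;
    }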
-Scott
On 9/26/2012 4:26 AM, Simeon Miteff wrote:
Hi Scott
On 09/26/12 12:35, Scott Cutler wrote:
Slight correction/clarification to the below: obviously, I am not calling rtlsdr_set_center_freq when using rtl_tcp.exe. Instead, what I see is that I can queue up a bunch of tuning commands, but they only take effect at a very slow rate (1 Hz). The effect is similar to when I used the native libs, where rtlsdr_set_center_freq also took 1 s on the main thread.
I experienced the same maximum 1 Hz tuning rate with rtl_tcp.
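(For anyone wanting to reproduce this: as far as I know, a set-frequency command to rtl_tcp is just a 5-byte packet, a 1-byte command followed by a 4-byte big-endian parameter, with 0x01 meaning "set center frequency". So a minimal hop test looks roughly like the sketch below; the helper names and frequencies are mine.)

    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>
    #include <stdint.h>

    static int send_set_freq(int sock, uint32_t freq_hz)
    {
        unsigned char cmd[5];
        uint32_t param = htonl(freq_hz);      /* parameter goes out big-endian */

        cmd[0] = 0x01;                        /* set center frequency */
        memcpy(&cmd[1], &param, 4);
        return write(sock, cmd, sizeof(cmd)) == sizeof(cmd) ? 0 : -1;
    }

    /* Queue a few retunes back-to-back; with the behaviour described
       above they only take effect at roughly one per second. */
    static void hop(int sock)
    {
        uint32_t freqs[] = { 88000000, 96000000, 104000000 };
        for (size_t i = 0; i < sizeof(freqs) / sizeof(freqs[0]); i++)
            send_set_freq(sock, freqs[i]);
    }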
A quick look at the code shows a select() call in the command_worker thread with a timeout set to 1s, but that doesn't explain the observed behavior, because, as I understand select(), it should return as soon as the UDP socket has data available (the timeout is used to allow the thread to terminate when rtl_tcp shuts down). Nevertheless, my spidey sense tells me this is the place to look...
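To illustrate why I don't think the timeout alone explains it, the wait has roughly this shape (my own sketch with my own names, not the actual command_worker code): the 1 s timeout only bounds how long select() blocks when nothing has arrived, and with data already queued it should return immediately, so by itself it shouldn't throttle tuning to 1 Hz.

    #include <sys/select.h>
    #include <sys/time.h>

    static int wait_for_command(int sock)
    {
        fd_set readfds;
        struct timeval tv;

        for (;;) {
            FD_ZERO(&readfds);
            FD_SET(sock, &readfds);
            tv.tv_sec = 1;            /* re-armed each pass: select() may modify it */
            tv.tv_usec = 0;

            int r = select(sock + 1, &readfds, NULL, NULL, &tv);
            if (r < 0)
                return -1;            /* error */
            if (r == 0)
                continue;             /* timeout: loop so the thread can notice shutdown */
            return 0;                 /* data is ready: read the command now */
        }
    }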
Regards, Simeon.