Logging calls, attempting audio, and burning a CPU
(This is the fourth post in a series. You might want to start from the beginning for context.)
The next thing I thought about was how to store all of these calls. It looked like a job for a database. MySQL was already readily available, so I just came up with a quick schema and created the database and tables. There are calls, frequencies, and talkgroups. It's simple.
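A minimal sketch of that kind of schema, using sqlite3 here as a stand-in for MySQL; the table and column names are my guesses for illustration, not the original schema:

```python
import sqlite3

# Stand-in schema: calls reference a talkgroup and a frequency.
# (sqlite3 used here instead of MySQL so the sketch is self-contained.)
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE talkgroups (tgid INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE frequencies (freq_id INTEGER PRIMARY KEY, freq_hz INTEGER);
CREATE TABLE calls (
    call_id    INTEGER PRIMARY KEY,
    tgid       INTEGER REFERENCES talkgroups(tgid),
    freq_id    INTEGER REFERENCES frequencies(freq_id),
    start_time INTEGER,
    duration   INTEGER
);
""")
```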
Then I headed down the wrong road and started thinking about RPC mechanisms. If the web server is in location A and the radio is in location B, then I'll need some way to sling calls between them, right?
Never mind the fact that this thing wasn't doing anything more than saying "hey, look, someone is talking right now". It wasn't logging that to the database, and it sure wasn't grabbing call audio. Somehow, architecture astronaut madness had grabbed me. This lasted a day or two and then I realized what was going on and stopped cold.
I started back on the actual call recording stuff by setting some well-defined goals for that evening. Goal #1 was to build something which would do "edge detection" on calls. Here's the thing about SmartNet data: it only tells you while someone is talking. When they stop, the messages dry up. You have to notice that and deal with it intelligently. That was no big deal and I got it working quickly.
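The edge detection can be sketched like this: treat each channel-grant message as "this call is still alive", and declare the call over once its updates stop arriving for some timeout. The function names and timeout value here are illustrative, not gr-smartnet's actual API:

```python
CALL_TIMEOUT = 1.0  # seconds of silence before we declare a call over (a guess)

active_calls = {}   # talkgroup -> (frequency, last_seen timestamp)

def on_grant(talkgroup, frequency, now):
    """Handle a channel-grant message from the control channel."""
    if talkgroup not in active_calls:
        # Rising edge: first message for this talkgroup -> call start.
        print("call start: tg %d on %.4f" % (talkgroup, frequency))
    active_calls[talkgroup] = (frequency, now)

def expire_calls(now):
    """Run periodically; the falling edge is the absence of messages."""
    for tg, (freq, last_seen) in list(active_calls.items()):
        if now - last_seen > CALL_TIMEOUT:
            print("call end: tg %d" % tg)
            del active_calls[tg]
```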
Goal #2 for that evening was to get it logging call data to a flat file. I got that going as well and actually decided to have it emit INSERTs which I could then just paste into the MySQL console. That would give me a chance to play with Real Data without having to wire up an actual MySQL client.
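Something along these lines would do the trick; the table and column names are invented for illustration, not the original schema:

```python
# Emit INSERT statements to a flat file instead of wiring up a MySQL
# client; the resulting file can be pasted into the mysql console.
def call_to_insert(talkgroup, frequency, start, duration):
    return ("INSERT INTO calls (tgid, freq, start_time, duration) "
            "VALUES (%d, %d, %d, %d);" % (talkgroup, frequency, start, duration))

with open("calls.sql", "a") as f:
    f.write(call_to_insert(100, 851012500, 1234567890, 6) + "\n")
```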
I should explain something else here -- at this point, all of the code was Python. I am not a fan of Python, mostly because of how I've seen it abused by its acolytes. Since all of the GNU Radio and gr-smartnet stuff was Python, I had no choice in the matter, and this quickly-growing program was also Python.
At any rate, I got that going too, and soon I had real data in my table. There was no audio to go with it, but you could see how it would work.
Goals #1 and #2 were done, so I tried to go for a "stretch" goal to finish my evening. #3 was "get it logging to wav files". I wasn't really looking forward to doing battle with the Python code again, but I had no choice.
gr-smartnet comes with something which records audio, or at least, it would if I had an old-style USRP daughterboard. I managed to port enough of it to the UHD interface to where it would start creating a single WAV file for each talkgroup, along with a TXT file mapping real world time to offsets in that file. It was far from the "one file per call" thing I wanted, but it was progress.
There was a problem, though: the WAVs were empty. I figured maybe I was doing something stupid with tuning and filtering in my attempt to port to UHD. The interface changed considerably, and for someone who had no context with the system as a whole, it was a serious pile of mud. There were entire parameters like "decim" which disappeared when moving to the new interface. I understand how it works now, but at the time it was just adding to the pile of confusion.
I eventually figured out what happened with the whole "decim" thing. Before, you'd pass in a number like "18" and it would just do the math for you. At some point, this changed to where the client program had to divide for itself and pass in the result. That means I'd get to do the "64000000/18" and pass in the ~3.5 million result. 18 was the recommended decim value from gr-smartnet, and I tried to use it.
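The difference looks roughly like this; the setter names are placeholders for illustration, not the actual gr-smartnet/UHD calls:

```python
MASTER_CLOCK = 64_000_000  # USRP ADC rate in Hz
decim = 18                 # recommended decimation from gr-smartnet

# Old style: hand over the decimation factor, the driver does the math.
# usrp.set_decim(decim)

# New UHD style: do the division yourself and pass the sample rate.
samp_rate = MASTER_CLOCK / decim   # roughly 3.56 million samples/sec
# usrp.set_samp_rate(samp_rate)
```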
I say "tried", because what happened next was not pretty. My poor little machine flat out could not handle the load. Not only was it loading the box noticeably, but it was actually falling behind and started dropping data!
I kept pushing it, trying to figure it out. Then this happened:
Yep, Linux's ACPI code actually works! I was booted out of my own machine as it shut down to save itself from overheating. Now it was becoming painfully obvious: this machine just did not have enough horsepower to keep up with this kind of bandwidth. I was not willing to try running its fans full-tilt, either. It was noisy enough already.
I did some research to figure out what was going on, and why it was loading the CPU so badly. We were talking about nearly 4 million samples per second here, so with 16 bits for each of the I and Q channels for each sample, that's about 128 Mbps of data streaming in via USB. Not only do you have to deal with receiving that data, but you have to crunch it down to something useful, and this machine would never be able to do that.
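The arithmetic behind that bandwidth figure is simple enough to check:

```python
# Back-of-the-envelope check of the USB load, using the numbers above.
samp_rate = 4_000_000          # nearly 4 million complex samples per second
bits_per_sample = 16 + 16      # 16-bit I plus 16-bit Q
mbps = samp_rate * bits_per_sample / 1_000_000.0
# about 128 Mbit/s streaming in over USB, before any DSP work even starts
```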
This was a major setback, but I still had things to try.
Next: part five: eMachines, Slackware64 and more audio work.