Unnecessary trips through an analog domain
I've railed against that "scripter mentality" that leads to programs which are little more than wrappers for other programs. While there are times when it's the only way to go for whatever reason, I think people use it as a way to be lazy far too often. The alternative is drilling down into the "inner" program, finding out which library it's using to get the job done, and then using that library yourself directly.
One of my biggest complaints about such schemes is the unnecessary parsing which happens. Odds are, that "inner" program has been written to yield results in a human-readable format. It might do some formatting tricks to dump things on stdout as nice ASCII text. When someone comes along and decides to wrap such a program, they then have to write the inverse of that formatting stuff in order to parse it and read it back in.
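As a toy illustration of that round trip (my example, not anything from the original situation): suppose you want free disk space. The wrapper approach runs `df` and un-formats its pretty-printed table; the direct approach asks the kernel through the same statvfs-style interface `df` itself uses, and no parsing ever happens.

```python
import shutil
import subprocess

# Wrapper approach: run `df`, then write the inverse of its output
# formatting to get the numbers back out of the human-readable table.
out = subprocess.run(["df", "-k", "/"], capture_output=True, text=True).stdout
fields = out.splitlines()[1].split()   # fragile: header names, column
avail_kb_parsed = int(fields[3])       # order, and units can all vary

# Direct approach: skip the trip through ASCII entirely and ask the
# kernel yourself via the same interface df uses under the hood.
total, used, free = shutil.disk_usage("/")
print(total > 0, avail_kb_parsed >= 0)
```

The parsed number and the direct one won't even necessarily agree (e.g. reserved blocks), which is its own argument for going straight to the source.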
Recently, I encountered a similar situation. There's been a whole bunch of activity in the software defined radio space due to the discovery of a TV tuner which could be coaxed into yielding raw spectrum data. They're collectively called "rtlsdr" devices, and they tend to be little USB sticks which look like thumb drives.
Anyway, someone was trying to figure out how to get two sound cards to work in their system. I couldn't make sense of this. Why would you want two sound cards unless you were planning on doing something really strange? Further reading revealed that they were trying to decode some digital mode like P25 and couldn't do it with what they already had. Their plan was to loop the audio (!) out of one card back into another and then run a second decoder program against that signal.
This just blew my mind. You have a signal which is entirely in the digital domain by virtue of arriving from your USB bus. It's just a raw stream of data, and you can basically do anything you want with it. In this case, they're running some program to filter it down to some chunk of that stream and then demodulate it to audio, which is then written to the sound card. After passing through the sound card's DACs, now it's sitting at that 1/8" line-out or headphone jack as an analog signal.
The plan then was to take that signal, route it over a short length of wire, and send it BACK into a second sound card. There it would go through the ADCs and turn back into a similar (but not identical) digital signal. Then they intended to hand this to something like DSD in an attempt to make it decode as P25 audio and hopefully yield yet another audio stream. This would presumably go out through the second card, this time to speakers.
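To put a number on "similar (but not identical)": even a perfectly noiseless loopback can't hand back the same samples, because the second card re-quantizes whatever voltage shows up at its input. A minimal sketch with numpy (my model, ignoring analog noise, clock drift, and level mismatch entirely, and assuming 16-bit converters):

```python
import numpy as np

rng = np.random.default_rng(0)
audio = rng.uniform(-1.0, 1.0, 48000)  # one second of pretend demodulated audio

# A line-out -> line-in loop is, at absolute best, a requantization to the
# second card's sample grid. Model just that step: snap every sample to
# the nearest 16-bit level.
scale = 32767
looped = np.round(audio * scale) / scale

err = np.max(np.abs(looped - audio))
print(err > 0)  # close, but no longer the same signal
```

And that's before the real-world parts: the two cards' clocks aren't locked, the levels won't match, and every stage adds its own noise.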
This is the kind of stuff which happens when you can't or won't access what's actually going on inside a system. If you were to sketch this out, you'd see that you already had exactly what DSD wanted somewhere inside that first program which is decoding the raw I/Q data. It would be whatever it has right before it writes to your first sound card.
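That point in the chain is just a buffer of audio samples, and getting there from I/Q never requires leaving the digital domain. Here's a self-contained sketch of the demodulation step using narrowband FM quadrature detection in numpy, with a synthetic tone standing in for the filtered chunk of the rtlsdr stream (all names, rates, and parameters here are mine for illustration, not from any particular decoder):

```python
import numpy as np

fs = 48000                         # sample rate, Hz
dev = 5000                         # frequency deviation, Hz
t = np.arange(fs) / fs
msg = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone as the "voice" signal

# FM-modulate the message onto a complex baseband carrier: this stands in
# for the I/Q samples the first program has after filtering.
phase = 2 * np.pi * dev * np.cumsum(msg) / fs
iq = np.exp(1j * phase)

# Quadrature demodulation: the angle of each sample times the conjugate
# of the previous one is the phase step, i.e. instantaneous frequency.
audio = np.angle(iq[1:] * np.conj(iq[:-1])) * fs / (2 * np.pi * dev)

# The demodulated buffer reproduces the original message -- digitally.
print(np.max(np.abs(audio - msg[1:])) < 1e-6)
```

That `audio` array is exactly the kind of thing DSD wants as input; handing it over in memory (or down a pipe) skips both sound cards entirely.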
Systems like this always make me groan. It's hard enough to get a good lock on some of these signals, and then the decoders are sometimes big hacks, and you're going to send it through an additional D-A and A-D stage? I file that under "how to make a nontrivial task even more difficult". There has to be a better way.
Ignorance is only a temporary excuse. I didn't know the first thing about GNU Radio and things of that nature this time last year. That didn't stop me from going on to build something which automatically decodes a control channel and records every call in parallel, and then giving it a web interface.
There are no audio loopback cables in my system. In fact, there are no audio cables at all until one of the calls makes it to your speakers or headphones. It goes straight from an I/Q data firehose to an MP3 on disk. It doesn't even stop over as a WAV file along the way.
Sure, early on, it did write WAV files, but as the joke goes, "I got better".