There is another blog on the subject of audio band-pass filtering, but rather than review any of that, I prefer to start a fresh look at the subject.
There are serious reasons, based in the laws of physics, why audio sometimes needs to be “confined” to the bandwidth of the medium toward which it is directed.
AM radio is a good starting point, knowing that the bandwidth of an AM channel is 10kHz (in the Americas). For my frequency of 1550kHz, for example, this means that my channel occupies 1545 to 1555kHz, centered on 1550kHz. The channel extends 5kHz below and 5kHz above the carrier, so an actual audio bandwidth of 5kHz fits that channel like a glove.
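A quick sketch of that arithmetic in Python (the function name is mine, purely for illustration):

```python
# Occupied band of an AM signal: the carrier plus one sideband on
# each side, each as wide as the audio fed to the transmitter.
def am_occupied_band(carrier_khz, audio_bw_khz):
    return carrier_khz - audio_bw_khz, carrier_khz + audio_bw_khz

low, high = am_occupied_band(1550, 5)
print(f"{low} kHz to {high} kHz")   # 1545 kHz to 1555 kHz
```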
The exact state of the FCC rules on AM bandwidth has been stretched by the application of IBOC (HD digital), which spreads a station’s footprint significantly, so that it blots out three channels instead of one.
I tried running a 20kHz audio stream on 1550kHz AM, and the splash-over was messing up 1570 AM, a station from 40 miles away that can be heard here, albeit very weakly. I found that I could run a bandwidth of 15kHz, splashing right up to the edge of the 1570 signal, which begins at 1565kHz.
On shortwave the spacing of radio channels is only 5kHz, requiring an audio bandwidth of 2.5kHz.
My newly increased audio stream at 48kbps seemingly allows 24kHz of audio bandwidth, although the sample rate of the stream is 44.1kHz, and I’m not sure how the two key numbers that define a digital streaming audio channel interact with each other.
My low bit rate stream at 16kbps, sample rate 11.025kHz, is a whole other concern.
The audio signal flowing from the Winamp playlist is processed by Stereo-tools, highly professional software with very precise bandwidth control, but it controls only one audio stream, and right now four different band-pass filters are needed in this radio station.
Time to declare a new project: the design of circuits that can be inserted at the input of part 15 transmitters to control their occupied bandwidth as appropriate.
This still leaves the question of needing an extra software band-pass control for online streaming, but the part 15 half of the problem can be solved by outboard circuits.
radio8z says
KB/S and BW
There is a bit more than meets the eye in digital audio. The highest audio frequency which can be faithfully reproduced is one half the sample frequency. So, for a 44.1 kHz sample rate (frequency) the top audio frequency is 22.05 kHz. If the audio contains components greater than this frequency, these are aliased, appearing “folded back” from the sample frequency, and cause distortion. To avoid this, anti-aliasing filters are used, but these have to be applied BEFORE the audio is digitized.
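A small illustration of that fold-back in Python (the function is mine, not from any particular library):

```python
# Fold-back (aliasing): a component above half the sample rate shows up
# reflected about fs/2 after sampling. Illustration only; real systems
# filter such components out before digitizing.
def alias_frequency(f_hz, fs_hz):
    f = f_hz % fs_hz                       # sampling cannot tell f from f mod fs
    return fs_hz - f if f > fs_hz / 2 else f

print(alias_frequency(30_000, 44_100))     # 14100 -> lands in the audible band
```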
Consider bit rate. This is the product of the number of samples per second and the number of bits in each sample. For 44.1 kHz sample rate 16 bit audio the bit rate is 44.1k X 16 = 705.6 kb/s. Double this for a stereo signal.
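That product is simple enough to check in a couple of lines of Python:

```python
# Uncompressed (PCM) bit rate = samples per second x bits per sample x channels.
def pcm_bit_rate(sample_rate_hz, bits_per_sample, channels=1):
    return sample_rate_hz * bits_per_sample * channels

print(pcm_bit_rate(44_100, 16))       # 705600 b/s, i.e. 705.6 kb/s mono
print(pcm_bit_rate(44_100, 16, 2))    # 1411200 b/s for stereo
```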
Now things get complicated. To reduce the bit rate, compression schemes such as MP3 take advantage of psychoacoustic attributes of human hearing, plus blank and redundant information in the audio, which can be eliminated without perceptible effect. By this means the bit rate is drastically reduced.
Because of compression it is not correct to assume that the audio bandwidth is linearly related to the bit rate.
Points to ponder.
Neil
Carl Blare says
If I Understand
What you seem to have said, Radio8Z, is that for proper streaming rates the audio files themselves need to be digitized…
A. To match the streaming rate, i.e., 16kbps, 11.025kHz;
OR
B. Uh, what?
If “A” is the way, then I’d need two sets of audio files, to match the two different stream rates.
Yet, the player-streaming software can be set to send two streams with different settings from one playlist. Is that a bad way to do it?
radio8z says
More kb/s
No, that’s not what I am saying. The streaming software can handle the conversion from whatever kb/s source file to the final streaming rate.
For example, I can convert PCM WAV files to whatever MP3 bit rate I choose. I can also convert a MP3 at a certain bit rate to MP3 at another bit rate. The software handles this by discarding information as the bit rate is lowered. The lower the bit rate the more is discarded and the lower the quality.
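As a hedged sketch of that kind of conversion, here is one way it might be scripted in Python with the pydub library (assuming pydub and an ffmpeg installation are available; the file names are invented for illustration):

```python
from pydub import AudioSegment  # thin wrapper around ffmpeg

# Hypothetical file names, for illustration only.
audio = AudioSegment.from_wav("master.wav")   # lossless PCM source

audio.export("stream_128.mp3", format="mp3", bitrate="128k")
audio.export("stream_16.mp3", format="mp3", bitrate="16k")   # more information discarded
```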
I wasn’t trying to give any warnings or anything like that, rather was trying to give some background information.
As this would apply to AM, you mentioned controlling the AM bandwidth, and this is best done by filtering in the analog domain between the source and the transmitter, but it also can be done in the digital domain with filter software. The point being that setting the digital bit rate to a certain number to establish the analog bandwidth may not be straightforward because of the compression. My approach would be to use a high bit rate and apply a software low-pass filter, and let the filter set the AM bandwidth.
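A minimal sketch of such a software low-pass filter, assuming Python with NumPy and SciPy available (the cutoff and filter order are example choices, not a recommendation):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44_100                  # sample rate of the source audio, Hz
cutoff = 5_000               # 5 kHz low-pass to suit a 10 kHz-wide AM channel

# 8th-order Butterworth low-pass, as second-order sections for numerical stability
sos = butter(8, cutoff, btype="low", fs=fs, output="sos")

audio = np.random.randn(fs)          # stand-in for one second of real audio
filtered = sosfiltfilt(sos, audio)   # zero-phase filtering, no time smear
```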
Neil
Carl Blare says
Bandwidth Bandpass
Reset.
Same subject as above, but taking a different look.
My title, “Bandwidth Bandpass,” serves to clarify the scope of a new discussion which will lead to a fresh question:
“Bandwidth” applies, for example, to the frequency response range of source audio, i.e., the sample rate of a CD is 44.1kHz, which presents a useful frequency range of 20Hz – 20kHz.
“Bandpass” applies, in this case, to the destination toward which the audio is being sent, i.e., if sent to an FM transmitter the “bandpass” will be 20Hz – 20kHz in the case of a mono transmitter, but 20Hz – 15kHz for stereo transmission; the audio bandpass of an AM transmitter is nominally 10kHz wide, frequency response 20Hz – 5kHz; and for digital streaming, bandpasses might be 40kbps (20Hz – 20kHz response); 46kbps (20Hz – 23kHz); 56kbps (20Hz – 28kHz).
The bandwidths and bandpasses mentioned so far are realistic because they take into account the actual range of human hearing, typically 20Hz – 20kHz.
But much hype is given to ultra-wideband sampling and streaming well above the top of human hearing. What’s the point of that?
Does a sample rate of 320kHz or a bit rate of 320kbps achieve anything other than wasting drive space and internet bandwidth?
radio8z says
Sampling
Wish I had a blackboard for this but here goes anyway.
The math behind the sampling theorem is what is known as the Nyquist criterion. This says that the minimum sample frequency needed to reproduce a signal is twice the highest frequency in the signal, but this can be a bit misleading.
You can determine the highest frequency in a signal, which is half the sample frequency, but you cannot necessarily determine the original waveform. If you are sampling at 44.1 kHz and get a 22.05 kHz signal, you know that the frequency is accurate, but you do not know the shape of the original waveform, since all you get is a voltage going point to point up and down at 22.05 kHz (a triangle wave). The only safe assumption is that it was a sine wave, since sine waves have no harmonics which would be above 1/2 the sample frequency, and technically this would not exceed the Nyquist criterion.
But what if the waveform sampled was a square wave at 10 kHz? This is below 1/2 the sample frequency, but its third harmonic (square waves have only odd harmonics) is above 1/2 the sample frequency and will fold or be distorted unless it is filtered out before digitizing. Hence, the square wave at 10 kHz will get through the system as an approximate sine wave.
If the sample rate is raised sufficiently, then the higher harmonics of the waveform can be sampled and the original waveform can be more faithfully reproduced. The triangle wave will become more like a sine wave on playback.
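A quick numeric check of the square-wave point, using the numbers above (illustration only):

```python
fs = 44_100        # sample rate, Hz
f0 = 10_000        # square wave fundamental, Hz

# A square wave contains only odd harmonics: f0, 3*f0, 5*f0, ...
for n in (1, 3, 5):
    f = n * f0
    fate = "passes" if f <= fs / 2 else "aliases unless filtered out first"
    print(f"harmonic {n}: {f} Hz -> {fate}")

# Only the 10 kHz fundamental is below fs/2, which is why the square
# wave comes through the system as an approximate sine wave.
```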
Since humans cannot hear above 20 kHz, it can be argued that it is not necessary to reconstruct waveforms using these frequencies, but is this factual? Most people can tell the difference between the note A played on a piano and the same note played on a trumpet. The fundamental frequency is the same, but the timbre is different because the relative strength of the harmonics differs between the two instruments, though the harmonics above the range of hearing are probably not important for this discrimination.
Audiophiles with whom I have discussed this are convinced that they can hear the difference between analog and digital and between differing digital sample rates. It would be interesting to test this (probably has been done) with a double blind study.
Neil
Rich says
Audio Masking
I’ve read some comments written by Bob Orban stating that audio components meeting certain criteria and produced by distortion of the original waveform cannot be heard in double-blind tests even by golden-eared listeners. They are “masked” in human perception by the other audio components of that waveform.
This is the basis for the use of data-reduction algorithms such as MP3 to reduce the size of digital audio files, compared to linear formats such as WAV.
Bob Orban is a leading designer and manufacturer of broadcast audio processors, and is generally considered to be an authentic audio “guru.”
Rich says
More
‘They are “masked” in human perception by the other audio components of that waveform.’
I should have added that these distortion components all were within the ~20 Hz to ~20 kHz audio spectrum, and would have been audible if they were present as standalone waveforms — not masked by other audio components.
Carl Blare says
What Would Orban Say
Orban is absolutely one of the great masters of electronic audio reproduction.
Given the details offered by Neil Radio8Z and Rich, can I conclude that streaming rates of 46kbps on upward serve no useful purpose for human audio and consume bandwidth with no added benefit?
Rich says
Audio BW
“can I conclude that streaming rates of 46kbps on upward serve no useful purpose for human audio and consume bandwidth with no added benefit?”
You could, but those higher data rates can have some advantages in master recording and production processes.
Carl Blare says
How Odd
No, I don’t mean your comment is odd, Rich. What’s odd is the fact that I added an additional note here hours ago but it’s gone. The NSA must have made another mistake.
What I said was, yes, I agree that producing master recordings in a stable and lossless format is very wise… commonly WAV at 44.1kHz is used for mastering.
WAV files can be re-edited without losing or changing the quality of the original recording.
My earlier questions were focused on streaming in MP3 format, where I continue to wonder what reason there might be for extra-high stream/sample rates, which some stations insist make their sound superior. I think any sense of “improvement” is a phantom of the imagination.
Carl Blare says
Answered Yet Asked Again
Neil, Radio8Z, addressed my main point way back in Post # 2, by saying:
“Consider bit rate. This is the product of the number of samples per second and the number of bits in each sample. For 44.1 kHz sample rate 16 bit audio the bit rate is 44.1k X 16 = 705.6 kb/s. Double this for a stereo signal.
“Now things get complicated. To reduce the bit rate, compression schemes such as MP3 take advantage of psychoacoustic attributes of human hearing, plus blank and redundant information in the audio, which can be eliminated without perceptible effect. By this means the bit rate is drastically reduced.
“Because of compression it is not correct to assume that the audio bandwidth is linearly related to the bit rate.
“Points to ponder.”
Finally I have pondered it, and withdraw from my obvious wrong path. My “bandwidth/bandpass” view of life depended on the (false) notion that bandwidth had a linear relation to bit rate.
I don’t think I’m wrong about anything else.