CDs are digital; they're just a physical carrier for digital data, as opposed to vinyl, which is analog and continuous. CD audio is sampled and requires a DAC to create the analog signal from the data, just like streaming or MP3. The DAC does filtering to smooth between the samples. Different DACs will produce different results and sound a little different, unlike vinyl, which skips that step because it's already analog. A lot of people think vinyl is more pure because of this, but I suspect most people buying vinyl are just doing it because they want to be hipsters.
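To make that "smoothing" concrete, here's a minimal Python sketch of the reconstruction step, assuming a simple zero-order hold followed by a lowpass filter. Real DACs typically use more sophisticated oversampling designs, and every number below is just illustrative:

```python
import numpy as np
from scipy import signal

fs = 44_100                              # CD sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)           # 10 ms of samples
samples = np.sin(2 * np.pi * 1000 * t)   # a 1 kHz tone

# Zero-order hold: repeat each sample 8x (the crude "stairstep" output)
held = np.repeat(samples, 8)

# Reconstruction filter: lowpass well below the new Nyquist (8*fs/2)
# smooths the steps back into a continuous-looking waveform.
sos = signal.butter(8, 20_000, fs=8 * fs, output="sos")
smoothed = signal.sosfiltfilt(sos, held)
```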
People are familiar with Xiph, right?
The guys behind the Ogg Vorbis and Opus codecs.
people.xiph.org
"...Sampling theory is often unintuitive without a signal processing background. It's not surprising most people, even brilliant PhDs in other fields, routinely misunderstand it. It's also not surprising many people don't even realize they have it wrong.
The most common misconception is that sampling is fundamentally rough and lossy. A sampled signal is often depicted as a jagged, hard-cornered stair-step facsimile of the original perfectly smooth waveform. If this is how you envision sampling working, you may believe that the faster the sampling rate (and more bits per sample), the finer the stair-step and the closer the approximation will be. The digital signal would sound closer and closer to the original analog signal as sampling rate approaches infinity.
It might appear that a sampled signal represents higher-frequency analog waveforms badly, or that as audio frequency increases, sampled quality falls, frequency response rolls off, or the signal becomes sensitive to input phase.
Looks are deceiving. These beliefs are incorrect!
All signals with content entirely below the Nyquist frequency (half the sampling rate) are captured perfectly and completely by sampling; an infinite sampling rate is not required. Sampling doesn't affect frequency response or phase. The analog signal can be reconstructed losslessly, smoothly, and with the exact timing of the original analog signal.
So the math is ideal, but what of real-world complications? The most notorious is the band-limiting requirement.
Signals with content over the Nyquist frequency must be lowpassed before sampling to avoid aliasing distortion; this analog lowpass is the infamous antialiasing filter.
Antialiasing can't be ideal in practice, but modern techniques bring it very close. ...and with that we come to oversampling..."
In the footnotes:
"[The Sampling Theorem] hasn't been invented to explain how digital audio works, it's the other way around. Digital Audio was invented from the theorem, if you don't believe the theorem then you can't believe in digital audio either!!"
Also see:
wiki.xiph.org
"...So where'd the stairsteps go? It's a trick question; they were never there. Drawing a digital waveform as a stairstep was wrong to begin with.
A stairstep is a continuous-time function. It's jagged, and it's piecewise, but it has a defined value at every point in time. A sampled signal is entirely different. It's discrete-time; it's only got a value right at each instantaneous sample point and it's undefined, there is no value at all, everywhere between. A discrete-time signal is properly drawn as a lollipop graph. The continuous, analog counterpart of a digital signal passes smoothly through each sample point, and that's just as true for high frequencies as it is for low.
The interesting and non-obvious bit is that there's only one bandlimited signal that passes exactly through each sample point; it's a unique solution. If you sample a bandlimited signal and then convert it back, the original input is also the only possible output. A signal that differs even minutely from the original includes frequency content at or beyond Nyquist, breaks the bandlimiting requirement and isn't a valid solution.
So how did everyone get confused and start thinking of digital signals as stairsteps? I can think of two good reasons..."
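The lollipop-graph point is easy to see for yourself. A quick matplotlib sketch with toy numbers of my own choosing: stems for the discrete samples, and the continuous bandlimited counterpart drawn through them:

```python
import numpy as np
import matplotlib.pyplot as plt

fs = 10                                   # deliberately low rate, Nyquist = 5 Hz
n = np.arange(20)
x = np.sin(2 * np.pi * 1.5 * n / fs)      # samples of a 1.5 Hz tone

t = np.linspace(0, 19 / fs, 1000)
smooth = np.sin(2 * np.pi * 1.5 * t)      # the continuous counterpart

plt.stem(n / fs, x)     # lollipops: values exist only at the sample instants
plt.plot(t, smooth)     # passes smoothly through every stem tip, no stairsteps
plt.xlabel("time (s)")
plt.show()
```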
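And the uniqueness claim: any other signal through the same points must contain content at or above Nyquist. The classic demonstration is an alias pair. At fs = 8 kHz, a 1 kHz tone and a sign-flipped 7 kHz tone hit exactly the same sample values, and only the 1 kHz one satisfies the bandlimiting requirement:

```python
import numpy as np

fs = 8_000
n = np.arange(64)
low = np.sin(2 * np.pi * 1_000 * n / fs)     # valid: below Nyquist (4 kHz)
high = -np.sin(2 * np.pi * 7_000 * n / fs)   # 7 kHz: breaks the bandlimit

# Identical samples, so sampling + reconstruction can only ever return
# the bandlimited one: the 1 kHz tone.
print(np.allclose(low, high))                # True
```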