Admittedly I'm not a filter expert, and my poor vocabulary undoubtedly gave the wrong impression that this is a 'straightforward' problem. It's not, but I'm offering my understanding to shed some light on the situation and take away some of the fear of analog technology. I believe this is a problem that can be solved with some compromise.
A 'good' ADC has an anti-aliasing filter that will do the job. If it doesn't have one built in, then you add one to make it a 'good' ADC. The anti-aliasing filter needs to attenuate unwanted frequencies enough that, when those frequencies are aliased by the ADC, they do not significantly alter the signal.
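To make the folding concrete, here's a small sketch (Python with NumPy; the 30 kHz / 48 kHz numbers are my own made-up example, not anything specific to LaserDisc): a tone above Nyquist, sampled with no anti-aliasing filter in front, shows up at a completely different frequency after sampling.

```python
import numpy as np

fs = 48_000                      # sample rate; Nyquist is 24 kHz
t = np.arange(2048) / fs
tone = np.sin(2 * np.pi * 30_000 * t)   # 30 kHz tone, above Nyquist

# Windowed FFT of the sampled tone; find where the energy actually landed.
spectrum = np.abs(np.fft.rfft(tone * np.hanning(len(t))))
alias_hz = np.fft.rfftfreq(len(t), 1 / fs)[spectrum.argmax()]
print(alias_hz)   # 18000.0 — the 30 kHz tone folded down to fs - 30 kHz
```

Once that 18 kHz ghost is in the data, no digital filter can tell it apart from a real 18 kHz component, which is why the attenuation has to happen before the ADC.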
The design of these filters is out of my scope, but generally we care about the cutoff frequency, the stopband frequency, and the stopband attenuation. The required attenuation and the stopband frequency together determine the rolloff characteristic the filter must have. We design for this depending on the information we are digitizing, how much distortion the signal can tolerate, and how that distortion affects our perception of it.
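As a rough illustration of that rolloff trade-off (a sketch assuming a plain Butterworth response, with numbers I picked out of thin air): the textbook Butterworth attenuation, 10·log10(1 + (f/fc)^2n) dB, can be solved for the minimum order n, and doing so shows why oversampling makes the analog filter so much easier.

```python
import math

def butterworth_order(f_cut, f_stop, atten_db):
    """Minimum Butterworth order giving >= atten_db of attenuation at
    f_stop, for a lowpass with its -3 dB point at f_cut."""
    ratio = f_stop / f_cut
    return math.ceil(math.log10(10 ** (atten_db / 10) - 1)
                     / (2 * math.log10(ratio)))

# 20 kHz passband, 60 dB down at the lowest frequency that would alias
# back into the passband, for two candidate sample rates:
steep = butterworth_order(20e3, 24.1e3, 60)   # fs = 44.1 kHz
gentle = butterworth_order(20e3, 76e3, 60)    # fs = 96 kHz
print(steep, gentle)   # 38 6
```

A 38th-order analog filter is a monster; a 6th-order one is routine. That's the usual argument for sampling fast with a gentle analog filter and doing the sharp filtering digitally afterwards.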
At this point we go to the digital domain. We will likely need to resample the data down toward the Nyquist rate to save space. For this we usually use a brickwall filter. I've always used 'brickwall' to refer to FIR approximations of the unrealizable sinc filter; if I were suggesting an impossible filter I would have said 'sinc'. That said, there are applications where Gaussian and other filters should be used instead.
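To show what I mean by 'brickwall', here's a minimal windowed-sinc sketch in NumPy (the tap count and window choice are my own arbitrary picks): truncate the ideal sinc impulse response to a finite number of taps and apply a window to tame the ripple from the truncation.

```python
import numpy as np

def brickwall_fir(num_taps, cutoff):
    """Windowed-sinc lowpass: a practical 'brickwall' approximation.
    cutoff is normalized (as a fraction of the sample rate)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal (infinite) sinc, truncated
    h *= np.blackman(num_taps)                 # window tames the truncation ripple
    return h / h.sum()                         # unity gain at DC

# E.g. before a 2x downsample: keep everything below a quarter of the old rate.
taps = brickwall_fir(101, 0.25)

# Inspect the response: flat passband, sharp transition, deep stopband.
freqs = np.fft.rfftfreq(4096)
H = np.abs(np.fft.rfft(taps, 4096))
print(20 * np.log10(H[freqs > 0.31].max()))   # stopband attenuation in dB
```

More taps buy a narrower transition band and deeper stopband; the filter only becomes the true sinc in the limit of infinitely many taps, which is exactly the compromise I'm talking about.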
Perhaps you are taking my lingo to mean 'exact' where I mean 'good enough'. If good enough is not good enough, and you want absolutely perfect preservation of an analog medium, then of course there is no practical way of digitizing the signal. At that point the problem is unsolvable.
It's best to clarify what we want to preserve by specifying which of the four options I outlined is most desirable (or something else I didn't consider). They go in order from most difficult to simplest, where 1 is the most difficult.
We can also look at these by dissecting a LaserDisc player:
1 leaves only the platter and laser reader mechanism. We store that 'laser data'.
2 adds part of the demodulation circuitry, but everything is still one signal.
3 adds the filters to separate the video from the audio and also demodulates them.
4 is essentially reading the composite output and applying Rec. 601 sampling to it, plus saving the VBI data.
The only reason I can think of for doing 1 is if we wanted to start adding LaserDisc players to MAME (or do we have some already?). Otherwise my vote is for 3: capture the outputs from a good LaserDisc player. It's a solved problem, and we can read the VBI data in real time like the original hardware does, unless I'm misunderstanding something.