V-sync, refresh rates, TFTs, etc.

Reading this post in the Skyrim thread, I started wondering why we still have the concept of a refresh rate, at least on TFTs.

I understand why we had them with CRTs: for each video mode, it was how many times per second the monitor could draw an entire frame, and I think I understand the relationship with v-sync.
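To put a rough number on that v-sync relationship (a toy Python sketch with assumed figures, not anything specific from the thread): with v-sync on a fixed-refresh display, a frame that isn't finished by a refresh boundary has to wait for the next one, so the displayed rate snaps down to a divisor of the refresh rate.

```
# Toy sketch: with v-sync on a fixed 60 Hz display, every frame waits for the
# next refresh boundary, so frame times get rounded up to whole refresh intervals.
import math

REFRESH_HZ = 60.0
REFRESH_INTERVAL = 1.0 / REFRESH_HZ            # ~16.7 ms between scanouts

def displayed_fps(render_time_s):
    """Effective frame rate when each frame is held until the next refresh tick."""
    refreshes_per_frame = math.ceil(render_time_s / REFRESH_INTERVAL)
    return 1.0 / (refreshes_per_frame * REFRESH_INTERVAL)

for ms in (10, 17, 20, 34):
    print(f"render {ms:2d} ms -> displayed at {displayed_fps(ms / 1000):.1f} fps")
# render 10 ms -> 60.0 fps
# render 17 ms -> 30.0 fps
# render 20 ms -> 30.0 fps
# render 34 ms -> 20.0 fps
```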

What I'm wondering is why we still have refresh rates on displays that don't share those characteristics. There's no electron gun scanning across the screen; the concept seems to be a legacy of CRTs that will probably hang around for decades after anyone remembers why. Why aren't refresh rates truly flexible?

I can understand upper limits on what a display is capable of: only so much data can be passed through a cable, and individual elements on a screen can only switch so fast. But I think there's a lot to be gained at the lower end of performance, especially as we're not in a utopia of universal (insert high number here) frames-per-second media yet.
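For a sense of the cable limit, here's a back-of-envelope calculation with an assumed 1920x1080, 24-bit, 60 Hz mode:

```
# Back-of-envelope raw pixel data rate for one assumed video mode. Real links
# (DVI/HDMI/DisplayPort) also carry blanking intervals and encoding overhead,
# so the actual line rate is higher than this.
width, height = 1920, 1080
bits_per_pixel = 24
refresh_hz = 60

bits_per_second = width * height * bits_per_pixel * refresh_hz
print(f"{bits_per_second / 1e9:.2f} Gbit/s of raw pixel data")   # ~2.99 Gbit/s
```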

If something is being displayed at 24fps, the display (or part of the display) gets updated at exactly that rate, no pulldown required. If a 3D renderer can't hit the refresh rate or a whole-number divisor of it, who cares: it updates the display at exactly the rate it can manage. How about updating a small section of the display at a high frame rate and the rest at a lower one, to prioritise the data the cable can carry where it's needed? (I wouldn't be surprised if something like this already exists.)
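To make the 24fps case concrete: on a fixed 60 Hz panel the usual trick is a 3:2 cadence, holding successive film frames for alternately three and two refreshes (60 / 24 = 2.5 on average), which is exactly the unevenness a flexible display could avoid. A small sketch of that cadence, using assumed figures:

```
# Sketch of the 3:2 cadence needed to fit 24 fps material onto a fixed 60 Hz panel:
# successive source frames are held for alternately 2 and 3 refreshes, so they sit
# on screen for unequal lengths of time. A display refreshing at exactly 24 Hz
# would hold every frame for the same ~41.7 ms.
REFRESH_HZ = 60
SOURCE_FPS = 24

refreshes_per_frame = []      # how many refreshes each source frame occupies
budget = 0.0
for _ in range(SOURCE_FPS):   # one second of film
    budget += REFRESH_HZ / SOURCE_FPS        # 2.5 refreshes owed per frame
    whole = int(budget)                      # spend the whole refreshes we have
    budget -= whole
    refreshes_per_frame.append(whole)

print(refreshes_per_frame[:8])         # [2, 3, 2, 3, 2, 3, 2, 3] -> the judder
print(sum(refreshes_per_frame))        # 60 refreshes, i.e. exactly one second
```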

Get rid of the concepts of v-sync and refresh rate because everything is essentially synchronised. Am I barking up the wrong tree here?

Scratched wrote:

Get rid of the concepts of v-sync and refresh rate because everything is essentially synchronised. Am I barking up the wrong tree here?

I'd guess this is either down to the amount of bandwidth between your graphics card's output and the monitor, or to the monitor's buffer being smaller than the amount of data required to fill the whole screen (in which case you have to refresh one portion of the display at a time). And if you're doing that, you'd probably engineer the system to be backwards compatible with the way it had to happen on CRT monitors. We're probably past the point of the monitor's buffer being (or having to be) that small, or the bandwidth between GPU and monitor being that restrictive, but we're stuck with the current solution for legacy interoperability reasons. Someone with more monitor knowledge probably has a better/correct explanation.
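For scale on the buffer guess (again assumed numbers, just to show the order of magnitude):

```
# Rough size of a buffer holding one full frame, assuming a 1920x1080 panel
# at 24-bit colour (illustrative only; real panel electronics differ).
width, height = 1920, 1080
bytes_per_pixel = 3

frame_bytes = width * height * bytes_per_pixel
print(f"{frame_bytes / 2**20:.1f} MiB per frame")    # ~5.9 MiB
```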

Having said that, even if the screen buffer in the monitor were very large, the data still has to come out of it in some order, because I'd guess some kind of massively parallel screen-update process would just be too complex/expensive to engineer.
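A toy model of that "some order" point, under the assumption that the panel is driven a row at a time (which is broadly how the row/column drivers on a TFT work, even without an electron gun):

```
# Toy model of serial, row-at-a-time scanout: even with a large monitor-side
# buffer, the panel latches one line of pixels at a time, so the frame goes out
# in a fixed order and the whole-screen update still takes a fixed time.
ROWS, COLS = 1080, 1920
REFRESH_HZ = 60

line_time_us = 1_000_000 / (REFRESH_HZ * ROWS)       # time budget per line
print(f"~{line_time_us:.1f} microseconds per line")  # ~15.4 us

def scan_out(framebuffer, drive_row):
    """Push a frame to the panel one row at a time, top to bottom."""
    for y, row in enumerate(framebuffer):
        drive_row(y, row)        # stand-in for the panel's row-driver hardware

# Example with a dummy 'driver' that just records the order rows arrive in.
frame = [[0] * COLS for _ in range(ROWS)]
order = []
scan_out(frame, lambda y, row: order.append(y))
print(len(order), "rows driven, in order")           # 1080
```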