Reading this post in the Skyrim thread, something I wondered is why we still have the concept of a refresh rate, at least on TFTs.
I understand why we used to have them with CRTs: for each video mode it was how many times per second the monitor could draw an entire frame, and I think I know how that relates to v-sync.
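To make sure I've got the v-sync part right, here's a rough sketch of what I understand happens (assuming a plain double-buffered setup where every frame has to wait for the next v-blank; the function name is just mine):

```python
import math

REFRESH_HZ = 60
REFRESH_INTERVAL = 1.0 / REFRESH_HZ  # one scanout every ~16.7 ms

def vsynced_fps(render_fps):
    """Effective displayed rate when each frame waits for the next v-blank."""
    render_time = 1.0 / render_fps
    # A frame occupies a whole number of refresh intervals, rounded up
    intervals = math.ceil(render_time / REFRESH_INTERVAL)
    return REFRESH_HZ / intervals

print(vsynced_fps(60))  # 60.0 - renderer keeps up with the display
print(vsynced_fps(40))  # 30.0 - misses a v-blank, waits for the next one
print(vsynced_fps(24))  # 20.0 - quantised down to 60/3
```

So a renderer that can manage 40fps gets knocked down to 30 on a 60Hz display, which is exactly the kind of waste I'm on about.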
What I'm wondering is why we still have refresh rates on displays that don't share those characteristics. There are no electron guns scanning the screen; refresh rates seem to be a legacy of CRTs that will probably hang around for decades after anyone remembers why. Why aren't displays truly flexible?
I can understand upper limits on what a display is capable of: only so much data can be passed through a cable, and individual elements on a screen can only switch so fast. But I think there's much to be gained at the lower end, especially seeing as we're not in a utopia of universal (insert high number here) frames-per-second media yet.
If something is being displayed at 24fps, the display (or part of the display) updates at exactly that rate, no pulldown required. If a 3D renderer can't hit the refresh rate or a whole-number divisor of it, who cares; it updates the display at whatever rate it's actually capable of. How about having a small section of the display updated at a high fps and the rest of it lower, to prioritise the cable's bandwidth where it's needed (I wouldn't be surprised if something like this already exists)?
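The pulldown point is easy to see with a bit of arithmetic: 60 isn't a whole multiple of 24, so on a fixed 60Hz display each 24fps film frame has to be held for alternately 2 and 3 refreshes (the classic 3:2 pulldown), which is where the judder comes from. A quick sketch (my own throwaway function, counting how many refresh boundaries fall inside each film frame):

```python
def pulldown_pattern(film_fps=24, display_hz=60, frames=8):
    """How many display refreshes each film frame is held for."""
    # Integer boundary counting: refreshes elapsed by the end of frame f,
    # minus refreshes elapsed by its start
    return [
        (f + 1) * display_hz // film_fps - f * display_hz // film_fps
        for f in range(frames)
    ]

print(pulldown_pattern())  # [2, 3, 2, 3, 2, 3, 2, 3] - uneven holds = judder
```

Over a full second the holds add up to exactly 60 refreshes, but no single frame is shown for its true 1/24s. A display that could simply refresh at 24Hz (or per-region, as above) wouldn't need any of this.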
Get rid of the concepts of v-sync and refresh rate because everything is essentially synchronised. Am I barking up the wrong tree here?