It is clear that we could do fixed foveated rendering, even on the Vive & Rift, if we wanted to, which is why I said "unless it is to reduce the GPU workload".
Granted, these two HMDs would not lend themselves to it as much as the 8K due to their narrower FoV, but still.
In theory that would be possible today; didn't NVIDIA say something a year or so ago about allowing such rendering with fixed high-res and low-res areas? But then of course you have the issue that if you turn your eyeballs all the way to the side, you will not see a relatively sharp picture where it should be sharp.
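To get a feel for why fixed high-res/low-res areas reduce GPU workload, here is a back-of-the-envelope sketch. It assumes a 3x3 split with a full-res center cell and a reduced linear scale for the outer ring; the grid proportions and scale factor are illustrative assumptions, not any vendor's actual settings.

```python
# Rough estimate of the pixel-shading savings from fixed multi-resolution
# rendering: the viewport is split into a 3x3 grid and the eight outer cells
# are rendered at a reduced scale. All numbers are illustrative assumptions.

def shaded_fraction(center_w=0.6, center_h=0.6, outer_scale=0.5):
    """Fraction of full-res pixel work left after scaling the outer ring.

    center_w/center_h: fraction of the viewport kept at full resolution.
    outer_scale: linear resolution scale for the outer cells, so their
                 pixel cost drops by outer_scale**2.
    """
    center_area = center_w * center_h   # rendered at 1:1
    outer_area = 1.0 - center_area      # rendered at reduced scale
    return center_area + outer_area * outer_scale ** 2

# With a 60% x 60% full-res center and half-res edges, only ~52% of the
# original pixels get shaded.
print(round(shaded_fraction(), 2))
```

Even a conservative split like this roughly halves the shading work, which is why the idea keeps coming up for wide-FoV headsets.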
Admittedly I don't expect that to happen that often, except perhaps in sim racing, because in certain scenarios you still want to keep your peripheral vision on the other side (opponents on your left- and right-hand side).
Usually you will turn your head rather than squeeze your eyeballs all the way into the corner, so it may be an acceptable sacrifice.
So if we are talking about that kind of trickery, then I've got more food for thought for you; I already thought about this one a couple of months ago. Why not take it a step further and not just render, say, the outer 30 degrees in low-res (as some of you suggested)? Just measure the sweet spot of the 8K lenses (it may differ per user due to factors like eye-lens distance) and render everything outside that sweet spot in medium-res or low-res. Since that area is blurry/distorted through the lenses anyhow, it may not matter that much if it comes out low-res.
Yes, but low res creates those blocky steps, which your peripheral vision will pick up extremely well. To avoid that we need to blur it to make it as soft as possible. So the gain is entirely on the render side; bandwidth is still fully used, because the blurred area is transmitted at full res just like the rest.
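The render-side-only gain can be illustrated in a few lines: render the periphery small, upscale it with nearest-neighbour (which produces the blocky steps), then blur to soften them. The sizes and kernel width below are arbitrary placeholders.

```python
import numpy as np

# Sketch of the trade-off: the periphery is *rendered* at low resolution,
# then upscaled and blurred before scan-out. The GPU shades fewer pixels,
# but the frame sent over the cable is still full resolution, so the link
# bandwidth is unchanged.

rng = np.random.default_rng(0)
low = rng.random((32, 32))                       # low-res render of the periphery

full = low.repeat(4, axis=0).repeat(4, axis=1)   # nearest-neighbour upscale
                                                 # -> the "blocky steps"

# Simple separable box blur to soften the block edges.
k = np.ones(5) / 5.0
soft = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, full)
soft = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, soft)

print(low.shape, soft.shape)   # (32, 32) (128, 128): shaded vs transmitted
```

Only 32x32 pixels were shaded, but 128x128 still go over the wire, which is exactly the point made above about bandwidth.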
One thing I started wondering while reading all the posts: if we blur the edge of the screen to save resolution (graphics power) where our eyes already see it as blurry, will the blurriness actually get much worse?
What I mean is: in real life we already see the edge of our vision as blurry, but what happens if what is being shown at the edge of our vision is already blurry itself? Does a person walking out of your focus then become difficult to even recognize as a person? If you lower the resolution at the edge, does looking at out-of-focus leaves leave you unable to distinguish anything but a big green blur?
Put more technically: would a 4K picture lowered to 1080p at the edge of focus look like a 480p picture, while an unchanged 4K picture at the edge of focus looks like 1080p?
Those are random numbers just to illustrate my question, but I think it would be worth testing.
The point of blurring the edges is that it doesn't matter there. Our eyes will never reach that far; it will simply sit in your peripheral view. You will have to turn your head to look there, which in turn brings it out of the blurred area and into a sharp area. But movement would be preserved: blurred, admittedly, but changing nonetheless.
What is more important than the blurriness itself is the transition from clear image to blurry image. It needs to "feel" seamless. That's a tricky part as well!
It's not just the blurring. There needs to be better visual fidelity in the areas with less blurring. In other words, you probably need nested zones, with the best, highest-res zone in the center, progressively getting coarser towards the outer zone; then the gradient blur can be applied on top. You don't want to be able to discern the transitions between zones.
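The nested-zone idea can be sketched as a mapping from eccentricity (degrees off the lens center) to a render scale, with smoothstep blends between zones so there is no hard seam. The zone radii and scale factors below are made-up placeholders, not measured lens sweet-spot figures.

```python
# Toy eccentricity -> render-scale mapping with nested zones and smooth
# transitions between them. All radii/scales are invented placeholders.

def smoothstep(e0, e1, x):
    """Standard smoothstep: 0 at e0, 1 at e1, smooth in between."""
    t = min(max((x - e0) / (e1 - e0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def render_scale(ecc_deg):
    """1.0 = full res in the center, falling to 0.25 at the far edge."""
    if ecc_deg <= 15.0:
        return 1.0                                   # inner zone: full res
    if ecc_deg <= 25.0:
        t = smoothstep(15.0, 25.0, ecc_deg)          # blend 1.0 -> 0.5
        return 1.0 + (0.5 - 1.0) * t
    if ecc_deg <= 35.0:
        return 0.5                                   # middle zone: half res
    if ecc_deg <= 45.0:
        t = smoothstep(35.0, 45.0, ecc_deg)          # blend 0.5 -> 0.25
        return 0.5 + (0.25 - 0.5) * t
    return 0.25                                      # outer zone: quarter res

for e in (0, 20, 30, 40, 60):
    print(e, round(render_scale(e), 3))
```

Because the scale is continuous everywhere, there is no single radius where the image suddenly degrades, which is what makes the zone boundaries hard to discern.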
Foveated rendering has already proven itself usable, so blur outside the foveal area works well. A fixed blur at the edges will always stay outside the foveal area, because your eyes simply can't move past a certain point.
If we go by the last official update of 80% screen utilization (until proven or disproven):
The lens-to-screen focal distance is likely the same as on the 8K and the non-Plus 5K. If the screen in the Plus is yielding +9% sharpness due to the higher-ppi panel being smaller, then we can assume the 5K+ should have a higher utilization of up to approximately 89%, and would also have a potential increase of up to +9% in the main viewpoint etc.
I would guess they will likely go the same route on the 8K-X, given that it's only 400 units: a slightly smaller screen for increased utilization.
So the smaller panel has more panel utilization; that's exactly the whole point indeed. The lens can see (much) less of the 4K's panel than of the 5K's panel.
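The utilization argument above comes down to simple geometry: if the lens "sees" a fixed physical window of the panel, a smaller panel has a larger fraction of its area inside that window. The dimensions below are invented placeholders chosen only to show the direction of the effect, not real Pimax panel specs.

```python
# Back-of-the-envelope version of the utilization argument. The lens-visible
# window is fixed by the optics; panel sizes here are made-up placeholders,
# NOT real Pimax specifications.

def utilization(panel_w_mm, panel_h_mm, window_w_mm, window_h_mm):
    """Fraction of the panel area that falls inside the lens-visible window."""
    visible_w = min(panel_w_mm, window_w_mm)
    visible_h = min(panel_h_mm, window_h_mm)
    return (visible_w * visible_h) / (panel_w_mm * panel_h_mm)

window = (108.0, 61.0)                            # lens-visible window (assumed)
big_panel = utilization(120.0, 68.0, *window)     # larger, lower-ppi panel
small_panel = utilization(114.0, 64.0, *window)   # smaller, higher-ppi panel
print(round(big_panel, 2), round(small_panel, 2))
```

With these made-up dimensions the bigger panel lands around 81% utilization and the smaller one around 90%, the same direction as the 80% vs ~89% figures discussed above.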