I’m not talking about features for us happy backers; I’m talking about technical software possibilities for a 200° FOV headset in a 110° FOV headset universe. And if you have a 970 or 1070, this option could give you better quality in games where you want that and are OK with sacrificing a bit of FOV to get there. Just because we can.
Do you mean something like a software-based slider that lets you pull the FOV in from a max of 200° to a min of 120°, retaining the supersampling but cropping the FOV?
Basically. To save bandwidth, GPU load, or both. And maybe to play games which simply won’t work at 200° FOV.
The best possible solution would be if the scaler could handle a 1080p signal as well as 1440p and upscale that. This would be a true relief for the GPU.
I am not sure how it would work. FOV is just how big the view is. If you tell a game engine that your hardware supports a resolution of X by Y, and it renders at that resolution, then the HMD will still scale it to the full panel size. OK, so supersampling can override that “resolution”. But there is more:
It would need to somehow say “my resolution is (native)”, then apply a supersampling override of nn, but still only render nn% of whatever that is. This would result in a full-frame render with black pixels wherever it did not calculate/render.
You could render the image at a smaller resolution (x by y), overlay it on a black image at the correct resolution, then send that to the HMD for upscaling.
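A minimal sketch of that compositing idea in Python (the frame sizes and the centred placement are my illustrative assumptions, not anything Pimax has specified):

```python
import numpy as np

def composite_cropped_frame(render: np.ndarray, out_w: int, out_h: int) -> np.ndarray:
    """Paste a smaller rendered image into the centre of a black full-size
    frame, so the HMD still receives the resolution it expects."""
    frame = np.zeros((out_h, out_w, 3), dtype=render.dtype)  # black border
    y0 = (out_h - render.shape[0]) // 2
    x0 = (out_w - render.shape[1]) // 2
    frame[y0:y0 + render.shape[0], x0:x0 + render.shape[1]] = render
    return frame

# e.g. a 1920x1080 render centred inside the 2560x1440 signal the HMD expects
small = np.zeros((1080, 1920, 3), dtype=np.uint8)
full = composite_cropped_frame(small, 2560, 1440)  # shape (1440, 2560, 3)
```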
I have more time tonight to draw out what I mean but @Sofian gets it I think
True, but that is not really different from reducing the supersample size (as far as GPU load is concerned). It would greatly reduce the bandwidth, however, and might even allow a 90 Hz refresh.
I thought I had it right…
Say your native input resolution is 2k.
Supersampling overrides that, which results in a 1.5k image.
The game engine now renders at 1.5k.
But that is not what you want, as you still have an SS render with no FOV crop.
You need this:
- Native input resolution: 2k
- Supersampling: 1.5k
- FOV crop: 20%
Now the game engine would calculate a 1.2k image but output a 1.5k image (with black pixels filling the 20% border).
The HMD receives what it expects and simply displays it (with its own upscale pass).
This way SS stays in the mix, the engine renders the optimised number of pixels, and the HMD displays it as normal.
?
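Roughly, in numbers (treating SS and the FOV crop as simple per-axis multipliers here is my assumption; a real driver might apply them differently):

```python
native = 2000            # "2k" per-axis input resolution (illustrative)
ss = 0.75                # supersampling override: 2k -> 1.5k
fov_crop = 0.20          # crop away 20% of the view

ss_res = native * ss                   # 1500: what the engine is told to target
render_res = ss_res * (1 - fov_crop)   # 1200: what it actually computes
output_res = ss_res                    # 1500: black-bordered frame sent to the HMD

print(ss_res, render_res, output_res)  # 1500.0 1200.0 1500.0
```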
So I could up supersampling to 2.0 for extra-sharp text, but then crop the FOV by 50%, so it’s like a Rift for FOV but with an even sharper image than the default upscaled 8K resolution.
Or I could use 0.6x supersampling for a 40% reduced resolution with no FOV crop, which is what we can do now.
Or 1.0 SS (no change) but with a 30% FOV crop, giving 30% less calculation with no change in sharpness, but a 30% FOV reduction.
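For a feel of the GPU load in those three cases, assuming the SS factor multiplies the total pixel count (as newer SteamVR builds do) and the crop is a straight percentage of pixels cut away:

```python
def relative_pixel_load(ss: float, fov_crop: float) -> float:
    """Pixels rendered relative to a 1.0-SS, uncropped frame.
    Assumes SS scales the total pixel count and the crop removes
    that percentage of pixels outright."""
    return ss * (1.0 - fov_crop)

print(relative_pixel_load(2.0, 0.50))  # 1.0 -> SS 2.0 + 50% crop: same load, sharper centre
print(relative_pixel_load(0.6, 0.00))  # 0.6 -> 40% fewer pixels, full FOV
print(relative_pixel_load(1.0, 0.30))  # 0.7 -> 30% fewer pixels, same sharpness
```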
The best option is to have all game engines support multi-resolution rendering. UE4 and Unity already support it.
As I explained in a previous post, there are at least 30° on each side (60° of the 200°) where we don’t need full resolution, since we can’t have fovea resolution there. That would help not only the GPU but also the upscaler.
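A toy illustration of what multi-resolution rendering buys (two zones only; the pixels-per-degree figures and the zone split are invented for the example, and real engine support works per-viewport on the GPU):

```python
import numpy as np

def render_strip(height: int, degrees: float, px_per_deg: float) -> np.ndarray:
    """Stand-in for a renderer: produce a strip covering `degrees` of view."""
    return np.zeros((height, int(degrees * px_per_deg)), dtype=np.uint8)

def multi_res_frame(height=1440, fov=200.0, centre=100.0, hi=12.8, lo=6.4):
    """Two-zone multi-res pass: the central `centre` degrees get `hi` pixels
    per degree, each side band only `lo`; the side bands are then upscaled
    by pixel repetition so the output frame has uniform density again."""
    side = (fov - centre) / 2
    upscale = int(hi / lo)
    left = render_strip(height, side, lo).repeat(upscale, axis=1)
    mid = render_strip(height, centre, hi)
    right = render_strip(height, side, lo).repeat(upscale, axis=1)
    return np.hstack([left, mid, right])  # 2560 columns out, 1920 rendered (-25%)
```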
Is that fixed foveated rendering? As in, the resolution is native at the fovea but steps down 2 or 3 more times toward the periphery?
That is one solution, yes, but if you keep your head still and move your eyes, you end up looking into the lower-res areas.
Yes, but we are in a different situation from the Go.
Because of the small FOV of existing HMDs, fixed FR means you can see part of the FOV rendered at a lower resolution.
For the Pimax, I am talking about lowering the resolution of the part of the image our eyes can’t see at full resolution anyway, so you don’t lose anything there.
For instance, if you are in front of a monitor, turn your head just enough that the monitor is still in your FOV, at the periphery, then strain your eyes as far toward the screen as you can: you can see the screen, but you can’t read anything on it. That is what I am talking about. That part of your FOV will never benefit from full resolution.
This situation doesn’t exist for current HMDs: the FOV is so small, you need fovea resolution everywhere.
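A back-of-the-envelope version of that argument (the ~50° maximum eye rotation and ~2.5° foveal half-width are rough physiological figures I’m assuming, not measured values):

```python
def never_foveated_band(fov_deg: float, max_eye_rot_deg: float = 50.0,
                        fovea_half_deg: float = 2.5) -> float:
    """Degrees on EACH side of the view that the fovea can never reach,
    even with the eyes turned as far as they will go."""
    reachable = max_eye_rot_deg + fovea_half_deg  # farthest angle the fovea covers
    return max(0.0, fov_deg / 2 - reachable)

print(never_foveated_band(200.0))  # 47.5 -> a wide band per side on a 200-degree HMD
print(never_foveated_band(110.0))  # 2.5  -> next to nothing on a 110-degree HMD
```

Which is at least consistent with the “at least 30° per side” figure above, and with why the trick buys nothing on small-FOV headsets.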
I’m losing interest (Paul Daniels character no. 3)
That’s quite interesting. But if I do what you say, I can still move my eyes and then read (only just) what was originally in the periphery, without moving my head. I do wear glasses, though, so that could be why. It would be a great question to put to the testers to back up your (I assume) theory.
Respectfully, I think your premise is incorrect.
The Pimax 8K apparently has a huge sweet spot, which extends nearly to the edge of the view. While looking straight ahead, you are correct: that outer area is only seen by your peripheral vision and small details are lost. HOWEVER, you don’t have to look straight ahead; if you glance to the left (or right), that outer area becomes readable, since your fovea is now focused on it.
We would need foveated rendering for this to be a reasonable approach.
No, he means those last 10-15 degrees to either side will never be in your fovea and therefore will never have to be rendered at 100% clarity.
I understand, but that area WILL be in your fovea if you glance to the left or right.
Look as far as you can to the left; notice you still have peripheral vision further to your left, even though your eyes can’t go any further. THAT area will never be in your sharp fovea and will always be peripheral.
Ah, I see. That’s such a tiny area, I’m not sure it makes a lot of sense to render it at a lower resolution. Remember, the compositing step (combining the hi-res and low-res images) takes some time.
Well that might be true. We were just spouting ideas.
As for the FOV slider, I thought of these options:
If we can have a slider that shrinks the FOV from 200° down to 0° (or let’s keep 110° as the minimum), then, assuming we don’t increase or decrease bandwidth, we will demand less view space from the GPU, and the render will be surrounded by black pixels. This carries through the eventual upscaling in the 8K.
So with this slider we can trade FOV for pixel quality in the GPU, because a smaller area needs to be rendered and we now have extra time for it. It’s also useful for games that can’t work at 200° FOV.
Now, if the data stream through the cable can also be adjusted to the x,y of the new FOV (and the scaler chip can understand this setting), we can also increase FPS or even increase the pixel area. That might sound confusing, but it would mean something like sending actual 4K data, instead of 4K data that needs to be upscaled to 200° FOV over 8K worth of pixels. In that case 4K data would be shown as native 4K in the centre, making the Pimax 8K (with “fake” 8K) also a Pimax 4KX (with true 4K).
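For a rough feel of the bandwidth side, assuming an uncompressed 24-bit-per-pixel stream per eye and ignoring blanking overhead (real display links need more, so treat these as lower bounds):

```python
def gbps(width: int, height: int, hz: float, bits_per_pixel: int = 24) -> float:
    """Uncompressed video bandwidth in gigabits per second."""
    return width * height * hz * bits_per_pixel / 1e9

print(gbps(2560, 1440, 80))  # ~7.1  -> the full 1440p-per-eye input signal at 80 Hz
print(gbps(2048, 1152, 90))  # ~5.1  -> a 20%-per-axis FOV crop leaves headroom for 90 Hz
print(gbps(3840, 2160, 60))  # ~11.9 -> what a native "4KX"-style centre feed would ask for
```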