Sorry for the OT… I saw Gamma Ray as support for Manowar back in 1993 in Hannover, Germany. I’m a big fan of both. Kai Hansen (ex-Helloween) was really great on the guitar, and back then Ralf Scheepers (Primal Fear) was on vocals, an awesome gig… and then Manowar played “Agony and Ecstasy” for about 30 minutes, and the TÜV Nord could confirm the world record for the loudest band on earth!!! So much for the OT…
Awesome. The drummer has apparently been known to kill his drums & uses special stainless steel skins?
Always enjoy Helloween. Though not metal, Stratovarius is another pretty cool band.
A bit of OT is not necessarily bad. In truth, if warranted, we can start a topic in General Discussions.
To try to clarify a few things being said here (but will probably end up confusing the matter even worse, instead :P)…
First of all: A) a hypothetical ability to choose one’s FOV, and B) supersampling. Let’s keep those two things separate – they are independent of one another.
As a matter of fact, it might do us good to somewhat abstract away resolutions and bitmap sizes altogether. Whilst they, along with FOV, etc., all “interact” – so that in any formula that contains some of them, changing any one will affect the outcome – it is evidently all too easy to get them so closely associated in one’s mind that one starts to mentally replace parts of mathematical expressions, involving several elements, with something resembling a convenient constant… :7
…so… say I could reduce my FOV from 150° per eye, to 100. This reduces the width of the camera frustum, so I will not be rendering that last 50° unnecessarily, only to then throw it away. I will also reduce the width of the render target bitmap accordingly, maintaining the same lens-matched render resolution (or supersampling=1.0, if you will), so there is less to render, both in terms of view, and number of pixels to work on. The compositor places the rendered “intermediate images” in their right places on the bitmaps that will go to the headset, fitted within the area that is covered by the reduced FOV. These will therefore have a wider black box to the side than they usually would.
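To put rough numbers on that, here is a minimal sketch, assuming a symmetric frustum and keeping the same pixel density at the centre of the view (real HMD frusta are asymmetric and canted, so treat the figures as illustrative only):

```python
import math

def render_width_px(full_width_px, full_fov_deg, new_fov_deg):
    """Render-target width that keeps the same centre pixels-per-degree
    when the horizontal FOV is reduced (symmetric planar frustum assumed)."""
    # For a planar projection, centre pixels-per-degree is proportional to
    # width / tan(half_fov), so the width scales with tan(half_fov).
    scale = math.tan(math.radians(new_fov_deg / 2)) / math.tan(math.radians(full_fov_deg / 2))
    return round(full_width_px * scale)

# Hypothetical per-eye numbers: 2560 px wide at 150 degrees, reduced to 100 degrees.
print(render_width_px(2560, 150, 100))  # ~817 px - far less than a linear 100/150 guess would give
```

The result is so much smaller than a linear guess mostly because a planar projection spends an enormous number of pixels on the outermost degrees – which ties into the oversampling point further down.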
No supersampling is involved in the above, but if one wanted, one could use some of the freed up GPU cycles to crank up supersampling a bit.
(Now, hypothetically, those widened black boxes on the compositor output images constitute wasted bandwidth, which could very well have been used to transfer a higher resolution – one that saturates the native resolution of the display panels over the limited area of them that is now used. If the scaler chip could scale a full 2560-pixel-wide image to however much of the screen it takes to cover 100° (instead of the usual 150°), that would mean you could render at a truer lens-matched intermediate-to-final image ratio than usual, for more detail. Render-work-wise, it would be pretty much indistinguishable from the optional supersampling mentioned in the last paragraph, but I would not call it supersampling at all – possibly more like “foregoing the undersampling that we normally have”. (EDIT: Great for maxing out detail with any game that, for whatever reason, just can’t handle a wider FOV)
As a matter of fact, I have a long-standing question (which I am sure some are getting tired of seeing repeated over and over) as to whether any of this is going on already, with the standard 150° per eye, because if (IF) there isn’t, there is a possible 20% waste of bandwidth and resolution, in the form of black boxes on the pictures that go from the computer to the headset, due to the lenses not being able to reach the entirety of the screens.)
Then: the amount of peripheral vision that is left when one has one’s eyes swivelled as far as they can go to either side, and which the fovea can never reach, probably covers a larger arc than one may think. On top of that, that content is already grossly oversampled, relative to what is in the detail-favoured centre of the view, due to its being so far out to the side, as demonstrated by this picture from one of doc-ok’s articles:
Notice how the angle α2 is much sharper than α1, whilst the pixels are the same size – you could easily have rendered at half width out there, and then fitted the result into two pixels, without losing angular resolution compared to the forward part of the view.
…and this begins to happen already within the limited FOV of the Rift and Vive, which is why you can do “fixed foveated” rendering (or any of the many other names the same concept goes by) for those devices – you should not notice much, if any, reduction in detail when turning your fovea toward those reduced render-resolution edges.
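To put some numbers on that α1/α2 relationship from the picture – a quick sketch, just differentiating arctan for a flat panel viewed from a fixed eye point, with made-up geometry:

```python
import math

def pixel_angle_deg(offset_px, eye_to_panel_px, pixel_px=1.0):
    """Approximate angle subtended by one pixel at a given horizontal offset
    from the on-axis point, for a flat panel and a fixed eye point."""
    # d/dx of arctan(x / d) is d / (d^2 + x^2); multiply by the pixel size.
    d = eye_to_panel_px
    return math.degrees(d / (d * d + offset_px * offset_px) * pixel_px)

# Hypothetical geometry: eye 700 "pixel widths" away from the panel.
print(pixel_angle_deg(0, 700))     # on-axis pixel (alpha 1): ~0.082 degrees
print(pixel_angle_deg(1200, 700))  # far off-axis pixel (alpha 2): ~0.021 degrees, about a quarter of the above
```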
The compositor’s lens counter-distortion also compresses detail out toward the edges, which constitutes waste. Some of the rendering and lens effects even out between them; others do not :7
Hmm, ok… This post turned out way too long… sorry. Hopefully there was something useful in there, if anybody bothered to wade through it…
EDIT: The embedded picture was from this article, by the way: The Display Resolution of Head-mounted Displays | Doc-Ok.org
I’m pretty sure the 8K uses canted screens, which may negate this to some degree.
Forgive my TL;DR
It looks like you are talking about game FOV. That is not what is being suggested, I don’t think… There are two fields of view: one is the old-school in-game FOV of the camera, which is what you are talking about, and the other is the actual HMD visual area. I thought this FOV slider was about not using the full panel inside the HMD.
To some degree, yes. It is still an average, though, and the same trigonometric relationship between a fixed viewpoint (overlooking pupil translation here) and the flat viewplane still applies.
Exactly. And the in-game camera FOV is matched to the HMD FOV – if you use less of the latter, you reduce the former accordingly; otherwise your view will be compressed. :7
Yes, my slider idea was much more straightforward and catered to the current 8K setup and render output, as seen in one of the latest demo videos where we can see outputs from a Vive and an 8K. I saw the same image in the 8K output, plus extra, meaning more GPU work for the 8K, and I wondered what such a software slider could do for the effects, compared to the maximum results of a standard 200° FOV output.
And indeed my thoughts were very straightforward, and I was hoping for detailed technical analyses like these, so thanks for that!
The in-game camera FOV is whatever it is; it is independent of the actual field of view of an HMD. A full frame can be at any FOV determined by the developer. It could be 10 or 500 on the camera – granted, that would look naff – but the HMD doesn’t care or know what the camera frustum is; all it knows is that the image is X by Y, and game FOV does not change that dimension.
Like I said: the game camera FOV is determined by the FOV of the HMD. The game queries the VR runtime about what FOV to use; the runtime in turn has got this either from the HMD (you can check this yourself with a Vive and SteamVR’s lighthouse_console; anybody who has done the GearVR lens swap mod has done this), or from a database of HMD properties (EDIT: …or a plugin driver).
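For what it is worth, the runtime typically hands this over per eye as tangents of the frustum’s half-angles, rather than as a single FOV number; a minimal sketch of turning such raw values into degrees (the values below are made up, not from any particular headset):

```python
import math

def horizontal_fov_deg(tan_left, tan_right):
    """Horizontal FOV in degrees from raw projection half-angle tangents
    (the left value is typically reported as a negative tangent)."""
    return math.degrees(math.atan(-tan_left) + math.atan(tan_right))

# Hypothetical raw values for one eye of some headset:
print(horizontal_fov_deg(-1.4, 1.2))  # ~104.6 degrees for this made-up frustum
```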
If I render a 150-degree view (…and there is nothing to stop me from doing this) and display it over 100 degrees in an HMD, the representation of the virtual world will be out of whack: the virtual world I see will not line up with the real one I move in, and everything will look very thin. (This has actually come up as something to do deliberately, in order to simulate the vision of different animals, or to give you superhuman (albeit somewhat disorienting and nauseating) see-all-the-way-around-you vision for something like an FPS :7)
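As a rough illustration of how badly that lines up – a sketch assuming simple symmetric planar projections on both ends; real lens distortion makes it messier:

```python
import math

def perceived_angle_deg(true_angle_deg, rendered_fov_deg, displayed_fov_deg):
    """Where a direction at true_angle appears to be, if a view rendered with
    rendered_fov is squeezed onto a displayed_fov portion of the headset."""
    # Position on the flat image plane, normalised to [-1, 1]:
    x = math.tan(math.radians(true_angle_deg)) / math.tan(math.radians(rendered_fov_deg / 2))
    # Angle that position is seen at, through the narrower display frustum:
    return math.degrees(math.atan(x * math.tan(math.radians(displayed_fov_deg / 2))))

# Render 150 degrees but display it over 100: an object 40 degrees off-centre
# in the virtual world appears only ~15 degrees off-centre to your eye.
print(perceived_angle_deg(40, 150, 100))
```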
In addition, we can’t change the in-game FOV in VR like we can with, say, Call of Duty, because in VR there are often 3D positional UI elements. If you suddenly reduced the in-game FOV to, say, 60 instead of 75, you might find that a UI element that was visible is now off-screen in XYZ space.
This is how I thought this was being discussed:
|--------------------------|
| |----------------------| |
| |                      | |
| |                      | |
| |                      | |
| |______________________| |
|--------------------------|
The outer frame is the full video frame. The inner frame is what was actually computed and rendered by the CPU/GPU and placed into the native frame size. This way the HMD scaler doesn’t have a fit with a reduced-size frame. This would effectively give you a reduced HMD FOV.
Obviously not quite as shown with the ASCII art, as the screen overlap on the inner edge would not be inset, but the idea is there.
You should have used Ansi art. Lol
See; We’re both saying the same thing, after all. :9
Your game renders 60 degrees, and places this in the 60-degree square within the 75-degree full frame: you have matched the in-game camera to the portion of the real FOV that you are using, and your real and virtual spaces will line up.
Then, in addition, I added the thought experiment that the unused “frame” you get there, around the utilised part of the image, could – instead of just being wasted – potentially be used to get the 60 degrees across at a higher resolution, which the HMD would be able to utilise, due to its having the physical pixels for it; we just never take full advantage of them normally (EDIT: …but this would require that the on-HMD scaler could handle the 75-to-60-degree scaling, obviously). :7
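Roughly how much extra detail that could buy, using the 60°/75° figures from the example above (same caveats as before: symmetric planar frusta, matched centre pixel density, purely illustrative):

```python
import math

def centre_detail_gain(full_fov_deg, used_fov_deg):
    """Factor by which centre pixels-per-degree increases if the full frame
    width is spent on the smaller FOV instead of the full one."""
    # Centre pixels-per-degree goes as width / tan(half_fov), so spending the
    # same width on a narrower FOV gains tan(full half) / tan(used half).
    return math.tan(math.radians(full_fov_deg / 2)) / math.tan(math.radians(used_fov_deg / 2))

print(centre_detail_gain(75, 60))  # ~1.33x, i.e. about a third more pixels per degree
```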
Took me ages to make that lol. But yes, ANSI… I vaguely remember having to use Alt+255 for spaces on stuff that disabled the space bar, back in the day lol
Yep, the undocumented space was great for passwords. Oh no, never ran into having to use the Alt-### lol
Good old ANSI.SYS, giving ASCII artwork color & animation. A must to give your BBS bling.
/me only ever did PETSCII. :7
Yeah. TVs and projectors use this as the overscan, which you can zoom in/out on most sets. You would think a “scaler” could scale anything to its native size, assuming the aspect ratio is maintained. Maybe that is one of the issues on the Pimax HMD, as there is a limit on how much power is dedicated to the scaler over USB.
My understanding is that it is a VERY simple chip, which can only handle 1 or 2 input sizes. It’s not the sort of chip that’s in an LCD monitor; it’s way more limited.
Do you have any reference? Like the chip model or something? It would be interesting to have a quick look over it.
[quote=“D3Pixel, post:255, topic:6380”]…
Maybe that is one of the issues on the Pimax HMD as there is a limit on how much power is dedicated to the scaler over USB.
[/quote]
Who knows…? :7
In some manner, something, at minimum, receives and separates a 1440p 32:9 stream into two 1440p 16:9 ones, then scales each by a fixed factor of 1.5, and sends them to two separate display panels… I hear MIPI can be a bit of a bastard to deal with… :7
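Putting numbers on that, based on the commonly quoted figures of a 2560×1440 input per eye and 3840×2160 panels (take the exact values as assumptions):

```python
# Rough arithmetic for the assumed 8K signal path.
input_stream = (5120, 1440)                        # one 32:9 stream carrying both eyes
per_eye = (input_stream[0] // 2, input_stream[1])  # split into two 16:9 halves: 2560 x 1440
scale = 1.5                                        # fixed upscale factor
panel = (int(per_eye[0] * scale), int(per_eye[1] * scale))
print(per_eye, "->", panel)                        # (2560, 1440) -> (3840, 2160) per display panel
```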
In Unity, if you set up a VR project, you get two cameras in the scene.
This is roughly what NVIDIA considers a VR render pipeline:
You can read the full article here:
Oculus, for example, states:
“The Oculus Rift requires the scene to be rendered in split-screen stereo with half of the screen used for each eye.”
And I assume all 3D engines expect this behaviour, as you can play existing games without having to recompile support for each different HMD. The driver (I assume) will register itself as a step in the lower-level render pipeline to handle its own custom Brainwarp and distortion, as shown above in the green box.
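A minimal sketch of what that split-screen stereo layout amounts to – just illustrative viewport arithmetic, not any particular engine’s or runtime’s API:

```python
def eye_viewports(target_width, target_height):
    """Return (x, y, width, height) viewport rectangles for the left and
    right eyes on a single side-by-side stereo render target."""
    half = target_width // 2
    left = (0, 0, half, target_height)
    right = (half, 0, half, target_height)
    return left, right

# Hypothetical 2560x1440 combined target: each eye renders into a 1280x1440 half.
print(eye_viewports(2560, 1440))
```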