On the Pimax, a problem some users experience is that they can never seem to get both eyes fully in focus in the headset. My question is: could real-time distortion correction nullify this issue? Wouldn’t it be possible to apply distortion correction in such a way that the image looks sharp anywhere on the lens, simply by pre-distorting the image so that the lenses distort it back into focus?
This sounds like more of a job for an automated IPD system. Obviously the Pimax has manual IPD control to begin with, so with eye tracking it might not be too big a job to have an initial on-screen prompt guide the user to manually adjust the IPD to a point it suggests based on detection, and then automate a software IPD offset to correct anything from there?
Maybe??
Hmm, maybe.
No. Focus mismatch cannot be compensated for in software - that takes improved optics.
What you could do is make the distortion correction adjust for asymmetry, so that the projection looks right even if you have moved the lenses closer together in order to look through the centres of both of them simultaneously, as long as you only stare straight ahead. That way you get equal clarity in both eyes, even though you’ll then be looking through the lenses slightly sideways, still mirrored, and will have significantly narrowed the field of clarity. That much could be done even without eyetracking, but having it dynamic should be beneficial in any case.
In addition, as spamenigma said: if one added a pair of rapid servos, it shouldn’t be totally out of the question to have the lenses shift sideways automatically, driven by eyetracking, so that they are always pierced through their centres by your gaze. :7
We were actually testing an eye-tracking-based distortion management system in our booth at CES. I had not seen it myself until CES, but did spend maybe an hour trying it out with different things. We were testing it using an old HTC Vive Pro, and it’s amazing what such a system can accomplish even on portions of the screen that already seem sharp. The overhead was rather small.
It would be a licensed solution but it’s something that could potentially be implemented alongside the existing universal DFR. A lot of practical innovations could spring from eye tracking.
Interesting.
When Pimax added the IPD offset setting to PiTool, it was suggested this was in part in preparation for dynamic distortion correction…
Now, what you say above (licensed solution, and all that) implies that the answers to all the questions I now sit poised to type are “no”, and yours is not the right table to land them on - sorry about that, but they are long-held items of curiosity, and have resurfaced through recent threads, so I’ll try them anyway :7 :
Do you know whether the offset is only a simple translation of the image on the screens, or whether there has been any effort to shape the correction to match how the distortion changes when looking through the lens off-axis? (EDIT: Of course… To be correct, the transformation would still need to know where the viewer’s pupil is, which remains an unknown factor, but guesses could be made… :P)
To go one step more basic than the above: do any of Pimax’s distortion profiles make any such effort to begin with, without any offset, or are they strictly symmetrical?
Pimax mentioned at some (rather late, I can’t resist saying) point that they had built/bought/commissioned equipment to measure and calibrate the optical properties of their headsets. Do you know whether this equipment has been used to study the effects of off-axis viewing with any Pimax optic setups, for better distortion correction profiles - whether or not they take asymmetry into account, and whether the viewing point is fixed or moving?
Here I am going to assume that the current shader that performs the distortion correction is (for performance reasons) a fixed-viewpoint operator - perhaps even just a simple look-up table (offset map) - that is calculated and compiled from the distortion profile values at startup, and whenever any of those values changes (including IPD), rather than a full algorithm that recalculates each and every pixel location every time… Does this sound anywhere near the truth?
If so: any estimate of how much more complex the shader would need to become to accommodate a moving pupil? (Presumably rather simple if it is only a matter of translating a symmetrical distortion - less so with any second-order distortion…)
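To make my assumption concrete, here is roughly the kind of thing I have in mind - a minimal NumPy sketch, with entirely made-up names and coefficients (nothing to do with Pimax’s actual pipeline): an offset map baked once from a simple radial “profile”, then applied as a cheap gather every frame.

```python
# Minimal sketch (invented names/coefficients, not Pimax's real pipeline):
# a precomputed offset map, built once from a simple radial "lens profile",
# then applied per frame as a plain gather operation.
import numpy as np

H, W = 480, 640

def build_offset_map(k1=0.22, k2=0.05, cx=0.5, cy=0.5):
    """Run at startup / whenever profile values (e.g. lens centre via cx) change."""
    ys, xs = np.mgrid[0:H, 0:W]
    u = xs / (W - 1) - cx                   # normalised coords relative to lens centre
    v = ys / (H - 1) - cy
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # radial pre-distortion factor
    src_x = np.clip(((u * scale) + cx) * (W - 1), 0, W - 1)
    src_y = np.clip(((v * scale) + cy) * (H - 1), 0, H - 1)
    return src_y.astype(np.int32), src_x.astype(np.int32)

def warp(frame, offset_map):
    """Per-frame pass: 'this pixel has moved over here' - just a lookup."""
    sy, sx = offset_map
    return frame[sy, sx]

offset_map = build_offset_map()             # cheap, done once
frame = np.random.rand(H, W, 3)             # stand-in for a rendered eye buffer
corrected = warp(frame, offset_map)         # done every frame
```

With eyetracking, the map (or at least the centre, and ideally the higher-order terms) would presumably have to be re-derived, or interpolated between a set of pre-baked maps, as the pupil moves - which is where I imagine the extra cost comes in.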
Apparently the external system you tried had little overhead, which is promising.
Sorry for the inquiring-minds overload.
Dynamic Distortion correction with Eye Tracking.
Cool. Dynamic distortion correction should make the whole lens relatively clear.
Hmm, why exactly? Can’t you just warp the image on screen so that when it travels through the lens it gets warped back to normal?
“Dynamic distortion”, or better, dynamic undistortion, provides exactly the same thing as “static undistortion” - i.e. correcting the image geometry which would otherwise get warped by the lens.
This is completely unrelated to focus. While you might have seen it in the movies, there is no (algorithmic) way to refocus an image once it has been blurred.
The distortion correction performs such a warping operation, to correct for the way different parts of the image are stretched out and compressed by the lens, left-right and up-down. This is a relatively simple displacement job: “this pixel has moved over here”.
Focus, however, is a lot more complicated.
From the point of a pixel on the surface of the screen, its light radiates in every direction. When you are focussed on the screen, the light from all these directions ends up projected, through the lenses in the HMD and in your eye, onto the same spot on your retina, and you get a clear image.
If you move the screen farther away from the lens, however, the bundle of “travel paths” for that light, which previously converged on that pixel, will keep going and fan out again, now mirrored, as the ones from the top and the ones from the bottom cross paths. When they do eventually intersect the screen, they no longer cover a single pixel, but several in a radius around it, and what ends up on your retina is the sum of these. A picture comprising a single black pixel on a white background will reach your retina as a grey blob, and if I try to compensate for this by modifying on-screen pixels around the blurred one, maybe using hypothetical “negative light”, that action will also modify every other pixel around the modulating ones - including, of course, themselves.
It is not entirely unlike trying to recreate a source photo from the result of applying heavy Gaussian blur to it, by changing the blurred output so that the original comes back when it is passed through the blur filter a second time - the process is destructive.
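If you want to see the problem in numbers, here is a toy sketch (plain NumPy, invented values, nothing to do with any real headset software): take a single sharp point, mathematically invert the blur to find the screen image that would blur into it, and note that the “answer” demands negative pixel values - which no display can emit, and clipping them away brings the blur straight back.

```python
# Toy illustration of why optical blur can't be pre-compensated in software:
# inverting the blur gives a "corrected" screen image that needs negative light.
import numpy as np

N = 64
x = np.arange(N) - N // 2
gauss = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2 * 3.0 ** 2))
psf = gauss / gauss.sum()                  # blur kernel (point spread function)

target = np.zeros((N, N))
target[N // 2, N // 2] = 1.0               # what we want to reach the retina: one sharp point

# "Pre-distort" by inverting the blur in the frequency domain.
eps = 1e-3                                 # regularisation to avoid dividing by ~0
otf = np.fft.fft2(np.fft.ifftshift(psf))
precomp = np.real(np.fft.ifft2(np.fft.fft2(target) / (otf + eps)))

print("min value of the 'pre-compensated' screen image:", precomp.min())
# -> strongly negative: the screen would have to emit negative light,
#    and clipping those values back to zero re-introduces the blur.
```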
Now, if you can figure out a way to calculate a “regular image” interference pattern (à la a hologram, but without the resolution and lighting conditions, and other considerations, that come with holography) that can solve the problem, there are a lot of people who will want to shower you with money and fame. :7
The reason the image is sharp in the centre, and grows blurrier the farther out you go toward the periphery, is that the screen is a flat plane that stretches away into the distance, whereas the focal plane of the lens curves - they soon part ways. There are different methods to get around this problem, physically.
Now, if we had a lightfield display, where every - let’s call it “pixel” - radiated different light in different directions, so that it could truly represent all the light from the world that passes through the cross-section of air that it samples, then we could refocus by shifting values around, quite similarly to our current distortion correction.
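Purely as an illustration of that last point (another toy NumPy sketch with invented data, not any real light-field pipeline): given a grid of sub-views, refocusing really is just shifting and averaging values - classic shift-and-add.

```python
# Toy shift-and-add refocusing: shift each sub-aperture view in proportion to
# its position in the grid, then average - the chosen depth comes into focus.
import numpy as np

def refocus(views, alpha):
    """views: dict {(du, dv): HxW image}; alpha: shift per unit aperture offset."""
    acc = None
    for (du, dv), img in views.items():
        shifted = np.roll(img, (int(round(alpha * dv)), int(round(alpha * du))), axis=(0, 1))
        acc = shifted if acc is None else acc + shifted
    return acc / len(views)

# Fake 3x3 light field: the same scene seen from slightly different positions,
# so the object appears displaced (parallax) between views.
H, W = 64, 64
scene = np.zeros((H, W)); scene[30:34, 30:34] = 1.0        # a small bright square
views = {(du, dv): np.roll(scene, (dv * 2, du * 2), axis=(0, 1))
         for du in (-1, 0, 1) for dv in (-1, 0, 1)}

sharp = refocus(views, alpha=-2.0)   # shifts cancel the parallax: square is sharp
blurry = refocus(views, alpha=0.0)   # no correction: square smears out
```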
Ok… Not sure how coherent that ended up…