Ok, hear me out: DLSS for 8K headsets!

Omg! I just went there!

I would suspect the tensor cores would have to be in the headset itself and add to the power usage, especially for the wireless version.

But think of it: you could have native 8K resolution for games that support deep learning supersampling, without the crutch of video-bandwidth constraints, and regular supersampling for everything else. And don’t even get me started on foveated rendering.

Nvidia and Pimax teaming up would be a dream come true, and we consumers could only benefit from their close integration.

What do you guys think?

3 Likes

Pssst! Nvidia & AMD are on the Kickstarter page as partners.

6 Likes

I think that adopting a proprietary, closed AA format is a VERY DANGEROUS move.

Also, the true king for AA in VR is SuperSampling and not some “auto-blurry-bot-enhancer”.

4 Likes

The beauty of DLSS is twofold: the performance gains are big, and it seems to look way better. My biggest interest with the RTX cards is that the huge bandwidth of NVLink SLI means game-agnostic VR SLI alternate frame rendering is, I believe, possible. Pair alternate frame rendering with Brainwarp and foveated rendering and you’d be one happy pup.

1 Like

Well, that’s the interesting thing. While a game itself might not have mGPU support and won’t take full advantage, the VR compositor can support it on its end. Once the compositor has control, it can use some of the benefits on its part.

The SteamVR logs show evidence of VRWorks & LiquidVR being used, at least with the SteamVR compositor.

I was quite excited about the possible rebirth of mGPU by means of NVLink, but just as disappointed when I found out the all-important function of memory pooling will not be supported on non-Quadro cards. I understand why Nvidia does it, but as it stands the Titan V is still the best card for VR right now. Here’s hoping there will be a Titan TU with a sufficient framebuffer in relation to GPU power.

2 Likes

That is disappointing, and a big loss. I was worried about the 2080ti once I saw the Quadro specs.

1 Like

Sigh. You can use both, dude. DLSS almost completely removes the performance penalty of TAA and other methods; Tom’s Hardware’s review demonstrated that nicely. It uses different cores to perform the math, and they showed there is no quality loss.

1 Like

I saw the review, but no one dove deep enough to substantiate “no quality loss”.

Anyway, SS is still king; real pixels beat anything else.

1 Like

They dove quite deep into DLSS quality on Tom’s, AnandTech, and other places, zooming down to the pixel. Use cases for SS and antialiasing do not always align.

1 Like

I may have misunderstood what DLSS does. I thought it was supposed to scale up the resolution without loss in fidelity. If that were true, then the 2080 would be able to send a low-res image to the headset and DLSS would upscale it to the panel resolution. Or am I just dreaming?

1 Like

fuck, now i really really want to see some benches with dlss in vr. is there anything out there @SweViver can use?

Unfortunately, you are dreaming. DLSS has to run on the GPU, prior to sending the image to the headset.

DLSS is an AI anti-aliasing technique that is trained per game, using pairs of 64× supersampled and non-AA source images.
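To make that concrete, here is a toy numpy sketch (purely illustrative, not Nvidia’s actual pipeline) of how one side of such a training pair can be built: the “ground truth” target is a render box-filtered down from 64 samples per pixel, while the network’s input would be the ordinary one-sample-per-pixel render of the same view.

```python
import numpy as np

def supersample_reference(render, factor=8):
    """Box-filter a render with factor*factor (= 64 for factor=8) samples per
    output pixel down to the target resolution -- the 'ground truth' half of a
    DLSS-style training pair."""
    h, w = render.shape[0] // factor, render.shape[1] // factor
    return render[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

# Toy "scene": a hard diagonal edge, sampled on a 64x64 grid (8x8 = 64
# samples per output pixel of the 8x8 target image).
hi = np.fromfunction(lambda y, x: (x > y).astype(float), (64, 64))

target = supersample_reference(hi, factor=8)  # anti-aliased reference: edge pixels get fractional coverage
aliased = hi[::8, ::8]                        # one-sample-per-pixel render: hard 0/1 edge, i.e. jaggies
```

The network then learns to map the `aliased`-style input toward the `target`-style reference, which is why each game needs its own training set.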

4 Likes

Yes, Nvidia actually suggested the idea a while ago in one of their vision-of-the-future conference talks, but to keep cost and heat manageable it will only happen on a headset with eye tracking (for foveated upscaling/denoising). The headset manufacturer would then need to add an Nvidia mobile GPU (which would have tensor cores). The current DLSS model, however, needs per-game training; I guess their vision is to use a generic model as a fallback, but it looks like we are still many years away from such a headset.

2 Likes

Yes, but that is why he said we would need tensor cores in the HMD itself, which is not really for the 5K/8K but only a possible future.

Also, my understanding is the same as amadeus1171’s: DLSS is indeed upscaling with the help of AI, and that is what brings the huge extra performance (rendering at lower resolution). It can also be used without upscaling as a better SS/antialiasing technique, but then there is no or only a small performance gain. It is currently limited to the games it has been trained on, but maybe in the future the AI can be trained for games in general.
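Rough numbers for where that extra performance comes from (illustrative; the resolutions here are assumptions, not confirmed DLSS render targets):

```python
# Shaded-pixel budget when rendering internally at 1440p and
# AI-upscaling to a 4K (2160p) output -- the core of DLSS's win.
native = 3840 * 2160    # pixels the display actually shows
internal = 2560 * 1440  # pixels the GPU actually shades
print(f"shading only {internal / native:.0%} of the output pixels")
# -> shading only 44% of the output pixels
```

The GPU shades roughly 2.25× fewer pixels, and the tensor cores make up the difference by inferring the rest.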

Now, if we are just dreaming up the future, another use for AI could be some intelligent re-projection method, which could perhaps guess the in-between frames with better accuracy.

2 Likes

This would be awesome, but it is also possible to do at the GPU level. No need to render the entire frame: just render the movement and fill in the gaps intelligently.

1 Like

In that area, I would love to see how Texture Space Shading improves re-projection first, before going the AI way.

1 Like

Then I was right! Render a low-res image on the card and upscale it using DLSS in the headset. I wonder what it would take to integrate tensor cores into a headset…

If this is possible it would be a game changer. You could run your computer with a potato video card and have DLSS upscale the image in the headset to hitherto unseen resolutions. And this wouldn’t be limited to HMDs: high-res 4K, 8K, even 16K displays could take advantage of DLSS with tensor cores in the monitor.

2 Likes