It's been over 2 years from the 1080 to the 2080, and almost 18 months from the 1080 Ti to the 2080 Ti, and it was similar for the 980 to the 1080. Expect around a 2-year wait, but ultimately nobody knows whether a higher-performance model will be released sooner, or whether it will be just 12 months until the 2180s.
Not long ago there was speculation about Nvidia's "Ampere" architecture, which was supposed to be the "next gen" after the GTX 1080 (Ti). Since the new cards use the Turing arch, something already used on high-end cards, there is still no purely consumer-grade arch to replace the current Pascal.
So either Turing is the new Ampere, or Nvidia postponed Ampere to 7 nm tech, which, depending on which subcontractor Nvidia is using, might well come next year. A lot will also depend on whether the new RTX cards are embraced by gamers or not. In a month's time, when the benchmarks are released, we will definitely know better.
I think it will likely be MORE than 2 years, simply because nVidia doesn’t have much competition at the high end. This is one case where I would LIKE to be wrong.
At this point, my plan is to wait until the 2080Ti goes on sale or my 8K arrives, whichever comes first (probably the latter). Honestly, I think the 8K will need a GPU at least as powerful as the 2080Ti for adequate performance at high-quality in-game settings. I’m actually disappointed at the predicted performance of the 2080 and 2080Ti in today’s games. I wanted more performance, not a new feature that requires me to drop back to 1080p resolution for an acceptable framerate.
I hear you. Last night I sat down and put in an order for a decent CPU/MB upgrade for now, intending to hold off on the 20x0 series until (if) we see the price point settle and developers actually utilising any of the features that make a difference (the 10x0 series' VRWorks promises are certainly not forgotten :P).
Kind of wondering just how well the deep-learning filling-in-the-blanks stuff (can't make myself call it "supersampling"… which would be the real performance enhancer in all this) will turn out to do. Can't help but imagine it should produce recurring patterns from the imagery it's learned from, which you begin to notice after a while. A bit worrying in the case of medical tomography: we'd get a super clean image of algorithmic best guesses, based on what is typically in pictures like these, rather than what is actually there…
I guess the potential non-developer-dependent possibility, as far as performance goes, would be if the drivers, or an injector utility - targeted or general - could hook themselves into an application's shaders and try to rejigger them to forcibly use variable rate shading, behind the application's back… Could happen? Recipe for disaster? - I wouldn't know… :7
Just out of curiosity, what was your pick?
For DLSS it is not yet clear how exactly it will be deployed. It could be used either to apply anti-aliasing to the rendered image, or to turn a lower-res render into a higher-res one. The performance figures Nvidia released recently compare TAA vs DLSS, which suggests both may be rendered at the same res and just "de-aliased" differently, but it looks like DLSS could also render parts of the picture at low res and "up-sample" them to the final res, thus saving rendering power (some rough numbers on that below).
Here is the original presentation from Gamescom:
https://s22.q4cdn.com/364334381/files/doc_presentations/2018/08/JHH_Gamescom_FINAL_PRESENTED.PDF
with a few DLSS comments.
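Nothing here is confirmed by Nvidia, but if the "render low, then up-sample" interpretation is right, a quick back-of-the-envelope pixel count shows why it would save rendering power. The 1440p-to-4K pairing below is just an illustrative guess:

```python
# Hypothetical comparison, assuming DLSS renders at a lower internal
# resolution and up-samples to the display resolution.
# The 1440p -> 4K pairing is an illustrative guess, not a confirmed figure.

def pixels(width, height):
    """Pixels the GPU has to shade per frame at a given resolution."""
    return width * height

native_4k = pixels(3840, 2160)   # shade everything at the display resolution
internal  = pixels(2560, 1440)   # shade at lower res, let the network fill in the rest

print(f"4K native : {native_4k:,} px/frame")
print(f"1440p base: {internal:,} px/frame "
      f"({internal / native_4k:.0%} of the native shading work)")
```

If Nvidia's TAA-vs-DLSS numbers were instead measured at matched resolutions, that saving wouldn't apply, which is exactly why the deployment question matters.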
I think that AI filling in omitted rays and DLSS could both be “ok” visually. My guess is that it would only fill in areas that were similar to nearby areas with similar colors. A lot of rays are in areas that don’t actually matter much. The ones you care about most are in the center of the screen and differ from nearby areas. That means you need to devote more rays in that area to resolve the image, while other areas don’t matter so much. Also, nVidia is probably using the last frame as a memory, so that they can ensure temporal continuity. (In general, a ray will look much like the one calculated a fraction of a second before.)
In some ways, this reminds me of dynamic edge resolution, a raytracing technique where a few rays are calculated for a pixel. If they are similar, you're done. If they have different colors, you create a few more rays for that pixel. If the rays differ a LOT, you create even more rays. Then you average the result. The net effect is that you only send a few rays for most of the pixels, yet the image looks like you created a lot of rays per pixel, when in fact you only used a lot of rays for ~1% of the pixels.
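A minimal sketch of that adaptive idea in Python, just to make the flow concrete. trace_ray() is a placeholder for a real tracer, and the thresholds and sample counts are made-up values:

```python
import random
import statistics

def trace_ray(x, y):
    """Placeholder: return one luminance sample for this pixel.
    A real tracer would intersect scene geometry and shade here."""
    return random.random()

def color_spread(samples):
    """How much the samples disagree; standard deviation is one simple measure."""
    return statistics.pstdev(samples)

def shade_pixel(x, y, base=4, more=16, lots=64, low=0.05, high=0.2):
    # Always start with a few cheap samples.
    samples = [trace_ray(x, y) for _ in range(base)]
    spread = color_spread(samples)
    if spread > low:    # samples differ: add a few more rays
        samples += [trace_ray(x, y) for _ in range(more)]
    if spread > high:   # samples differ a LOT: add even more rays
        samples += [trace_ray(x, y) for _ in range(lots)]
    # Average whatever we ended up casting.
    return sum(samples) / len(samples)
```

Most pixels exit after the first few samples, so the extra cost is concentrated on the small fraction of pixels whose samples actually disagree.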
Completely off topic…
There are other cool raytracing techniques that nVidia didn't mention. One of the coolest I've seen traces the speed and paths that photons take to reach the target under extreme conditions, based on general relativity. This lets you image things that are moving near the speed of light, or sitting in intense gravity (which curves the photon paths), introducing strange visual warping. It could be applied to games like Elite Dangerous, when your ship is in super-cruise at many multiples of the speed of light, or near black holes or neutron stars, which would look insane.
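Not the full general-relativistic treatment, but just to illustrate the "moving near the speed of light" half: a tiny sketch using the standard special-relativistic aberration formula to warp ray angles (no gravity, and the 0.9c example is arbitrary):

```python
import math

def aberrate(theta, beta):
    """Map a ray's angle from the direction of motion (radians, scene frame)
    to the angle a fast-moving observer actually sees.
    beta is the observer's speed as a fraction of c."""
    cos_t = math.cos(theta)
    cos_prime = (cos_t + beta) / (1.0 + beta * cos_t)
    return math.acos(max(-1.0, min(1.0, cos_prime)))

# At 90% of c, a ray that is 90 degrees off to the side in the scene frame
# appears squeezed toward the direction of travel:
print(math.degrees(aberrate(math.radians(90), 0.9)))  # ~25.8 degrees
```

In a renderer you would apply that remapping to every camera ray before tracing it, which is what produces the forward-squeezed, warped view.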
We are talking about AI learning over time. I think with enough comparisons the AI could know what a barrel looks like and that barrels are perfectly round, and hence know how to draw one even with missing pixels. Maybe that is still too advanced for AI, but they did show that the AI could take a few 2D lines and create how they should look in 3D, and they also showed that it could recognize what hair is and change only the hair's color at will. So it seems like they possess a pretty good AI.
That effect does look pretty cool for a game to show speed or teleportation, indeed.
Years ago, when I was in college, one of my specializations was AI. There are lots of cool things you can do with AI, but the field has really advanced recently. One of my projects was a limerick generator in Lisp (an AI language). One of the most interesting Lisp AI programs was TaleSpin, a program to generate simple stories, similar to “the scorpion and the frog”.
https://grandtextauto.soe.ucsc.edu/2006/09/13/the-story-of-meehans-tale-spin/
http://eliterature.org/2006/01/meehan-and-sacks-micro-talespin/
I like AI; for some reason it makes me dream about all the possibilities. Also, one of my favorite characters is Kos-Mos (an android, for those who don't know her), so I guess I'm attracted to AI, lol. Well, I just really like pure logic.
Yeah. I think it would be pretty cool. Unfortunately, I also think it would look so weird and disorienting that it would never be implemented for a mass-market game like Elite Dangerous.
So you think it would be sickening in VR?
Well, there is a lot to research for VR. I'm pretty sure the effects used in current games probably can't just be used as-is in VR, for reasons like that.
Just take Minecraft: when you get close to a portal and teleport to the Nether, it's a strange feeling. I'm used to it, but I can't say it's a nice feeling, and I wouldn't stand on a portal for too long or I would definitely get sick fast.
No, I wasn’t considering VR at all actually. I don’t think it would trigger VR sickness, though. I think it would be confusing to people, since straight lines would look curved and everything would be warped.
Well, if they have to move while the effect is active, that is actually something I would like to see a video of.
I was chasing low prices, I'm afraid, and may not have chosen particularly wisely - I haven't been keeping up with this stuff in the least… The vendor offers a bunch of "upgrade kits", and it turns out they had several of the parts from one of these on (I assume) clearance sale, so I ended up with more, for less than the kit, for the same components… So: a B360-based Asus "Strix" motherboard rather than one of the Z370-based ones (I think I'll do OK without a lot of USB ports, at least); an i7-8700 with the stock air cooler (the K model looks like a whole lot of watts going to heat for little performance gain, to my untrained eye, and I don't mean to overclock); 16GB of Corsair "Vengeance" DDR4 RAM over two sticks; and a 500GB Samsung 970 EVO M.2 SSD (here is where I feel I may have got more, from a strict storage-space point of view at least: the kit included a 256GB PM981).
Even if I could have done better, I believe it should be a step up from the "first-I-ever-bought-prebuilt" machine I got back in anticipation of the Rift DK1. :7
…
It's going to be interesting to hear what people think of DLSS: Jensen has talked up every previous new post-effect AA technique similarly at each respective launch, and, well… :9
I'll note that for a lot of aspects of it - especially the reconstructive ones - he carefully and consistently spoke in the future tense…
Ah - the Discworld, where light moves like treacle… :9
There was actually a VR thing (might have been in DK1 days… I think it was) from some university, which had a bunch of relativistic scaling and effects that you could switch on and off – rasterised, though. :7
Since I am planning on putting together a completely new build for the Pimax (my 8-year-old i3-530 is no longer state of the art), I was just curious whether you opted for Intel or AMD (I have been running Intel systems for the past 20 years, but this is the first time I am seriously looking into an AMD build). Otherwise it will be similar in features to yours.
Aha - my first non-Amigas were Athlons. Good luck with your build.
That's totally off. I think RT will help improve things and lift the heavy burden on the graphics card. I mean, look at Project Cars… those lights, reflections, and particles are the real reason the game slows down. RT will help alleviate that…
Sure - in Project Cars 3. Or 4.