Nintendo To Use New Nvidia Graphics Chip in 2021 Switch Upgrade (bloomberg.com) 44

Nintendo plans to adopt an upgraded Nvidia chip with better graphics and processing for a new Switch model planned for the year-end shopping season, Bloomberg reported Tuesday, citing people familiar with the matter. From the report: The new Switch iteration will support Nvidia's Deep Learning Super Sampling, or DLSS, a novel rendering technology that uses artificial intelligence to deliver higher-fidelity graphics more efficiently. That will allow the console, which is also set for an OLED display upgrade, to reproduce game visuals at 4K quality when plugged into a TV, said the people, who asked not to be identified because the plan is not public. The U.S. company's new chipset will also bring a better CPU and increased memory. DLSS support will require new code to be added to games, so it'll primarily be used to improve graphics on upcoming titles, said the people, including multiple game developers. Bloomberg News previously reported that the new Switch is likely to include a 7-inch OLED screen from Samsung Display and couple the console's release with a bounty of new games.
This discussion has been archived. No new comments can be posted.

  • Seems to me that most of the time this thing is going to have way more power than it needs. Careful what the real owners of the hardware you bought do with it.
    • The article says 4K when plugged into TV
    • More power than it needs? I assume you think this will render photorealistic 3D graphics (ie it will have 50 times the GPU power of an RTX3090) and have compute power left over?

    • by alvinrod ( 889928 ) on Tuesday March 23, 2021 @01:42PM (#61190066)
      It's not really the same as giving you 4K quality. It's just using machine learning to take the low resolution image it natively renders and convert it to a higher resolution target without it being too obviously worse. The Nintendo Switch has a 720p screen, which means that to upscale it to a 4K picture, you'd just need to use a 3x scaling factor, but that's not going to look good in much the same way plugging an old NES into an HD (1080p) display just gave you larger blocky graphics.

      The idea here is that you use a trained neural network to do the conversion and instead of just scaling it 3x, produce a new image that gives the illusion of something that was natively rendered in 4K. Obviously there are limits to how good or close you can get with such techniques, but add in the motion of game play and that you won't really notice anything wrong if it's not in your direct focus, it likely produces a better image than just up-scaling would.
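The contrast between dumb scaling and learned upscaling is easy to see in code. Below is only the naive path, as a minimal sketch in NumPy (all names are illustrative; a DLSS-style learned upscaler is far more involved):

```python
import numpy as np

def nearest_neighbor_upscale(img, factor):
    """Naive integer upscaling: each source pixel becomes a factor-by-factor
    block -- the 'blocky NES on an HD display' effect described above."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# A dummy 720p frame (height x width x RGB).
frame_720p = np.zeros((720, 1280, 3), dtype=np.uint8)

# 720p -> 4K UHD is exactly a 3x scale per axis.
frame_4k = nearest_neighbor_upscale(frame_720p, 3)
print(frame_4k.shape)  # (2160, 3840, 3)
```

A learned upscaler replaces the pixel-duplication step with a network that predicts the missing detail rather than repeating what is already there.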
      • In docked mode, the current version plays most games at 1080p, so only a 2x upscale is necessary. They will be running a custom tuned DLSS for each game as well, which is mentioned in TFA. I wonder if they'll have different filters for different screens, like GUI sprite text inventory mgmt screen vs. a rendered battle.

        • by Saffaya ( 702234 )

          "the current version plays most games at 1080p, so only a 2x upscale is necessary"

          You misspelled 4x upscale, just saying.
          And that 1080p isn't 60fps for a lot of games.

          Still, it's good news that Nintendo is considering an upgrade to their el cheapo hardware.
          It is always sad to see such great Switch games saddled with an anemic console.
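For what it's worth, the "2x" and "4x" figures in this exchange are both defensible; they just count differently (per axis vs. total pixels). A quick sanity check in plain Python, assuming only the standard resolutions:

```python
# 1080p -> 4K UHD: 2x per axis, 4x in total pixels.
w_1080p, h_1080p = 1920, 1080
w_4k, h_4k = 3840, 2160

print(w_4k / w_1080p, h_4k / h_1080p)                    # 2.0 2.0
print((w_4k * h_4k) / (w_1080p * h_1080p))               # 4.0

# 720p -> 4K UHD: 3x per axis, 9x in total pixels.
w_720p, h_720p = 1280, 720
print(w_4k / w_720p, (w_4k * h_4k) / (w_720p * h_720p))  # 3.0 9.0
```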

      • especially the current gen version. It's pretty impressive. I haven't got a 4k monitor or TV, but folks who do say it's hard to tell the difference. And the PS4/5 and XBone have been doing something similar with various dynamic renderers for ages.
      • it likely produces a better image than just up-scaling would.

        One of the most interesting things about DLSS: while it is largely marketed as high-performance upscaling that looks nearly as good as native resolution without the performance hit, there are many examples of components of a scene actually looking *better* than they would if natively rendered.

        This is largely restricted to things difficult to deal with at native resolution such as aliasing effects, but it was actually kind of mindblowing to really compare some scenes side by side. You have grass or tr

    • by Enti ( 726249 )
      Somewhat the opposite. DLSS allows underpowered systems to render at lower resolutions as needed, then more convincingly upscale to a higher resolution (in this case, 4K). It's a good choice for the Switch, which isn't exactly trying to be a PS5, hardware-wise.
    • by Anonymous Coward

      7" is too big. Crappy joysticks. Ffs, use magnetic gimbals like a good RC controller. Sony needs to do a PS Vita successor.

      • Getting joysticks with a radio range longer than one metre would be a great improvement. It is very annoying if you are 1.5 metres from the base and your character just starts running left over a cliff, or takes half a second or a second to respond to a button press on the left controller.
        • You need to examine your joycons or local environment. Mine work reliably at 4 meters and the spec sheet shows they are rated for 10 meters. [tweaktown.com]

          The joycons work in the Bluetooth frequency so perhaps you have other BT devices in the area that are degrading your joycon signal.
      • by Saffaya ( 702234 )

        SONY could have had the Switch before Nintendo if they hadn't been blinded by greed into forcing players to buy TWO different pieces of hardware for the same games:

        Neutering the VITA by removing its TV port, then introducing another console, the non-portable Vita TV, if you wanted to play Vita games on a TV.
        All this while the previous generation, the PSP, had a TV port.

        Speaking of which, the PSP: the portable console that needed to spin discs to read games. How shortsighted that was is no match against SONY'

    • Seems to me that most of the time this thing is going to have way more power than it needs

      Not if they're having to rely on the fancy upscaling they call 'DLSS'.

    • Nintendo would still like to see ports of games like Doom Eternal. 30fps is enough but the console still needs to hit that.

      Also, don't forget the less power you have the more work you have to put into optimizing. The Wii U was a nightmare for optimization thanks to an underclocked CPU with extra cache meant to make up for it (the CPU was kept underclocked so the fan wouldn't bother Japanese housewives, no joke).

      Mighty No 9 is a good example of this. They insisted on a Wii U port since that was the
      • by Saffaya ( 702234 )

        Are you saying 30fps is enough for a fast-action First Person Shooter like Doom Eternal? Peculiar.

    • Seems to me that most of the time this thing is going to have way more power than it needs.

      I see you don't own a Switch. There are many Switch titles that currently are performance limited by the GPU and specifically its ability to scale to an external display.

  • How would one use "artificial intelligence to deliver higher-fidelity graphics more efficiently"? Or is this just more marketing flim-flam?
    • Because it's using upscaling - it's not 4K native.
      • What is the artificial intelligence part? It's just choosing from three or four different ways to do it? Is it more intelligent than say, a German Shepherd?
        • Is it more intelligent than say, a German Shepherd?

          If you have an AI as intelligent as a German Shepherd, you need to get it to market - That would easily be a trillion dollar business.

        • It's not really just simple up-scaling where you'd take a 720p source image and just blow it up to be three times as large so that it neatly fits onto a 4K screen, but rather creating both 720p and 4K renders of the gameplay and training a neural network to take the 720p render as input and produce the 4K render as output. Obviously it can't get it perfect, but it just needs to be good enough that humans will have difficulty telling the difference while the game is running. The training for this is done by
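The supervised setup that comment describes can be reduced to a toy sketch. Everything here is illustrative: a single least-squares linear layer stands in for the deep convolutional network a real DLSS-style model would use, and the "images" are short 1-D signals (NumPy only, hypothetical names):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training pairs: "high-res" 8-sample signals and their "low-res"
# 4-sample downsampled versions (averaging adjacent sample pairs).
hi_res = rng.random((1000, 8))
lo_res = hi_res.reshape(1000, 4, 2).mean(axis=2)

# Fit one linear layer W mapping low-res -> high-res by least squares.
# The objective matches the comment: reconstruct the high-res target
# from the low-res input.
W, *_ = np.linalg.lstsq(lo_res, hi_res, rcond=None)

upscaled = lo_res @ W
print(upscaled.shape)  # (1000, 8)
```

A real model adds depth, nonlinearities, and motion vectors as extra inputs, but the "learn from (low-res, high-res) pairs" structure is the same.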
    • It has to do with knowing which parts of the scene need more iterations or more detail to render. Say you have a screen that is nothing but black; then you don't need much rendering. You can think of it as the inverse problem of knowing whether a picture is rendered or actually a photo. People who look at fakes enough know exactly where to look to determine it's fake. Likewise, AI can be used to provide further rendering in those areas. In PC gaming FPS drops can be considered more accept. My laptop shi

      • You know, I was just thinking last night that I should try generating my own Mandelbrot. There just doesn't seem enough hours in the day.
        • Pretty much every programmer for generations has written a Mandelbrot generator, and then yours came along.
          • It's a good exercise to write one but simply looking at the code and playing with it to generate zooms or anything similar is also a good exercise.

            It's weird for me to think I am much older than this poster, so it probably comes down to how a lot of universities now focus less on theory and more on practical Java.

            • I think the problem is that getting into programming really is a much bigger step than it used to be.

              Once upon a time, it was all simpler, and every computer came with at least one programming guide. A user was never more than 20 or so keystrokes away from drawing their own graphics on the screen. That's "once upon a time."

              Now there are 5 abstraction layers you have to deal with to do the same thing, and if you have never programmed before.... you not only have to learn all 5 of them simultaneou
        • Writing one in BASIC on an 8-bit computer back as a kid in the 1980s sure was not the most efficient method! It was a fun way to spend a couple of hours, though some people took it way further, e.g. fractint. As for the scaling, is DLSS much different than throwing the image through a few layers of a neural network, i.e. a few giant matrix multiplies separated by non-linear functions to prevent them from collapsing? That process makes the effort a fixed amount dependent on the number of screen pixels (and the DL m
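Since the subthread keeps coming back to it, here is a minimal Mandelbrot generator of the sort being reminisced about: an ASCII renderer in a few lines of Python (dimensions and the character ramp are arbitrary choices):

```python
def mandelbrot(width=60, height=24, max_iter=30):
    """Render the Mandelbrot set as ASCII art using escape-time shading."""
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel grid onto the complex plane.
            c = complex(-2.0 + 3.0 * i / width, -1.2 + 2.4 * j / height)
            z = 0
            for n in range(max_iter):
                z = z * z + c
                if abs(z) > 2:          # diverged: point is outside the set
                    break
            # Later escape (or no escape) -> denser character.
            row += " .:-=+*#%@"[min(n * 10 // max_iter, 9)]
        rows.append(row)
    return "\n".join(rows)

print(mandelbrot())
```

Swap the character ramp for a color palette and you are most of the way to the 8-bit BASIC versions mentioned above.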
      • I don't think that's how DLSS [wikipedia.org] works.

        My impression is that a deep learning algorithm is trained to recognize an object in the scene from a bunch of different angles at low resolution and then replace it with a higher resolution version rendered beforehand by much better hardware. Then the low-end hardware feeds both the low-res scene and the motion vectors for all objects in the scene into the trained algorithm, which then replaces a bunch of the low-res stuff with higher-res stuff from its "library". The

        • My point was rather to address the OP's question of how AI can be used to find patterns to improve graphics, not this specific method. Thanks for the additional information.

          This makes a lot of sense and when the switch is done in the rendering pipeline, that seems like it could help avoid a lot of potential issues with mis-identifying the object. Then again though it seems like you could solve this in the case you mention without needing AI because I am unsure why the rendering pipeline has to guess the obje

  • Is Nintendo going to go with full backward compatibility or not? If not, they're going to be hurting because that's what their competitors provide and end of year 2022 is likely to be a very bad time economically to be telling people that your new console is so special that it's the only one that won't play previous generation titles.

    • It sounds like they will, in the same way that the DLSS capable RTX 2070 plays older games the same way a non-DLSS capable GTX 1080 would. And in newer games that offer DLSS, the GTX 1080 can run it while not having to concern itself with that feature. Afaik, games get developed pretty much the same as always, and DLSS gets added in afterwards. The difference now is that Nintendo is telling developers to start off with 4K textures. Sometimes when an older PC game gets remastered, the
  • Admittedly it's been years since I've looked, but I'd heard they didn't last long. About 3-5 years for early models (vs LCDs I've got that are pushing 15 and still work perfectly).
    • by Xenx ( 2211586 )
      Obviously, it'll depend upon the actual OLED display and the brightness level it's used at. But I was seeing figures of 40,000-50,000 hours a couple of years ago, and claims as high as 100,000 for some OLED TVs. Using the lower numbers, 4-5 years of continuous use seems about right. However, I also saw at least one case where the 40k life was at 25% brightness, and 100% brightness dropped it to 10k.
  • If this extra processing capability is enough to improve frame-rate issues in a number of games, then 4K is just a nice added bonus for those who have TVs that support it. I do feel that getting 4K without solving frame-rate issues would feel wrong, since upscaling can make up for some element of lower resolution, but there isn't much of a workaround for low frame-rates.
