Nvidia's Neural Textures Are a VRAM Game Changer

Nvidia’s Neural Texture Compression technology could be the answer to our VRAM woes: it can reduce the VRAM required to store even the most demanding textures by up to 95%. This could make lower-VRAM graphics cards more viable in the future, potentially extending the lifespan of more modest graphics cards as new games place ever-increasing demands on PC gaming hardware.
VRAM saturation has been a looming problem in PC gaming for years. As the visual fidelity of modern games improves, they typically require more graphics memory to hold textures and other assets for quick access by the GPU. That’s led some games to demand more than 8GB just to run, locking older and lower-end graphics cards out entirely. Some AAA games are even starting to need more than 16GB of VRAM for a 4K ray-traced experience, leaving some top-tier GPUs from recent years falling short.
Although ever-increasing VRAM capacities on new graphics cards are one solution, Nvidia’s Neural Texture Compression technology could help alleviate these problems for existing GPUs. Previewed at CES alongside the new RTX 5090 and 5080 graphics cards, Neural Texture Compression uses Nvidia’s tensor cores to infer what textures look like rather than storing them in full.
Nvidia’s current demo for the technology offers three modes. The first is native rendering with no neural compression applied. The second, NTC transcoded to BCn, or as Nvidia calls it, “Inference on Load,” compresses textures with the neural network but then converts them to traditional block-compression (BCn) formats as they load. This cuts disk space, potentially reducing game download and install sizes, but has less of an impact on VRAM. The third keeps textures neurally compressed in memory and decompresses only the texture samples needed for the current viewpoint. That’s what reduces VRAM requirements so drastically.
The only downside is the performance cost. In PCWorld’s testing, the most advanced technique did cut VRAM usage dramatically, but it also dropped performance by close to 20% at 4K. The more modest Inference on Load mode, however, reduced VRAM use by just under 65% with only a modest effect on performance.
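As a back-of-the-envelope illustration, here is what those reduction figures would mean for a hypothetical 12GB texture footprint. The percentages come from the numbers cited above; the baseline figure and the helper function are illustrative assumptions, not measurements from Nvidia’s demo:

```python
# Rough VRAM math using the reduction figures cited above.
# The 12 GB baseline is a hypothetical texture footprint, not a measured value.

def vram_after(baseline_gb: float, reduction: float) -> float:
    """VRAM needed for textures after applying a fractional reduction."""
    return baseline_gb * (1.0 - reduction)

baseline = 12.0  # hypothetical texture footprint in GB at native quality

inference_on_load = vram_after(baseline, 0.65)    # ~65% reduction (PCWorld's testing)
inference_on_sample = vram_after(baseline, 0.95)  # up to 95% reduction (Nvidia's claim)

print(f"Native:              {baseline:.1f} GB")
print(f"Inference on load:   {inference_on_load:.1f} GB")   # 4.2 GB
print(f"Full neural mode:    {inference_on_sample:.1f} GB") # 0.6 GB
```

Even under the more conservative mode, a texture set that once overflowed an 8GB card would comfortably fit, which is the crux of the VRAM argument.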
Nvidia recommends Inference on Load for midrange and older graphics cards but suggests that newer, high-end cards should handle the more advanced neural compression technique just fine. As the video above suggests, there is little to no impact on visual fidelity in either case, which means this technique, even if implemented modestly, could play an important role in game optimization in the future.
At the very least, offering it as a toggle so gamers with lower-VRAM graphics cards can play newer games would be a real benefit. No word yet on whether AMD or Intel are working on something similar, but I wouldn’t be surprised.
© 2001-2025 Ziff Davis, LLC., a Ziff Davis company. All Rights Reserved.
ExtremeTech is a federally registered trademark of Ziff Davis, LLC and may not be used by third parties without explicit permission. The display of third-party trademarks and trade names on this site does not necessarily indicate any affiliation or the endorsement of ExtremeTech. If you click an affiliate link and buy a product or service, we may be paid a fee by that merchant.
