3D Archive
As we look to the future, maintaining a proprietary IR format (even one based on an open-source project) is counter to our commitments to open technologies, so Shader Model 7.0 will adopt SPIR-V as its interchange format. Over the next few years, we will be working to define a SPIR-V environment for Direct3D, and a set of SPIR-V extensions to support all of Direct3D’s current and future shader programming features through SPIR-V. This will allow developers to take better advantage of existing tools and unify the ecosystem around investing in one IR. ↫ Chris Bieneman and Cassie Hoef at the DirectX Developer Blog SPIR-V is the Khronos Group’s “intermediate language for parallel computing and graphics”. I don’t know what any of this means, but any adoption of Khronos technologies is a good thing, especially by a heavyweight like Microsoft.
This article is a partial-rebuttal/partial-confirmation to KGOnTech’s Apple Vision Pro’s Optics Blurrier & Lower Contrast than Meta Quest 3, prompted by RoadToVR’s Quest 3 Has Higher Effective Resolution, So Why Does Everyone Think Vision Pro Looks Best? which cites KGOnTech. I suppose it’s a bit late, but it’s taken me a while to really get a good intuition for how visionOS renders frames, because there is a metric shitton of nuance and it’s unfortunately very, very easy to make mistakes when trying to quantify things. This post is divided into two parts: Variable Rasterization Rate (VRR) and how visionOS renders frames (including hard numbers for internal render resolutions and such), and a testbench demonstrating why photographing the visual clarity of Vision Pro (and probably future eye tracked headsets) may be more difficult than a DSLR pointed into the lenses (and how to detect the pitfalls if you try!). ↫ Shiny Quagsire I did it. I think I managed to find an article that isn’t just over my head, but also over most of your heads. How’s that feel?
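To give a rough feel for why “internal render resolution” is such a slippery number on an eye-tracked headset, here is a small, self-contained C++ sketch of the general idea behind variable rasterization rate: the frame is divided into tiles, and tiles far from the gaze point are shaded at a fraction of full resolution. The panel size, tile grid, and falloff values below are made-up illustrative numbers, not visionOS’s actual parameters, and this is not how Metal’s rasterization rate maps are configured in practice.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Illustrative per-eye panel resolution and an 8x8 "rate map" grid.
    // All numbers here are assumptions for the sake of the example.
    const int display_w = 3648, display_h = 3144;
    const int tiles = 8;

    double shaded_pixels = 0.0;
    for (int ty = 0; ty < tiles; ++ty) {
        for (int tx = 0; tx < tiles; ++tx) {
            // Normalized distance of this tile's center from an assumed
            // centered gaze point (0 at the fovea, ~0.7 in the corners).
            double dx = (tx + 0.5) / tiles - 0.5;
            double dy = (ty + 0.5) / tiles - 0.5;
            double dist = std::sqrt(dx * dx + dy * dy);
            // Shading rate falls off with eccentricity: full resolution at the
            // fovea, then half and quarter resolution per axis further out.
            double rate = dist < 0.15 ? 1.0 : (dist < 0.35 ? 0.5 : 0.25);
            double tile_pixels = double(display_w) * display_h / (tiles * tiles);
            shaded_pixels += tile_pixels * rate * rate; // rate applies per axis
        }
    }
    std::printf("Panel pixels:  %.1f M\n", double(display_w) * display_h / 1e6);
    std::printf("Shaded pixels: %.1f M with this toy rate map\n", shaded_pixels / 1e6);
}
```

With these toy numbers the headset shades only a fraction of the panel’s pixels at full density, and where that density lands depends on where the eyes are looking, which is exactly why a camera parked in front of the lens can end up measuring the low-rate periphery.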
Starting in the release 560 series, it will be recommended to use the open flavor of NVIDIA Linux Kernel Modules wherever possible (Turing or later GPUs, or Ada or later when using GPU virtualization). ↫ NVIDIA developer forums Slowly but surely, NVIDIA is taking a more favourable position towards open source. It still feels surreal.
FuryGpu is a real hardware GPU implemented on a Xilinx Zynq UltraScale+ FPGA, built on a custom PCB and connected to the host computer using PCIe. Supporting hardware features equivalent to a high-end graphics card of the mid 1990s and a full modern Windows software driver stack, it can render real games of that era at beyond real-time frame rates. ↫ FuryGpu A really cool project, undertaken by a single person – who also wrote the Windows drivers for it, which was apparently the hardest part of the project, as the announcement blog post details. Another blog post explains how the texture units work.
The company raked in $13.5 billion in revenue since May, it revealed in its Q2 2024 earnings, with the unprecedented demand for its generative AI chips blowing past any difficulty it might have had selling desktop and laptop GPUs into a shrinking PC industry. Data center accounted for a record $10.32 billion of that revenue, more than doubling in just one quarter, and Nvidia made $6.188 billion in profit as a result — up 843 percent year over year. And while gaming is more than a billion dollars short of pandemic highs, it was actually up 22 percent year over year to $2.48 billion in revenue, too. I don’t really post about financial results anymore – the amounts of money “earned” by tech companies are obscene and utterly destructive – but I do want to highlight NVIDIA here, if only to be able to link back to this a few years from now after the “AI” bubble has popped.
You can now play with NVIDIA GeForce graphics card BIOS like it’s 2013! Over the last decade, NVIDIA had effectively killed video BIOS modding by introducing BIOS signature checks. With GeForce 900-series “Maxwell,” the company added an on-die security processor on all its GPUs, codenamed “Falcon,” which among other things, prevents the GPU from booting with unauthorized firmware. OMGVflash by Veii; and NVflashk by Kefinator (forum names), are two independently developed new tools that let you flash almost any video BIOS onto almost any NVIDIA GeForce graphics card, bypassing “unbreakable” barriers NVIDIA put in place, such as BIOS signature checks; and vendor/device checks (cross-flashing). vBIOS signature check bypass works up to RTX 20-series “Turing” based GPUs, letting you modify the BIOS the way you want, while cross-flashing (sub-vendor ID check bypass) works even on the latest RTX 4090 “Ada.” No security is unbreakable. This will hopefully enable a lot of unlocking and safe performance boosts for artificially stunted cards.
The latest version of Intel Arc GPU Graphics Software introduced an interesting change that isn’t reflected in the Release Notes. The installer of the 101.4578 beta drivers adds a “Compute Improvement Program” (CIP) component as part of the “typical” setup option that is enabled by default. Under the “custom” installer option that you have to activate manually, you get to select which components to install. The Compute Improvement Program can be unchecked here, to ensure data collection is disabled. The benignly named CIP is a data collection component that tracks your PC usage and performance in the background (not just that of the GPU), so Intel can use the data to improve its future products. Intel created a dedicated webpage that spells out what CIP is, and what its scope of data collection is, where it says that CIP “does not collect your name, email address, phone number, sensitive personal information, or physical location (except for country).” NVIDIA’s and AMD’s drivers also contain telemetry collection software, and only AMD tries to be as transparent as possible about it by offering a check box during installation, whereas Intel and NVIDIA hide it behind the “custom” option. Needless to say, Linux users don’t have to worry about this.
Recently we have been working on native Wayland support on Linux. Wayland is now enabled for daily builds and if all goes well, it will be enabled for Blender 3.4 release too. A major productivity application adding Wayland support – especially one like Blender – is a big deal.
Graphics card prices remain hugely inflated compared to a few years ago, but the good news is that things finally seem to be getting consistently better and not worse. This is good news. I don’t think I’ve ever experienced something like this before in my life, and I can’t wait for prices to truly reach sane levels again, as both my fiancée and I are due for an upgrade.
Taking NVIDIA into the next generation of server GPUs is the Hopper architecture. Named after computer science pioneer Grace Hopper, the Hopper architecture is a very significant, but also very NVIDIA update to the company’s ongoing family of GPU architectures. With the company’s efforts now solidly bifurcated into server and consumer GPU configurations, Hopper is NVIDIA doubling down on everything the company does well, and then building it even bigger than ever before. The kinds of toys we mere mortals rarely get to play with.
The Ampere graphics card was also supposed to be less attractive to miners, but it appears that the chipmaker shot itself in the foot and inadvertently posted a driver that unlocks mining performance on the RTX 3060. Meaning, anyone can unlock full mining performance with a minimum of effort. Well that was short-lived.
You may have noticed that it’s kind of hard to find any new graphics card as of late, since supplies are limited for a whole variety of reasons. For the launch of its upcoming RTX 3060 GPU, which might prove to be a relatively affordable and capable upgrade for many, NVIDIA is going to try and do something about the shortage – by crippling the card’s suitability for cryptominers. RTX 3060 software drivers are designed to detect specific attributes of the Ethereum cryptocurrency mining algorithm, and limit the hash rate, or cryptocurrency mining efficiency, by around 50 percent. To address the specific needs of Ethereum mining, we’re announcing the NVIDIA CMP, or, Cryptocurrency Mining Processor, product line for professional mining. CMP products — which don’t do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and, thus, don’t impact the availability of GeForce GPUs to gamers. It’s a good first step, I guess, but I feel the market is so starved at the moment that this will be a drop in the ocean.
With much anticipation and more than a few leaks, NVIDIA this morning is announcing the next generation of video cards, the GeForce RTX 30 series. Based upon the gaming and graphics variant of NVIDIA’s Ampere architecture and built on an optimized version of Samsung’s 8nm process, NVIDIA is touting the new cards as delivering some of their greatest gains ever in gaming performance. All the while, the latest generation of GeForce will also be coming with some new features to further set the cards apart from and ahead of NVIDIA’s Turing-based RTX 20 series. The first card out the door will be the GeForce RTX 3080. With NVIDIA touting upwards of 2x the performance of the RTX 2080, this card will go on sale on September 17th for $700. That will be followed up a week later by the even more powerful GeForce RTX 3090, which hits the shelves September 24th for $1500. Finally, the RTX 3070, which is being positioned as more of a traditional sweet spot card, will arrive next month at $499. My GTX 1070 is still going strong, and I found the RTX 20xx range far too overpriced for the performance increase they delivered. At $499, though, the RTX 3070 looks like a pretty good deal, but it wouldn’t be the first time supplies end up low and prices skyrocket as a result.
While NVIDIA’s usual presentation efforts for the year were dashed by the current coronavirus outbreak, the company’s march towards developing and releasing newer products has continued unabated. To that end, at today’s now digital GPU Technology Conference 2020 keynote, the company and its CEO Jensen Huang are taking to the virtual stage to announce NVIDIA’s next-generation GPU architecture, Ampere, and the first products that will be using it. Don’t let the term GPU here fool you – this is for the extreme high-end, and the first product with this new GPU architecture will set you back a cool $199,000. Any consumer-oriented GPUs with this new architecture are at the very least a year away.
Blender, the open source 3D computer graphics software package, has released a major new version, Blender 2.80. Among other things, it sports a brand new user interface designed from the ground up, a new physically based real-time renderer, and much, much more. The 2.80 release is dedicated to everyone who has contributed to Blender. To the tirelessly devoted developers. To the artists inspiring them with demos. To the documentation writers. To the Blender Cloud subscribers. To the bug reporters. To the designers. To the Code Quest supporters. To the donators and to the members of the Development Fund. Blender is made by you. Thanks! I remember, way back in the early 2000s, when people would adamantly state that professional software for fields such as image manipulation and 3D graphics would never be something the open source community could create or maintain. And here we are, almost two decades later, and Blender is a household name in its field, used for all kinds of big, megabudget projects, such as Marvel movies, Ubisoft games, work at NASA, and countless others. Blender is a stunning success story.
Regrettably, there is little to read about the hardware invented around 1996 to improve 3D rendering and in particular id Software’s ground-breaking title. Within the architecture and design of these pieces of silicon lies the story of a technological duel between Rendition’s V1000 and 3dfx Interactive’s Voodoo. With the release of vQuake in early December 1996, Rendition seemed to have taken the advantage. The V1000 was the first card able to run Quake with hardware acceleration, claiming a 25 Mpixel/s fill-rate. Just in time for Christmas, the marketing coup allowed players to run the game at a higher resolution with a higher framerate and 16-bit colors. But as history would have it, a flaw in the design of the Vérité 1000 was to be deadly for the innovative company. I had never heard of Rendition or its V1000, and this story illustrates why. An absolutely fascinating and detailed read, and be sure to also read the follow-up article, which dives into the 3Dfx Voodoo 1 and Quake.
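As a back-of-the-envelope check on what a 25 Mpixel/s fill-rate actually buys, here is a tiny C++ calculation; the resolution and overdraw factor are assumptions chosen for illustration, not measurements from vQuake.

```cpp
#include <cstdio>

int main() {
    const double fill_rate = 25e6;              // claimed pixels per second
    const double frame_pixels = 512.0 * 384.0;  // one period-typical resolution, as an example
    const double overdraw = 2.5;                // assumed average times each pixel is drawn

    // Upper bound on frame rate if fill rate were the only limit.
    std::printf("~%.0f fps at 512x384 with %.1fx overdraw\n",
                fill_rate / (frame_pixels * overdraw), overdraw);
}
```

Even with generous assumptions, fill rate alone caps the achievable frame rate, which is why these Mpixel/s figures carried so much marketing weight at the time.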
AnandTech has published its review of AMD’s surprise new high-end Radeon VII graphics card, and the results should be cause for some cautious optimism among PC builders. Overall then, the Radeon VII puts its best foot forward when it offers itself as a high-VRAM prosumer card for gaming content creators. And at its $699 price point, that’s not a bad place to occupy. However for pure gamers, it’s a little too difficult to suggest this card instead of NVIDIA’s better performing GeForce RTX 2080. So where does this leave AMD? Fortunately for the Radeon rebels, their situation is improved even if the overall competitive landscape hasn’t been significantly changed. It’s not a win for AMD, but being able to compete with NVIDIA at this level means just that: AMD is still competitive. They can compete on performance, and thanks to Vega 20 they have a new slew of compute features to work with. It’s going to win AMD business today, and it’s going to help prepare AMD for tomorrow for the next phase that is Navi. It’s still an uphill battle, but with Radeon VII and Vega 20, AMD is now one more step up that hill. While not a slam-dunk, the Radeon VII definitely shows AMD can get at least close to NVIDIA’s RTX cards, and that should make all of us quite happy – NVIDIA has had this market to itself for far too long, and it’s showing in the arrogant pricing the company maintains. While neither RTX cards nor this new Radeon VII make me want to replace my GTX 1070 – and its custom watercooling parts – it at least makes me hopeful that the coming years will be more competitive.
Ars Technica writes: On Monday, Nvidia took the unusual step of offering a revised Q4 2019 financial estimate ahead of its scheduled disclosure on February 14. The reason: Nvidia had already predicted low revenue numbers, and the hardware producer is already confident that its low estimate was still too high. The original quarterly revenue estimate of $2.7 billion has since dropped to $2.2 billion, a change of roughly 19 percent. A few new data points factor into that revision. The biggest consumer-facing issue, according to Nvidia, is “lower than expected” sales of its RTX line of new graphics cards. This series, full of proprietary technologies like a dedicated raytracing processor, kicked off in September 2018 with the $1,199 RTX 2080 Ti and the $799 RTX 2080. The RTX launch was bungled, and the cryptocurrency hype is way past its prime. It’s not a surprise Nvidia is going to experience a rough year.
FreeSync support is coming to Nvidia; at its CES event today, Nvidia announced the GSync-Compatible program, wherein it says it will test monitors that support the VESA DisplayPort Adaptive-Sync protocol to ascertain whether they deliver a “baseline experience” comparable to a GSync monitor. Coincidentally, AMD’s FreeSync utilizes the same VESA-developed implementation, meaning that several FreeSync-certified monitors will now be compatible with Nvidia’s 10- and 20-series GPUs. This is great news, since GSync support requires additional hardware, which increases prices; you’ll find that the GSync version of a display is always significantly more expensive than the FreeSync version.
NVIDIA is proud to announce PhysX SDK 4.0, available on December 20, 2018. The engine has been upgraded to provide industrial grade simulation quality at game simulation performance. In addition, PhysX SDK has gone open source, starting today with version 3.4! It is available under the simple 3-Clause BSD license. With access to the source code, developers can debug, customize and extend the PhysX SDK as they see fit.
I'm not well-versed enough in this area to gauge how big of a deal this news is, but regardless, it seems like a good contribution to the open source community.
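For anyone curious what having the SDK source in hand looks like in practice, below is a minimal C++ sketch of initializing PhysX and stepping an empty scene, following the standard PhysX setup pattern; the thread count and step size are illustrative choices, and a real project would add error handling and actual actors.

```cpp
#include <PxPhysicsAPI.h>

using namespace physx;

static PxDefaultAllocator gAllocator;
static PxDefaultErrorCallback gErrorCallback;

int main() {
    // Foundation and physics objects are the entry points into the SDK.
    PxFoundation* foundation =
        PxCreateFoundation(PX_PHYSICS_VERSION, gAllocator, gErrorCallback);
    PxPhysics* physics =
        PxCreatePhysics(PX_PHYSICS_VERSION, *foundation, PxTolerancesScale());

    // A scene with gravity, a CPU dispatcher, and the default filter shader.
    PxSceneDesc sceneDesc(physics->getTolerancesScale());
    sceneDesc.gravity = PxVec3(0.0f, -9.81f, 0.0f);
    PxDefaultCpuDispatcher* dispatcher = PxDefaultCpuDispatcherCreate(2); // 2 worker threads (arbitrary)
    sceneDesc.cpuDispatcher = dispatcher;
    sceneDesc.filterShader = PxDefaultSimulationFilterShader;
    PxScene* scene = physics->createScene(sceneDesc);

    // Step the (empty) simulation at 60 Hz for one second.
    for (int i = 0; i < 60; ++i) {
        scene->simulate(1.0f / 60.0f);
        scene->fetchResults(true); // block until the step completes
    }

    scene->release();
    dispatcher->release();
    physics->release();
    foundation->release();
    return 0;
}
```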