Crytek’s ray-tracing technology will operate on current-generation graphics cards.
A new video debuted today, March 15th, on OC3D.net, showing the kind of visuals many of us expected from Nvidia’s RTX showcase, and which were previously considered unattainable. The footage displays impressive real-time ray-traced reflections achieved on a standard AMD Vega 56 graphics card.
Notably, no additional hardware was needed. It remains unclear when Crytek will ship this technology in their engine, let alone in an actual game, but the demonstration is undeniably intriguing. One significant caveat is Crytek’s historical difficulty getting developers to license their engine.
https://www.overclock3d.net/news/...aced..._vega_56/1
Neon Noir
The outcome remains uncertain. Setting aside the somewhat misleading RTX-style presentation and the lengthy delays in getting ray tracing working, Crytek hasn’t indicated that any developers are interested in this new engine version, much less that anyone has a completed build using the technology.
It's understandable to believe that Cevat Yerli’s public relations failure – his inability to compensate his developers while maintaining an extravagant lifestyle – has negatively impacted both his supporters and potential developers considering licensing his engine.
Nevertheless, the primary reasons CryEngine sees limited usage may be its licensing costs and the difficulty of mastering it. Of all the CryEngine games I’ve seen, only those developed by Crytek themselves appear to fully leverage the engine’s capabilities.
I still consider this video demonstration a significant achievement if its claims about genuine real-time ray tracing through a game engine are accurate. However, it would require Crytek returning to active game development to demonstrate its full potential, and I seriously doubt they could convince skilled developers to trust them financially.
I don’t have time to view the video presently, but does it clarify the process at all?
Specifically, NVIDIA set aside dedicated hardware for its implementation. If this instead relies on existing card capabilities, what else is being neglected so those resources can be used for this feature? It seems a trade-off must be made somewhere: you gain ray-traced reflections or whatever it incorporates, and sacrifice what? The natural movement of tree branches, perhaps?
I’m simply inquiring because I understand a trade-off is required to achieve this. As someone not heavily involved in gaming, I'm unfamiliar with the specifics of ray tracing and its impact on game visuals.
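For anyone unfamiliar with what ray tracing actually computes: at its core, the renderer fires a ray per pixel and tests it against scene geometry, repeating this millions of times per frame, which is where the performance cost comes from. A minimal sketch of the classic ray-sphere test, purely illustrative (the function name and scene values are invented, not anything from Crytek's demo):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None."""
    # Solve |origin + t*direction - center|^2 = radius^2 for t (a quadratic in t).
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None  # only hits in front of the ray count

# A ray fired down the -z axis at a unit sphere centered 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0
```

RTX cards accelerate exactly this kind of intersection test (against triangles, via a BVH) in fixed-function hardware; a software approach like Crytek's has to spend ordinary shader time on it, which is where the trade-off the post asks about comes in.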
Given the claim that dedicated hardware like the RTX cards isn’t required, the ray tracing would likely have to happen in the engine itself on general-purpose compute; that also suggests a heavy reliance on multi-threaded APIs such as Vulkan and DX12.
The primary concern for me is whether the hardware implementation (RTX) surpasses the engine/API approach. Typically, one would anticipate a hardware advantage, as demonstrated by technologies like ShadowPlay, but it remains uncertain due to Nvidia’s limited optimization results with RTX thus far.
It's possible this represents a last-minute attempt by Crytek to boost their engine’s appeal considering the minimal developer interest; in that scenario, it might function as a “basic ray tracing capability accessible on current-generation GPUs” rather than replicating everything offered by RTX.
Dedicated hardware should offer a performance advantage, and it would be intriguing to see how large that gap is. A comparison between a 1660 Ti and a 2060 would be useful in this demonstration: we already have a solid understanding of how they differ when ray tracing isn’t involved, and since the core architecture is otherwise the same, the test would give real insight into where things are headed.