As the company describes it, Nvidia ACE For Games is a suite of middleware. It includes NeMo, the generative AI toolkit; Riva, which handles both automatic speech recognition (ASR) and text-to-speech (TTS); and, rounding out the suite, Omniverse Audio2Face, which matches an NPC's facial expressions to the spoken dialogue. The company specifically mentions compatibility with Unreal Engine 5 and the MetaHuman character creator.

There's also a demo video showing Nvidia ACE For Games in action, though to be fair, a scripted showcase doesn't do much to show off the possibilities of generative AI. As it is, beyond the apparent voice input of the player character's line, the dialogue is not much different from contemporary dialogue choices in games. What the video does show, especially for the nitpicky crowd, are the tech's imperfections. The company says the tech allows for "accurate facial animations and expressions, all in your native tongue". In the case of this showcase video, the native language is obviously American English, so we get the common overpronunciation of the name Kumon Aoki.

According to The Verge, Nvidia VP of GeForce Platform Jason Paul says that ACE For Games can scale to more than one character at a time, and can even let NPCs talk to each other. Theoretically, anyway, since Paul also says he hasn't actually seen that being tested. While that could be amazing for immersion in RPGs with massive worlds, one can already imagine the kind of uncanny valley sensations it would elicit.

(Source: Nvidia, The Verge)
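To make the division of labor between those components concrete, here is a minimal, hypothetical Python sketch of the ACE-style pipeline as described above: player speech is transcribed, a language model generates the NPC's reply from it, the reply is synthesized to audio, and the audio drives facial animation. None of these functions are the real NeMo, Riva, or Audio2Face APIs; they are placeholder stubs invented purely to show how the pieces would hand data to one another.

```python
# Hypothetical sketch of an ACE-style NPC dialogue pipeline.
# All functions below are stand-in stubs, not actual Nvidia APIs.

from dataclasses import dataclass

@dataclass
class NPCResponse:
    text: str              # dialogue line produced by the language model
    audio: bytes            # synthesized speech
    blendshapes: list[float]  # per-sample facial animation weights

def transcribe(player_audio: bytes) -> str:
    """Stand-in for the ASR stage (Riva's role): player speech -> text."""
    return "What can you tell me about this city?"

def generate_reply(player_text: str, npc_backstory: str) -> str:
    """Stand-in for the generative stage (NeMo's role), conditioned on backstory."""
    return f"Plenty, stranger. But folk like me ({npc_backstory}) don't talk for free."

def synthesize(reply_text: str) -> bytes:
    """Stand-in for the TTS stage (Riva's role): text -> speech audio."""
    return reply_text.encode("utf-8")  # placeholder for real audio bytes

def animate_face(reply_audio: bytes) -> list[float]:
    """Stand-in for the animation stage (Audio2Face's role): audio -> face curves."""
    return [0.0] * len(reply_audio)  # placeholder blendshape weights

def npc_turn(player_audio: bytes, npc_backstory: str) -> NPCResponse:
    """One conversational turn: ASR -> generation -> TTS -> facial animation."""
    text = generate_reply(transcribe(player_audio), npc_backstory)
    audio = synthesize(text)
    return NPCResponse(text=text, audio=audio, blendshapes=animate_face(audio))

if __name__ == "__main__":
    turn = npc_turn(b"...player mic capture...", "ramen shop owner, wary of outsiders")
    print(turn.text)
```

The point of the sketch is only the data flow: each stage consumes the previous stage's output, which is also why Paul's multi-NPC scenario is plausible in principle, since nothing in the chain depends on the "player" audio coming from a human rather than another NPC's TTS output.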
