Unity demos AI-driven cyberpunk game showing why machines aren't about to replace game...

Cal Jeffrey

In context: Convai Technologies is a startup geared toward developing conversational AI avatars for the virtual world. The characters are capable of carrying on real-time, open-ended conversations and independent actions. They can perform host or guide functions in the metaverse, or developers can use them as NPCs in a game – or so the company claims.

Convai (pronounced convey) partnered with Unity on a game called Project Neural Nexus. It's a cyberpunk-themed first-person shooter with all the dystopian tropes you would expect from the genre, including a neon-infused locale uncreatively named "Neo City" and clothing that appears to be ripped straight out of Cyberpunk 2077.

Naturally, the game makes heavy use of Convai's AI NPCs. The entire project appears to be not much more than an interactive advertisement or proof of concept for Convai's "smart NPC" platform. It's an interesting application of conversational AI in that you can presumably strike up an unscripted conversation with any NPC in the game, but that's about as far as the novelty goes.

Judging by the trailer Convai released yesterday (masthead), Project Neural Nexus will not knock anybody's socks off. As mentioned, the visuals are appropriate for the dystopian cyberpunk setting, and the game supposedly uses real-time ray tracing. However, the appearance is dull and washed out, contradicting the genre's typically vibrant traditions. Instead of looking bright and gritty, it looks foggy and overexposed.

The worst part is those unscripted, real-time conversations. Every line of dialogue will remind you that you are talking to a soulless machine. The voices are natural sounding enough. They don't have that hesitant robotic quality like the character in the introductory reel above. The problem is that the delivery is flat and emotionless. It sounds like bored developers sitting around the office reading lines to each other.

Combat looks janky, too. We see a mech leap to attack a soldier, only to get stuck in the air before the scene cuts. Another mech pounds on some video signage that explodes with more force and pyrotechnics than one would expect from an electric billboard. Instead of blue sparks and smoke, it looks like the mech punctured a gasoline tank.

Considering it's more or less a tech demo, Project Neural Nexus does have some merit. With further advancements, it could potentially deliver more convincing voice acting. Competent developers could also improve the visuals and gameplay. However, as a platform for automated world-building, let's just say that studios had better hang on to their scriptwriters. Having meaningless conversations with NPCs seems fun for about five minutes. The tech needs to bake a bit more.


 
Very monotone dialogue... and crappy dialogue. You can tell it's AI-driven.
Still has a long, long way to go
 
It sounds like a neat proof of concept. AI has a lot of potential to sort of work around the edges and handle the 20 in the 80/20 rule scenario for game development.

Humans make 80% of the game, assets, etc., and the AI can come in and assist or fully handle the difficult 20% and add some neat, unscripted, emergent properties to, say, an open world that just wouldn't be in the budget otherwise.

Like anything though, once a tool becomes a crutch (think upscaling and frame generation), it will slowly become the norm for the sake of efficiency.
 
We're probably 10 years away from procedurally created content matching hand-crafted experiences. Imagine if you could play Skyrim for several thousand hours and have a unique experience every time.
 
I chalk this more up to poor execution than lack of AI as a tool for better dialog. This demo is just kind of poorly done all around.

If I can make an LLM talk like a pirate or a cat, then clearly you can make dialog better than this. Also, I think the translation from text to voice is really the weak spot here. The "text-to-speech translator" does not know how to properly emphasize words or how to adjust its tone given the situation. That is not a tech topic I have seen much work focused on. There is also the real-time lip syncing that needs a lot of work.
 
We're probably 10 years away from procedurally created content matching hand-crafted experiences. Imagine if you could play Skyrim for several thousand hours and have a unique experience every time.

If this were the case, then developers wouldn't make money on a game you could sink thousands of hours into if they just sold you a game key for $60 and that was it.

They would need to make the game subscription-based, or have the game be available only on a streaming service where you're basically paying a subscription to play it (the only difference between that and a sub-based payment plan for a single game is that you'd have access to other games in the streaming service's library). Or go the route of microtransactions up the wazoo to keep a revenue stream going.
 
Imo

Are unscripted conversations with NPCs powered by an LLM really what gamers want?

Definitely not in its current form. I want to see the final product. The NPCs should be far from sounding like AI bots and should at least attempt to sound natural if they want gamers to accept it. Less monotone and more expression, like the MetaHuman had. The expression should match the topic and not be overdone. The AI has a disconnect with the feeling that the artist wants the user to experience. This is why generative AI will still need human artists to paint a more palatable conversation that is relatable and enhances your senses. The target demographic is humans, who feel and need a stimulus that currently only a creative human artist can hone to perfection.
If this was presented as is, I would reject it. Does it have potential as a tool in the right artist's hands? Yes, I am optimistic it does.
 
I am more interested in whether it could be used to make NPCs not talk, but BEHAVE LIKE HUMANS!
I mean move, look around, sneeze, rub a nose, and simply do all the small things real human beings do.
That would be a killer feature even without top-notch graphics.
I want AI to be given a human doll with a wide range of limb movements and turns, and to be helped to control it like a human being would. That would be epic.
 
It is a good idea for giving some life to the environment, but not to create gameplay.

Just walking around in the city and having situations created by AI makes sense, but driving your main story or your side quests with AI is a mistake.
 