When you read “artificial intelligence,” you might think of recent innovations such as ChatGPT, but AI has been used in video games since the 1950s.
From Pac-Man’s trademark ghosts to autonomous decision-making in “The Sims,” AI is essential for things like creating adaptive characters and storylines. Now, the rapid development of generative AI is opening up a new frontier for video games: endless open worlds, unique content, autonomous characters, and the potential for faster game development.
Act natural
Generative AI – artificial intelligence that creates text, images and audio in response to a prompt – is set to shake up one of the signature components of video games: non-playable characters, known as NPCs. These characters typically have a set pattern of behavior, and their mannerisms and speech are often stilted and unnatural.
“When we think about those NPCs, they look a bit weird,” says Alexis Rolland, development director of La Forge China, the Chinese branch of video games publisher Ubisoft’s research and development unit. “You can tell there is something off about what you’re seeing or hearing.”
Enter generative AI. Earlier this year, La Forge launched Ghostwriter, a text-generating AI tool designed to help writers create a greater variety of original dialogue for NPCs. In 2022, it also tested a technology that generates more realistic, natural gestures for NPCs, matched to the tone and mood of their speech.
“It takes speech as input and generates body gesture as output, so we can imagine those NPCs expressing themselves with non-scripted dialogues, having almost natural body animation, synthesized from speech,” says Rolland.
Combining generative AI elements like dialogue and animation could create “a fully fledged, AI-powered NPC that might have a little bit more natural and unpredictable behavior,” says Rolland.
Jitao Zhou, a student at Rikkyo University in Tokyo, is doing just that, working to generate more realistic NPCs that are smarter and less predictable.
“This NPC, using deep learning, does not have a fixed pattern, so can have a greater variety of movements,” says Zhou, adding that smarter NPCs will make games more entertaining and challenging.
Some publishers are already employing generative AI in NPCs to make conversations more realistic. Chinese gaming company NetEase uses ChatGPT to generate NPC dialogue in its recently released “Justice” mobile game, while Replica Studios recently introduced “AI-powered smart NPCs” for game engine giant Unreal Engine, which lets game developers use AI-generated voices to read NPC dialogue rather than hiring human voice actors.
However, one risk of using AI-generated NPCs is that game designers could lose control of the game narrative, says Julian Togelius, an associate professor at New York University, where he conducts research on AI and games. “(NPCs) may say stuff that is game-breaking or rude or breaks immersion,” he explains.
Creating “good” AI-generated NPCs that help the player is also far more difficult than creating enemies that fight against you, Togelius adds. “We haven’t seen that much advancement in the artificial intelligence that powers the other characters in the games, or that tries to model the player, or tries to generate the world — so we’re going to see a lot of advancement along these directions.”
Open worlds
Open-world games, like “Grand Theft Auto,” “Skyrim,” and “Elden Ring,” approach gameplay with non-linear quests and stories. That offers another opportunity for generative AI to reshape the gaming experience.
Togelius envisions a “huge open game” with infinite opportunities, new cities, landscapes, and people, each with their own backstories and interactive elements. By using data about the player gathered from previous gameplay, generative AI could create unique storylines and tailor-made quests pitched at just the right level for each player – “like a personalized dungeon master,” he adds.
Some research is already being done in this area. Takehiko Hoshino, also a student at Rikkyo University, has created an AI tool that he is teaching to generate its own mazes and dungeons one square at a time, based on previous ones it has encountered.
It’s still in the early stages of development, but Hoshino says the next step is to embellish the maze with features including “treasure chests, enemy characters, and other game-like features, such as traps and other tiles.”
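A minimal sketch can make that tile-by-tile idea concrete. The snippet below is a hypothetical Python illustration, not Hoshino’s actual tool: it counts which tile tends to follow which in a few example dungeon rows, then samples a new row one square at a time from those counts – a simplified stand-in for the deep-learning model described above.

```python
import random
from collections import defaultdict

# Hypothetical illustration of tile-by-tile dungeon generation ('#' = wall,
# '.' = floor). A simplified stand-in for the deep-learning approach described
# in the article, not Hoshino's actual tool.

EXAMPLE_ROWS = [
    "#.....#..#",
    "#..##....#",
    "#....##..#",
]

def learn_transitions(rows):
    """Count how often each tile follows each other tile in the examples."""
    counts = defaultdict(lambda: defaultdict(int))
    for row in rows:
        for prev, nxt in zip(row, row[1:]):
            counts[prev][nxt] += 1
    return counts

def generate_row(counts, length=10, start="#"):
    """Sample a new row one square at a time from the learned counts."""
    row = [start]
    for _ in range(length - 1):
        options = counts.get(row[-1])
        if not options:                      # unseen tile: fall back to floor
            options = {".": 1}
        tiles, weights = zip(*options.items())
        row.append(random.choices(tiles, weights=weights)[0])
    return "".join(row)

if __name__ == "__main__":
    transitions = learn_transitions(EXAMPLE_ROWS)
    for _ in range(3):
        print(generate_row(transitions))
```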
Near-infinite open worlds are already possible to some degree: “No Man’s Sky” (2016) is a “virtually endless game,” says Togelius, that uses a technique called procedural content generation to create customized fauna, flora, geology, and atmospheric conditions for each of its 18 quintillion unique planets.
To the average person, procedural content generation looks a lot like generative AI, says Togelius. But it relies on algorithms that generate content from predefined rules and data supplied by the game developer.
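To illustrate the distinction, here is a minimal, hypothetical sketch of procedural content generation (not code from “No Man’s Sky”): every trait is drawn from hand-authored tables, and the random choices are seeded by the planet’s coordinates, so the same coordinates always produce the same planet and nothing outside the developer’s predefined rules can appear.

```python
import random

# Hypothetical sketch of procedural content generation (not code from
# "No Man's Sky"): every attribute comes from hand-authored tables, and the
# random choices are seeded by the planet's coordinates, so the same
# coordinates always yield the same planet and the output can never leave
# the developer's predefined options.

CLIMATES = ["frozen", "temperate", "arid", "toxic"]
TERRAINS = ["mountainous", "oceanic", "cratered", "forested"]
FAUNA = ["none", "sparse", "abundant"]

def generate_planet(x: int, y: int, z: int) -> dict:
    """Deterministically derive a planet's traits from its coordinates."""
    seed = (x * 73856093) ^ (y * 19349663) ^ (z * 83492791)  # simple spatial hash
    rng = random.Random(seed)
    return {
        "climate": rng.choice(CLIMATES),
        "terrain": rng.choice(TERRAINS),
        "fauna": rng.choice(FAUNA),
        "gravity": round(rng.uniform(0.5, 2.0), 2),  # multiples of Earth gravity
    }

if __name__ == "__main__":
    print(generate_planet(12, -7, 3))  # same coordinates always print the same planet
```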
While developers retain control over procedurally generated content, generative AI can produce unplayable levels or deviate in unintended ways from the game’s narrative. “Games have functional constraints such that levels need to be completable and NPCs should not lie about the game world,” Togelius adds.
But generative AI could impact the game world in other ways.
Players are already adapting gameplay to their own preferences and needs with user-generated content (UGC), which is a key component of many games, including “Fortnite,” “Minecraft” and “The Sims.” Generative AI could make the production of UGC easier and more accessible for players, as well as raise the quality of content.
“Generative AI has the potential to allow for a much broader and more emergent set of personalized and reactive player experiences,” a spokesperson from Maxis, the developer behind “The Sims,” told CNN in an email.
“Player customization today is limited by the complexity of tools and (user experience) that we can expose players to, but some of the new models can make it easier for the game to interpret and respond to what the player wants to do,” says Maxis.
An additional tool
While gamers are excited about the potential for gameplay, generative AI is likely to impact development before it alters the user experience.
Maxis is developing generative AI tools, currently in varying stages of maturity, that can support game designers, eliminating repetitive tasks and enabling developers to work on more interesting problems, according to a spokesperson.
At La Forge, generative AI tools like Ghostwriter or ZooBuilder, which generates 3D animations of four-legged animals from video footage, could help designers “accelerate the most tedious part of their process, so that they can really focus on the more creative and interesting parts,” says Rolland.
Creatives across all kinds of industries have raised concerns about generative AI taking their jobs, but Rolland is quick to add that this new technology won’t replace human game developers. Animators faced a similar existential threat with the advent of motion capture, which Rolland says did not actually impact jobs but became a tool to create better graphics.
“We’ve never had as many animators as we have today, and we still need more. Motion capture really became part of their workflow as an additional tool,” says Rolland. “I think with generative AI, it’s essentially the same thing – or at least, we’re approaching it with the exact same mindset here at Ubisoft.” However, there are still a lot of unanswered “legal and ethical aspects” to using generative AI, including artists’ copyright, he adds.
La Forge is eager to explore the opportunities, such as the potential to increase iteration speed, and independent game designers are going to benefit from that too, he adds. “This technology becoming increasingly accessible is going to empower a lot of the smaller studios to produce games and scale their productions, and maybe reach higher quality than what they would have had without generative AI.”
Ghostwriter and the speech-to-gesture animation technology have “passed the simple prototype stage,” says Rolland, and La Forge is now exploring how they would fit into the development pipeline.
“Video games are in for quite a trip in the next decade or two,” says Togelius. “It requires us to change the way we’re thinking when it comes to game design, but I think that when it happens, games are just going to get so much better.”