According to GameSpot, Lies of P publisher Neowiz is actively exploring AI to boost its business. Co-CEO Sean Kim told Game Informer that in Korea, where Neowiz is based, it’s hard to find a game company that isn’t using AI tools like ChatGPT or Gemini. Kim said the primary focus is on R&D: using AI to automate routine operational tasks, such as processing test cases and analyzing large volumes of data. The stated goal is to let teams work faster and focus on refining core strategies to maximize player engagement. Kim explained that processing data faster should lead to better strategic decisions about supporting developers and engaging the community. This echoes similar sentiments from studios like Larian, which wants AI to handle the tasks nobody wants to do.
The Boring AI Reality
Here’s the thing: when executives talk about AI in games, we often jump to images of ChatGPT writing dialogue or Midjourney generating concept art. But what Neowiz is describing is way more… corporate. It’s about operational efficiency. Processing test cases, sifting through player data logs, automating reports. Basically, the unsexy backend stuff that makes a publishing machine run. And you know what? That’s probably where the most immediate, tangible impact of AI will be for a lot of companies. It’s less about creating content and more about analyzing it at a scale humans can’t match. The promise is that freeing up human hours from spreadsheets means those hours can go into something creative. But is that how it actually plays out?
The Engagement Machine
That phrase “maximize player engagement” is doing a lot of heavy lifting. It sounds positive, right? Better decisions for the community! But in publisher-speak, “engagement” is a cold metric: playtime, retention, monetization windows. Using AI to parse “large volumes of operational data” faster means they can more quickly identify what’s keeping players hooked, or what’s driving them away, and adjust live-service tactics, store rotations, or difficulty curves in real time. It’s about optimizing the product, not necessarily enriching the artistic vision. This is the natural evolution of the data-driven games-as-a-service model, just with a more powerful engine. The question becomes: when you algorithmically maximize for engagement, what do you lose?
The Shadow of Replacement
And this is where the conversation gets uncomfortable. Kim’s comments are framed entirely around augmentation—helping teams work better. But the industry is already showing a darker side. Look at the reports from Candy Crush maker King, where laid-off staff claimed they were being replaced by the AI tools they helped build. Or EA reportedly looking to AI to “ramp up development.” The line between “automating routine tasks” and eliminating the jobs of those who perform those tasks is incredibly blurry. A publisher can say they’re freeing up developers for creative work, but what about the QA testers, the data analysts, the community managers whose “routine” work is being automated? Their “refined core strategy” might not include those roles at all.
A Tool Is Only As Good
So, is Neowiz wrong to explore this? Not inherently. Faster data processing can lead to genuine improvements: spotting a bug trend early, or understanding what players love about a weapon, can make a game better. But the faith that AI-derived insights will automatically lead to “better, more informed strategic decisions” is a leap. AI identifies correlations and patterns in the data it’s fed; it doesn’t understand nuance, heart, or the inexplicable magic that makes a game memorable. You can optimize a game to be sticky, but you can’t algorithmically create a Lies of P. The risk is that in the pursuit of maximizing engagement through data, publishers start trusting the machine’s output more than their team’s instinct. And that’s a strategy that might just backfire.
