Generating Progression Maps with GenAI
In July 2023, we wondered whether we could generate progression maps to track players' journeys across the various game regions. We relied on our previously developed LoRA and on the Game Design Document, using AI generation tools to design specific environments while keeping the project's artistic style intact.
The generated assets are diverse, matching not only the game's overall look but also the specific character of each region.
The LoRA’s character style successfully transferred to the backgrounds.
Dynamic prompts have proven to be extremely useful for automating the process.
The ControlNet Depth tends to desaturate images, requiring color correction.
Inpainting has not been used yet, but we are not ruling it out for the future.
Keeping our style
Our first step was to find a “recipe” that would allow us to consistently reproduce coherent scenery. Two terms to keep in mind for this recipe: inference (generating images from a model) and fine-tuning (training a LoRA or a custom model).
So, after a series of attempts combining inference, fine-tuning, and prompts in Stable Diffusion, we managed to achieve a cohesive aesthetic across our different universes.
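As an illustration, such a recipe can be captured as a fixed payload for Automatic1111's txt2img API, where only the scene description changes per region. The LoRA tag, sampler, and other settings below are illustrative placeholders, not our actual values.

```python
# Sketch of a reusable generation "recipe": the style-related settings stay
# fixed, and only the region-specific scene prompt varies.
# The LoRA name and all numeric settings are hypothetical examples.

BASE_STYLE = "<lora:our_game_style:0.8>, painterly, vibrant colors"  # placeholder LoRA tag
NEGATIVE = "blurry, photo, watermark"

def build_payload(scene_prompt: str) -> dict:
    """Combine a region-specific scene prompt with the shared style recipe."""
    return {
        "prompt": f"{BASE_STYLE}, {scene_prompt}",
        "negative_prompt": NEGATIVE,
        "sampler_name": "DPM++ 2M Karras",
        "steps": 30,
        "cfg_scale": 7,
        "width": 768,
        "height": 512,
    }

payload = build_payload("ancient greek temple on a cliff, aegean sea")
```

Keeping the style terms in one place is what makes the recipe reproducible: every region reuses the same checkpoint, LoRA weight, and sampler, so only the subject changes between runs.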
Test for Greek background with Rev-Animated Checkpoint with Automatic 1111
Adapting our style
With our “recipe” in hand, the next step was to apply it. We referred to our Game Design Document and created a distinct prompt for each environment to be explored. Each environment is a zone where our hero navigates, divided into multiple maps.
After creating these prompts, we used Dynamic Prompts, a Stable Diffusion extension, to group them all into a single prompt.
By merging this dynamic prompt with the style prompt created earlier, every new generation run now produces an additional asset for each of our regions.
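The variant syntax the extension relies on can be sketched in a few lines: a single template containing `{option1|option2|...}` groups expands into one concrete prompt per combination, which is how one merged prompt covers every region at once. The scenery wording below is illustrative.

```python
import itertools
import re

# Minimal re-implementation of the {a|b|c} variant syntax used by the
# Dynamic Prompts extension: one template expands into every combination.

VARIANT = re.compile(r"\{([^{}]+)\}")

def expand(template: str) -> list[str]:
    """Expand every {a|b|c} group into all concrete prompts."""
    options = [group.split("|") for group in VARIANT.findall(template)]
    prompts = []
    for combo in itertools.product(*options):
        picks = iter(combo)
        # Replace each {...} group with the next option from this combination.
        prompts.append(VARIANT.sub(lambda _: next(picks), template))
    return prompts

prompts = expand("progression map, {Scandinavia|China|Egypt|Greece} landscape")
```

With several groups in one template, the expansion is the Cartesian product of all options, so a handful of variants per group already yields a large batch of distinct prompts.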
Maps were generated for Scandinavia, China, Egypt, and Greece.
Always in pursuit of improvements, we used ControlNet to enhance the quality of our assets by adding relief and creating clearer pathways. We employed ControlNet Depth with a reference image to improve our landscapes. After several attempts, we found that reducing the reference’s weight by half yielded better results.
When many reference images are employed, ControlNet tends to desaturate the output. Using a specific VAE helps reintroduce color, but color correction is often still necessary.
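In practice, this looks like a ControlNet unit attached to the generation request with its weight halved. The field names below follow the sd-webui-controlnet extension's API as we understand it, and the model name is a common public depth model used as a placeholder, so treat this as a sketch rather than our exact configuration.

```python
import base64

# Sketch of a ControlNet Depth unit for the Automatic1111 web UI API,
# with the reference image's influence reduced by half.
# Field names assumed from the sd-webui-controlnet extension; the model
# name is a placeholder, not necessarily the one we used.

def depth_unit(reference_png: bytes, weight: float = 0.5) -> dict:
    return {
        "input_image": base64.b64encode(reference_png).decode("ascii"),
        "module": "depth_midas",                # depth preprocessor
        "model": "control_v11f1p_sd15_depth",   # placeholder model name
        "weight": weight,                       # 0.5 = half the reference's importance
    }

fake_png = b"\x89PNG..."  # stand-in bytes, not a real image
unit = depth_unit(fake_png, weight=0.5)
```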
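The correction itself can be as simple as a uniform saturation boost in HLS space. Below is a per-pixel sketch using only the Python standard library; the boost factor is an arbitrary example, and in our pipeline the final grading is done by hand by the artists.

```python
import colorsys

# ControlNet Depth output tends toward washed-out colors; one simple
# post-process is to scale each pixel's saturation in HLS space while
# leaving hue and lightness untouched. Channel values are in 0..1.

def boost_saturation(rgb: tuple[float, float, float],
                     factor: float = 1.3) -> tuple[float, float, float]:
    """Scale a pixel's saturation, clamped to the valid range."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb(h, l, min(s * factor, 1.0))

# A desaturated mid-red gains saturation; its lightness stays put:
r, g, b = boost_saturation((0.6, 0.4, 0.4), factor=1.5)
```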
The Artist's Touch
Once the “recipe” was established, we carried out several batches, and our artists selected usable images.
We then created a second batch using the same recipe, but this time we placed the chosen image in the ControlNet Depth reference with maximum fidelity. These images were given to our artists, who performed “photo bashing” to remove artifacts, adjust coloration, and create more legible paths.
Finally, the image underwent one final pass through Stable Diffusion to upscale it to 2048 px resolution before integration into the game.
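For the upscale pass, a small helper can compute the output size: scale the longer edge to the 2048 px target and snap both edges to multiples of 8, which Stable Diffusion samplers expect. The helper and its rounding rule are our own illustration, not code from the pipeline.

```python
# Compute the img2img upscale resolution for a given source image:
# the longer edge reaches the target, the aspect ratio is preserved,
# and both edges are snapped to multiples of 8 for the SD sampler.

def upscale_size(width: int, height: int, target: int = 2048) -> tuple[int, int]:
    """Scale so the longer edge hits `target`, snapping both edges to /8."""
    scale = target / max(width, height)

    def snap(edge: int) -> int:
        return max(8, round(edge * scale / 8) * 8)

    return snap(width), snap(height)

size = upscale_size(768, 512)  # a 3:2 landscape source
```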
Stable Diffusion output landscape
Landscape with color correction
Landscape reworked on Photoshop
In our game, the main screen brings together all of the game’s features. Implementing a clear and unobtrusive map is essential: it gives players an immediate view of their progression while leaving space for other interface elements, such as the combat zone or the shop. The process for implementing the map is relatively simple:
Adding files: Begin by adding the files to our project directory.
Data Linking: After adding the extra files, these files are linked in the game engine’s map data file, ensuring the correct display of checkpoints, bosses, and collectible resources.
UI Display: Finally, the main menu and its gameplay elements are displayed, along with the map’s various user interfaces that help players track their progress.
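The steps above can be sketched as a minimal data-linking structure: each region's generated map image is referenced alongside the points of interest drawn over it. All names, file paths, and coordinates here are invented for illustration and do not reflect the engine's actual format.

```python
from dataclasses import dataclass, field

# Hypothetical map-data record linking a generated asset to the
# checkpoints, bosses, and collectible resources shown on top of it.

@dataclass
class PointOfInterest:
    kind: str                   # "checkpoint", "boss", or "resource"
    position: tuple[int, int]   # pixel coordinates on the 2048 px map

@dataclass
class RegionMap:
    region: str
    image_file: str             # the upscaled asset added to the project directory
    points: list[PointOfInterest] = field(default_factory=list)

greece = RegionMap("Greece", "maps/greece_2048.png")  # illustrative path
greece.points.append(PointOfInterest("boss", (1400, 620)))
greece.points.append(PointOfInterest("checkpoint", (300, 480)))
```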
The first attempts at the main menu for Pet training with map integration