Alongside the development of the battle scene, we embarked on creating the initial regional maps for our universe. We explored various approaches and ultimately chose the same method used in one of our other games, "Pet Training," with a little extra twist.
Concept map with UI for Space Cruiser
Key Takeaways:
We managed to create interesting, beautiful, and diverse assets in just one hour.
Creating "original" photorealistic maps is more challenging than crafting characters.
Magnific AI significantly enhances the quality of our images.
MidJourney Assets
Our initial approach used Stable Diffusion, but the results fell short of our standards in both photorealism and composition. We then turned to MidJourney version 6, which had already proven successful for the backgrounds of our battle scenes. Yet even after numerous trials, the generated elements lacked realism and distinct points of interest. We therefore decided to return to the method we had developed for our game "Pet Training" and enlisted our artists for the preparatory work.
Digital painting concept trial in MidJourney.
Designing a Map with Multiple Points of Interest
The issue with the MidJourney assets lay in their composition: while suitable for a classic illustration, they lacked the originality and points of interest required for a game map.
We modified the process by asking our artists to create a rough environment with points of interest, to serve as the basis for image-to-image generations. Since we are aiming for short production times, we asked them for versions at 15, 30, and 60 minutes of work so we could compare the quality of the generated assets. The result was unequivocal: whether using Canny or Depth references, working past the 15-minute rough produced no significant difference. On the contrary, in some cases an overly complex matte painting introduced undesirable artifacts.
Concurrently, our artists learned to assist the machine by referencing, emphasizing, or detailing certain areas, either to prevent them from disappearing during generation or to ensure a better interpretation by the model. This allows us to create custom points of interest tailored to the scenario.
Matte painting concept at 15/30/60 minutes
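To make this step concrete, here is a minimal sketch of how a rough like this can drive a generation, assuming a Stable Diffusion + ControlNet setup through Hugging Face's diffusers library. The model ids, file names, and prompt are illustrative placeholders, not our production settings.

```python
# Hedged sketch: conditioning Stable Diffusion on the artist's 15-minute rough
# with a Canny ControlNet via diffusers. Model ids, file names, and the prompt
# are placeholders.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Extract edges from the rough so the hand-placed points of interest
# survive generation.
rough = np.array(Image.open("rough_map.png").convert("L"))
edges = cv2.Canny(rough, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    prompt="photorealistic aerial view of a sci-fi region, distinct landmarks",
    image=control_image,  # the Canny reference extracted from the rough
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("map_canny_v1.png")
```

Swapping "lllyasviel/sd-controlnet-canny" for "lllyasviel/sd-controlnet-depth" (and feeding a depth map instead of edges) gives the Depth variant mentioned below.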
Photo Bashing and Magnific AI
Following these tests, the next step was to generate batches of images, accepting that a perfect generated image does not exist. We therefore produced two sets of 15 images, one with the "Canny" ControlNet and one with "Depth," as these two conditioning methods emphasize different details and complement each other surprisingly well. We then merged these images in Photoshop (also using its generative AI to remove certain artifacts) to obtain an interesting, if still imperfect, image.
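For reference, such a batch can be scripted in a few lines; the sketch below assumes the Canny and Depth control images have already been extracted from the artist's rough, and the paths, model ids, and prompt are again placeholders rather than our actual settings.

```python
# Hedged sketch of the batch step: 15 seeded renders for each ControlNet
# (Canny and Depth), saved for photo bashing in Photoshop.
from pathlib import Path

import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

PROMPT = "photorealistic aerial view of a sci-fi region, distinct landmarks"
CONTROL_SETS = {
    "canny": ("lllyasviel/sd-controlnet-canny", "control_canny.png"),
    "depth": ("lllyasviel/sd-controlnet-depth", "control_depth.png"),
}
Path("batch").mkdir(exist_ok=True)

for name, (model_id, control_path) in CONTROL_SETS.items():
    controlnet = ControlNetModel.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")
    control_image = Image.open(control_path)
    for seed in range(15):  # one set of 15 images per conditioning type
        image = pipe(
            prompt=PROMPT,
            image=control_image,
            num_inference_steps=30,
            generator=torch.Generator("cuda").manual_seed(seed),
        ).images[0]
        image.save(f"batch/{name}_{seed:02d}.png")
```

Fixing the seeds this way also makes it easy to regenerate a specific variant later if the photo bashing reveals a missing detail.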
We then explored different refinement methods to enhance the quality of our image. After experimenting with ComfyUI, we opted for an alternative approach with a third-party application, Magnific AI. After a few tests, the result finally satisfied us.
Raw Stable Diffusion result / photo bashing in Photoshop / Magnific AI
Conclusion
For now, we have found our magic recipe. It may seem complex, but in reality it enables the rapid creation of maps:
15 minutes for our artist
15 minutes for prompting + 30 minutes for batch rendering (machine time)
20 minutes for photo bashing and corrections
5 minutes for Magnific AI
5 minutes to integrate the map into our engine using our tools
Nevertheless, we believe we will soon be able to improve on this recipe. This is partly due to the rapid progress of tools that never cease to amaze us, and especially to the training of a LoRA once we have designed around forty maps, which should let us save time on composition work, just as we achieved in "Pet Training" (a hypothetical sketch of how such a LoRA would slot into the pipeline follows below). That's why I invite you to stay tuned for the next chapter of our adventure.
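For readers curious about that future step, here is a hypothetical sketch of loading a map-style LoRA into the same kind of pipeline; the weights path and trigger word are placeholders, and the training itself would rely on a standard LoRA fine-tuning workflow rather than anything shown here.

```python
# Hypothetical sketch: loading a map-style LoRA (trained on ~40 finished maps)
# into the generation pipeline. Weights path and trigger word are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras/region_maps")  # assumed output of the future training
image = pipe(
    "region_map_style, photorealistic aerial view with distinct landmarks",
    num_inference_steps=30,
).images[0]
image.save("map_lora_test.png")
```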