📖3D x AI Rendering
Using ControlNet, I designed an original character.
Based on the character design generated by AI, I created a 3D model. This process was primarily a manual task.
Using the image rendered from the 3D model as a base, I repainted it, mainly with ControlNet 1.1's features. I focused on exploring whether the AI could improve the texture and switch the style of the image while preserving the character traits established by the 3D model.
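The repainting itself was done interactively with ControlNet 1.1, but for readers who prefer code, here is a minimal sketch of the same idea using the diffusers library. The checkpoint and ControlNet model names, file names, prompt, and strength value are all placeholders for illustration, not my actual settings.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

# Assumptions: the lineart ControlNet and the SD 1.5 checkpoint are stand-ins for
# whatever models you prefer; file names and the prompt are placeholders.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_lineart", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

render = load_image("character_render.png")           # image rendered from the 3D model
lineart = load_image("character_render_lineart.png")  # preprocessed control image

# A moderate strength keeps the silhouette and proportions from the 3D render
# while letting the model repaint skin and hair texture.
result = pipe(
    prompt="portrait of an original character, detailed skin, natural hair, soft lighting",
    image=render,
    control_image=lineart,
    strength=0.5,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=30,
).images[0]
result.save("character_enhanced.png")
```

The `strength` value is the main dial here: lower values stay closer to the 3D render, higher values let the AI change more of the texture and style.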
The main objective of this research was to explore how AI could be used in my own 3D production workflow.
As character design is an area where AI excels, and one I personally find challenging, I believed this would contribute significantly to my workflow.
While improving rendering quality with AI is still experimental, I believe that if AI can compensate for the high-cost areas of 3D representation, it can fundamentally transform the workflow. I expected AI enhancement to be especially beneficial for detailing hair and skin textures: these are tedious tasks whose contribution to the final image is often small relative to their 3D production cost, yet which can make the result look cheap if not done carefully.
I decided to design a character that had been living in my head for some time. Since the general direction was already settled, I started by generating an image from a prompt that matched my mental picture, to make it concrete.
Next, I used ControlNet's OpenPose to create a three-view drawing. Since it was difficult to specify colors and facial features directly to the AI, I made adjustments in Photoshop.
Finally, to further solidify my mental image of the character, I verified design elements through hand-drawn sketches and turned them into an input for ControlNet's Scribble.
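For reference, the scribble step can be approximated with the diffusers library as in the sketch below; the same structure works for the three-view step by swapping in the openpose ControlNet. In practice I worked in the ControlNet UI, and the model names, file name, and prompt here are placeholders.

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Assumptions: "sketch_scribble.png" is a scanned hand-drawn sketch; model names and
# the prompt are placeholders. For the three-view step, swap in the openpose ControlNet
# ("lllyasviel/control_v11p_sd15_openpose") with a pose image instead.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

scribble = load_image("sketch_scribble.png")

image = pipe(
    prompt="full body character design, simple costume, clean colors, white background",
    negative_prompt="blurry, extra limbs, lowres",
    image=scribble,
    num_inference_steps=30,
    controlnet_conditioning_scale=1.0,
).images[0]
image.save("design_from_scribble.png")
```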
Since this was my first experiment, I aimed for a design that wouldn't be too challenging for either the AI or the 3D work. The final design didn't necessarily warrant the use of AI, but being able to create concrete visual references, and having to verbalize my internal image while instructing the AI, greatly facilitated the 3D work.
Since this process is mostly manual work, I will omit a detailed explanation of the entire workflow.
Anticipating the subsequent AI enhancement, I deliberately spent less effort on the skin and hair than I would in my usual procedure.
Normally, to create skin, I wrap a scan model onto the sculpted model in ZBrush to transfer its details. This requires raising the model's subdivision level, which increases processing load and makes further edits that preserve the detail quite tedious.
Anticipating that AI would improve the skin texture, this time I simply set up a procedural material in Substance Painter.
On the 3D model alone, the resulting skin felt flat, but I trusted that the AI's natural skin rendering would cover for this, and worked accordingly.
In my regular process, after blocking out the hair with curves, I convert it to particle hair to represent individual strands.
Converting to particle hair increases rendering time and makes editing the hair guides difficult. Moreover, due to the limitations of Blender's hair particle system, some tricks are required to represent natural hair. (I haven't been able to try the new hair system yet. I'd feel better about it if at least modifiers like Clump were stackable, so I wouldn't have to look enviously at XGen…)
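For context, here is a minimal Blender Python sketch of the particle-hair setup I would normally build at this point. The object name and values are placeholders; the point is that clumping and roughness are single flat settings rather than stackable modifiers.

```python
import bpy

# Placeholder object name; assumes the head/scalp mesh exists under this name.
head = bpy.data.objects["Head"]
head.modifiers.new(name="Hair", type='PARTICLE_SYSTEM')
settings = head.particle_systems[-1].settings

settings.type = 'HAIR'
settings.count = 800                 # number of guide hairs
settings.hair_length = 0.25
settings.child_type = 'INTERPOLATED'
settings.child_nbr = 20              # children per guide in the viewport
settings.rendered_child_count = 80   # children per guide at render time

# Unlike stackable modifiers (as in XGen), clump and roughness are single values here.
settings.clump_factor = 0.6
settings.roughness_endpoint = 0.02
settings.roughness_2 = 0.05
```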
This time, I kept the hair as curve blocks, hoping that the AI would generate a natural hair flow. Since hair is easy to twist or spread while it is still curves, working in this state also has significant advantages for tasks such as posing.