Since around April 2024, AI-generated manga has become increasingly common on platforms like X and Kindle Indies. Recent illustration-focused AI image generators seem to be heralding a new trend in manga creation.
Reference: SD Yellow Book
At AICU Media, we’ve researched which Stable Diffusion models are best suited for creating black-and-white manga, for anyone who wants to make manga with AI but isn’t sure which model to use!
Let’s start by generating a monochrome illustration using “Animagine 3.1,” the latest version of the mainstream anime-based SDXL model.
Prompt: “best quality, monochrome, lineart, 1girl, bob cut, flat chest, short hair, school uniform, round_eyewear, hand on hip, looking at viewer, open mouth, white background”
Negative Prompt: “worst quality, low quality, blush, lowres, bad anatomy, bad hands”
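If you would like to reproduce this kind of generation in code rather than a WebUI, here is a minimal sketch using the diffusers library. The Hugging Face repo ID cagliostrolab/animagine-xl-3.1, as well as the resolution, seed, step count, and guidance scale, are assumptions for illustration, not the exact settings used for the images in this article.

```python
# Minimal sketch: monochrome line art with Animagine XL 3.1 via diffusers.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",  # assumed Hugging Face repo ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = ("best quality, monochrome, lineart, 1girl, bob cut, flat chest, short hair, "
          "school uniform, round_eyewear, hand on hip, looking at viewer, open mouth, "
          "white background")
negative_prompt = "worst quality, low quality, blush, lowres, bad anatomy, bad hands"

image = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=832, height=1216,                              # a common SDXL portrait resolution
    num_inference_steps=28,
    guidance_scale=7.0,
    generator=torch.Generator("cuda").manual_seed(42),   # fixed seed for later comparisons
).images[0]
image.save("animagine_3_1_monochrome.png")
```

For the Animagine 3.0 comparison below, swapping the repo ID to cagliostrolab/animagine-xl-3.0 (also an assumption) while keeping the same prompt and seed should give a fair side-by-side.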
The cute high school girl looks adorable with her hand on her hip.
The elements in the prompt all came through, and the quality is good! The ink-pooling effect and the shading in the shadow areas are cute.
Let’s try generating with the previous version, Animagine 3.0, using the same prompt as before.
Did you notice? The lines generated by Animagine 3.0 are much cleaner!
The difference in line resolution is quite clear. For clean line art, the older version, Animagine 3.0, may actually be the better choice.
We’ve now tried different versions of Animagine, and another distinctive feature of Animagine is its year tags, which adjust how old or new the art style looks.
For a detailed comparison and explanation, check here.
The year tags seem to be more effective in Animagine 3.1, but how much do they affect the art style when creating manga?
First, let’s try “oldest” and “newest” in Animagine 3.1.
Prompt: “best quality, monochrome, lineart, 1girl, school uniform, smile, looking at viewer, open mouth, white background, (year tag)”
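If you want to compare the tags under identical conditions in code, looping over them with a fixed seed is convenient. The sketch below reuses the same assumed diffusers setup as earlier; the repo ID, seed, and step count are illustrative assumptions.

```python
# Sketch: compare Animagine 3.1 year tags on the same prompt and seed.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16  # assumed repo ID
).to("cuda")

base_prompt = ("best quality, monochrome, lineart, 1girl, school uniform, smile, "
               "looking at viewer, open mouth, white background")
negative_prompt = "worst quality, low quality, blush, lowres, bad anatomy, bad hands"

for tag in ["oldest", "newest"]:
    image = pipe(
        prompt=f"{base_prompt}, {tag}",
        negative_prompt=negative_prompt,
        num_inference_steps=28,
        generator=torch.Generator("cuda").manual_seed(42),  # same seed for every tag
    ).images[0]
    image.save(f"year_tag_{tag}.png")
```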
Animagine 3.1’s “oldest” has a range of “2005 to 2010,” and it certainly has that vibe.
The ‘oldest’ one very accurately reproduces the old style! The simple eyes, the heavily shaded nose, and the uniformly thick hair are well adapted into monochrome. The ‘newest’ one also captures the trendy, glamorous, and clean atmosphere (2022 to 2023).
Additionally, the lines seem to come out cleaner than when no year tag is specified at all. Perhaps that is a characteristic of these tags?
Next, let’s try “oldest” and “masterpiece” in Animagine 3.0 using the same prompt.
masterpiece (Animagine 3.0)
Compared to Animagine 3.1, the lines in Animagine 3.0 are slightly cleaner, but the art style difference is less pronounced. Animagine 3.1 has a more significant impact with the “oldest” tag.
Here is what we found after generating and comparing: we didn’t expect that the most effective approach would be to switch between versions depending on the situation, which made this experiment very interesting!
When checking the official model cards, we found slight differences in the year tags.
Animagine 3.0 Year Modifier

Year Tag | Year Range
newest | 2022 to 2023
late | 2019 to 2021
mid | 2015 to 2018
early | 2011 to 2014
oldest | 2005 to 2010
Animagine 3.1 Year Modifier (from the official model card): “Regarding the year modifiers, we have redefined the year ranges to more accurately reflect the art styles of specific modern and vintage anime. This update focuses on the relationship between current and past eras, simplifying the ranges.”
Year Tag | Year Range
newest | 2021 to 2024
recent | 2018 to 2020
mid | 2015 to 2017
early | 2011 to 2014
oldest | 2005 to 2010
Adding “comic” to the prompt can produce manga-like, panel-style images! You may also get the gibberish “text-like” lettering common in AI-generated images. Give it a try if you’re interested.
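As a quick illustration, here is the same assumed setup with “comic” added to the prompt; the prompt wording, seed, and settings are placeholders rather than the ones used for the article’s images.

```python
# Sketch: adding the "comic" tag for a manga-panel-like result.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16  # assumed repo ID
).to("cuda")

image = pipe(
    prompt=("best quality, monochrome, comic, 1girl, school uniform, "
            "looking at viewer, white background"),
    negative_prompt="worst quality, low quality, lowres, bad anatomy, bad hands",
    num_inference_steps=28,
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
image.save("comic_tag.png")
```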
Using this material, we will create a manga with “ibisPaint,” a free app available on smartphones and PCs.
For information on how to draw manga using ibisPaint, the Creative AI Lab at Digital Hollywood University Graduate School has compiled a guide in the form of a doujinshi, available at the Technical Book Fair. Please refer to it as well.
https://techbookfest.org/product/3ppEEBbj8PSmKvBr8J1nrk?productVariantID=9bJbpMRVp7Rvm9md1xiDF6
In the paid section, we cover detailed production videos and discuss the latest techniques and issues.
★ You can read it for free by reposting this relevant tweet.
Which model is best for manga creation? Animagine 3.1 vs Animagine 3.0: a thorough comparison! | AICU media @AICUai #note https://note.com/aicu/n/n393f2cebfc75
We’ve published this article on note! Repost this post and you can read the article for free.
https://twitter.com/AICUai/status/1779521551358173492
Here is the material.
Cute but not sure what she is saying!
https://www.youtube.com/watch?v=24UaN1v41Ms
This is how a single frame is created. Using prompts or ControlNet is recommended for expressions and posing.
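If you prefer to control the pose from code, diffusers also has an SDXL ControlNet pipeline. The sketch below is only an assumed setup: the OpenPose ControlNet repo ID thibaud/controlnet-openpose-sdxl-1.0, the pose image path, and the conditioning scale are placeholders, not what was used for this article.

```python
# Sketch: posing a character with an OpenPose ControlNet on top of Animagine XL 3.1.
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "thibaud/controlnet-openpose-sdxl-1.0",  # assumed SDXL OpenPose ControlNet
    torch_dtype=torch.float16,
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1",        # assumed base model repo ID
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A pose skeleton image, e.g. exported from a 3D pose doll (placeholder path).
pose = load_image("pose_reference.png")

image = pipe(
    prompt=("best quality, monochrome, lineart, 1girl, school uniform, "
            "hand on hip, white background"),
    negative_prompt="worst quality, low quality, lowres, bad anatomy, bad hands",
    image=pose,
    controlnet_conditioning_scale=0.8,       # how strongly the pose constrains the result
    num_inference_steps=28,
).images[0]
image.save("posed_panel.png")
```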
For backgrounds, it might be better to generate them separately.
https://note.com/aicu/n/n49866300962f
With this method of manga creation, the range of characters for which a prompt alone can produce stable line art may be limited. We compared the Animagine 3.0 and 3.1 models, but since manga relies heavily on symbolic expression, there is a risk that similar-looking characters will become overly common.
In such a case, we will need to develop various techniques to showcase originality.
Furthermore, Toriniku created the apps ‘Line2Normalmap,’ which converts line art into pseudo-3D normal maps, and ‘NormalmapLighting,’ which is used for lighting those images.
The ‘Line2Normalmap’ app has gained worldwide attention, but Toriniku’s original art style is actually more charming!
There is also potential in deliberately simple, “cheap” art styles created with LoRA (see the loading sketch after the links below).
❏Let’s create a character LoRA using VRoid Studio!
https://note.com/aicu/n/nba8393a4816e
❏Let’s use AI to learn and generate drawings from your childhood! #Drawing Time Machine
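As a rough idea of what applying such a LoRA looks like in code, here is a minimal sketch. The LoRA file name, trigger word, and strength are placeholders; you would train the LoRA yourself, for example from VRoid Studio renders as in the article linked above.

```python
# Sketch: applying a custom character/style LoRA on top of Animagine XL 3.1.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.1", torch_dtype=torch.float16  # assumed repo ID
).to("cuda")
pipe.load_lora_weights("./my_character_lora.safetensors")  # placeholder LoRA file

image = pipe(
    prompt=("best quality, monochrome, lineart, my_character, "  # placeholder trigger word
            "school uniform, white background"),
    negative_prompt="worst quality, low quality, lowres, bad anatomy, bad hands",
    cross_attention_kwargs={"scale": 0.8},   # LoRA strength
    num_inference_steps=28,
).images[0]
image.save("lora_panel.png")
```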
Finally, let me introduce someone who is experimenting with the 3D pose doll feature in Clip Studio Paint (Clip Studio) and ControlNet. I’m not entirely sure about the details, though! (maybe NSFW)
❏Using 3D Drawing Dolls — Pose Change — “3D Manipulation #3” by ClipStudioOfficial
https://tips.clip-studio.com/ja-jp/articles/786
That’s all for how to create manga.
Which model is best for manga creation? If you’re choosing between Animagine 3.1 and Animagine 3.0, go with 3.0.
This information could change the public’s understanding of AI image generation. Please share it with everyone!
This has been brought to you by AICU Media, “Creating People Who Create”
Celebrating the reprint! Official online signing event for “Image Generation AI Stable Diffusion Start Guide”
Originally published at https://note.com on April 14, 2024.