
Guided Composition

This is where things get interesting. Text prompts or image-to-image can only take you so far; the secret to a masterpiece lies in ControlNets and similar guided-composition techniques.

For most of the workflows in this section you'll need ComfyUI's ControlNet Auxiliary Preprocessors extension. Be sure to have it installed before going any further.

As always, titles link directly to the workflow.

The workflow for all ControlNets is basically the same; what changes is the model you use for processing and possibly the preprocessor.
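To make the shared structure concrete, here is a rough sketch of such a workflow in ComfyUI's API ("prompt") JSON format. Node class names follow ComfyUI's built-in nodes, but the node ids, filenames, and exact input wiring are illustrative placeholders, not a drop-in workflow:

```python
# Minimal sketch of a ControlNet workflow in ComfyUI's API ("prompt")
# format. Each entry is a node: {"class_type": ..., "inputs": {...}};
# a value like ["1", 1] references output slot 1 of node "1".
# Filenames below are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a knight in shining armor", "clip": ["1", 1]}},
    "3": {"class_type": "LoadImage",
          "inputs": {"image": "pose_reference.png"}},
    "4": {"class_type": "ControlNetLoader",
          "inputs": {"control_net_name": "control_openpose.safetensors"}},
    "5": {"class_type": "ControlNetApply",  # conditioning + control image
          "inputs": {"conditioning": ["2", 0], "control_net": ["4", 0],
                     "image": ["3", 0], "strength": 0.8}},
}

# Swapping ControlNets means changing only the reference image (node 3),
# the ControlNet model file (node 4), and possibly the preprocessor
# placed in front of node 5; the rest of the graph stays the same.
```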

In this first example we are using openpose to convert a stick man into an actual character.

open pose

You'll notice that each ControlNet or T2I-Adapter can be used with multiple preprocessors. For the pose, the DWPreprocessor is very effective and generally yields better results than the OpenPose preprocessor.

Canny is incredibly effective and cheap in terms of resources. The preprocessor converts the image into a simplified "scribble" of the original that can be used as the reference for a new composition.

canny
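To give a feel for what the canny preprocessor produces, here is a deliberately simplified, NumPy-only edge map based on Sobel gradient magnitudes. The real canny algorithm additionally does smoothing, non-maximum suppression, and hysteresis thresholding, so treat this as an illustration of the idea, not the preprocessor itself:

```python
import numpy as np

def simple_edge_map(img, threshold=0.25):
    """Very rough stand-in for a canny preprocessor: Sobel gradient
    magnitude, thresholded to a black-and-white "scribble".
    `img` is a 2-D float array in [0, 1]; returns uint8 values {0, 255}."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):          # correlate with the two 3x3 kernels
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    mag = np.hypot(gx, gy)
    mag /= max(mag.max(), 1e-8)  # normalize to [0, 1]
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# A bright square on a dark background: edges appear along its border,
# while the flat interior and background stay black.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = simple_edge_map(img)
```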

The depth ControlNet is very effective and lets you define shapes and volumes in a 3D space. A good preprocessor for depth maps is Zoe.

👉 Note: It's not necessary to use a preprocessor as long as you have the right reference image. If you use Blender, there's a very interesting tool for it. PoseMy.Art lets you create characters with an easy-to-use interface. Cascadeur is another professional solution for posing characters.

💡 Tip: For better results, remember that you can always chain multiple ControlNets. Check the experiments below.

💡 Tip: Remember to play with the strength of the Apply ControlNet node. It's usually a good idea to lower the strength to give the model a little leeway.
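Conceptually, a ControlNet produces residual features that get scaled by the strength value before being added to the diffusion model's own features, which is why lowering the strength loosens the guidance. A toy NumPy sketch of that idea (not ComfyUI's actual implementation):

```python
import numpy as np

# Toy illustration (not ComfyUI's real code): the ControlNet's residual
# features are scaled by `strength` before being added to the model's
# own features. strength=0 ignores the control entirely; strength=1
# applies it in full; values in between give the model some leeway.
unet_features = np.ones((4, 4))
control_residual = np.full((4, 4), 0.5)

def apply_control(features, residual, strength):
    return features + strength * residual

full = apply_control(unet_features, control_residual, strength=1.0)
loose = apply_control(unet_features, control_residual, strength=0.5)
```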

⚠️ Important: The ControlNet has to match the version of the checkpoint you are using. SDXL needs ControlNets specifically trained for it, and the same goes for v1.5 and v2.1.

Experiments

It's of course possible to chain multiple ControlNets. In this example we used openpose to pose the character and shuffle to spice up the composition.
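In ComfyUI's API format, chaining simply means wiring the CONDITIONING output of one Apply ControlNet node into the next, so both constraints act on the same prompt. As before, node ids, filenames, and strengths are illustrative placeholders:

```python
# Sketch of two chained ControlNet applications in ComfyUI's API format.
# Node "8" consumes the CONDITIONING output of node "5", stacking the
# shuffle guidance on top of the pose guidance. Upstream nodes (prompt
# encode "2", images "3"/"6", ControlNet loaders "4"/"7") are omitted.
chained = {
    "5": {"class_type": "ControlNetApply",   # openpose: fixes the pose
          "inputs": {"conditioning": ["2", 0], "control_net": ["4", 0],
                     "image": ["3", 0], "strength": 0.8}},
    "8": {"class_type": "ControlNetApply",   # shuffle: varies composition
          "inputs": {"conditioning": ["5", 0],  # <- output of node "5"
                     "control_net": ["7", 0],
                     "image": ["6", 0], "strength": 0.6}},
}
```

Lower strengths are often a good starting point when stacking, since each extra ControlNet further constrains the sampler.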