Facing Destiny: FLUX.1-LoRA/IP-adapter + Black Myth: Wukong Image Generation in Practice

Introduction

Black Forest Labs’ new text-to-image model FLUX.1 has caused quite a stir in the open-source community since its launch, becoming a flagship of the new generation of text-to-image models. Although it has only been open-source for a short time, the community ecosystem around FLUX.1 is developing rapidly: quantized and GGUF versions, along with accompanying ControlNet and LoRA models, are already available on the major open-source platforms, and the ComfyUI workflows that make these models easy to use are thriving as well.

In response to recent requests from community developers, we are launching a hands-on inference tutorial that combines FLUX.1 LoRA/IP-adapter, ComfyUI, and the popular IP “Black Myth: Wukong,” so that you can generate your own Wukong images with AI.
Note: This tutorial is for community learning purposes only and must not be used for commercial purposes.

Best Practices

Environment Configuration and Installation:

  1. Python 3.10 or above
  2. PyTorch 1.12 or above; version 2.0 or above is preferred
  3. CUDA 11.4 or above is recommended

    This article uses the free GPU compute provided by the ModelScope (魔搭) community; a quick check of the environment versions is sketched below.
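A quick sanity check of the runtime, as a minimal sketch that only assumes PyTorch is already installed:

import sys
import torch

# Versions recommended above
print("Python:", sys.version.split()[0])             # 3.10+
print("PyTorch:", torch.__version__)                 # 1.12+, preferably 2.0+
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA (build):", torch.version.cuda)       # 11.4+
    print("GPU:", torch.cuda.get_device_name(0))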


Developers can also use the official ModelScope image to try it out in the cloud or on their own devices (a sketch for launching the image locally follows the list of images below).

GPU environment images (Python 3.10):

registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
registry.cn-hangzhou.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
registry.us-west-1.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1
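If you prefer to run one of these images on your own machine, a typical launch is sketched below. This is only an illustration: it assumes Docker and the NVIDIA container toolkit are installed, and the container name and port mapping (8188, ComfyUI's default port) are our own choices rather than official instructions.

# Pull and start the GPU image locally (pick the registry closest to you)
docker pull registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1

docker run --gpus all -it \
    -p 8188:8188 \
    --name comfyui-flux \
    registry.cn-beijing.aliyuncs.com/modelscope-repo/modelscope:ubuntu22.04-cuda12.1.0-py310-torch2.1.2-tf2.14.0-1.13.1 \
    /bin/bash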

Download and deploy ComfyUI

Clone the code and install the relevant dependencies; the repository links are as follows:

https://github.com/comfyanonymous/ComfyUI

https://github.com/ltdrdata/ComfyUI-Manager

https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git

#@title Environment Setup

from pathlib import Path

OPTIONS = {}
UPDATE_COMFY_UI = True #@param {type:"boolean"}
INSTALL_COMFYUI_MANAGER = True #@param {type:"boolean"}
INSTALL_CUSTOM_NODES_DEPENDENCIES = True #@param {type:"boolean"}
INSTALL_ComfyUI_Comfyroll_CustomNodes = True #@param {type:"boolean"}
INSTALL_x_flux_comfyui = True #@param {type:"boolean"}
OPTIONS['UPDATE_COMFY_UI'] = UPDATE_COMFY_UI
OPTIONS['INSTALL_COMFYUI_MANAGER'] = INSTALL_COMFYUI_MANAGER
OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES'] = INSTALL_CUSTOM_NODES_DEPENDENCIES
OPTIONS['INSTALL_ComfyUI_Comfyroll_CustomNodes'] = INSTALL_ComfyUI_Comfyroll_CustomNodes
OPTIONS['INSTALL_x_flux_comfyui'] = INSTALL_x_flux_comfyui

current_dir = !pwd
WORKSPACE = f"{current_dir[0]}/ComfyUI"



%cd /mnt/workspace/

![ ! -d $WORKSPACE ] && echo -= Initial setup ComfyUI =- && git clone https://github.com/comfyanonymous/ComfyUI
%cd $WORKSPACE

# Update an existing ComfyUI checkout
if OPTIONS['UPDATE_COMFY_UI']:
    !echo "-= Updating ComfyUI =-"
    !git pull


# Install/update ComfyUI-Manager under custom_nodes
if OPTIONS['INSTALL_COMFYUI_MANAGER']:
    %cd custom_nodes
    ![ ! -d ComfyUI-Manager ] && echo -= Initial setup ComfyUI-Manager =- && git clone https://github.com/ltdrdata/ComfyUI-Manager
    %cd ComfyUI-Manager
    !git pull

# Install ComfyUI_Comfyroll_CustomNodes (the %cd .. assumes the previous block left us in custom_nodes/ComfyUI-Manager)
if OPTIONS['INSTALL_ComfyUI_Comfyroll_CustomNodes']:
    %cd ..
    !echo -= Initial setup ComfyUI_Comfyroll_CustomNodes =- && git clone https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git

# Install x-flux-comfyui (still inside custom_nodes at this point)
if OPTIONS['INSTALL_x_flux_comfyui']:
    !echo -= Initial setup x-flux-comfyui =- && git clone https://github.com/XLabs-AI/x-flux-comfyui.git

# Install the custom node dependencies from the ComfyUI root,
# so the relative path below resolves correctly
if OPTIONS['INSTALL_CUSTOM_NODES_DEPENDENCIES']:
    %cd $WORKSPACE
    !echo "-= Install custom nodes dependencies =-"
    ![ -f "custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py" ] && python "custom_nodes/ComfyUI-Manager/scripts/colab-dependencies.py"

!pip install spandrel

Download the models (the FLUX.1 base model, LoRA, ControlNet, IP-adapter, etc.) and place them in the corresponding subdirectories of the models directory. You can choose which of them you actually want to use and download only those.

Thanks to the community developers for contributing these LoRA models!

Model link:

https://www.liblib.art/modelinfo/5e4a4cc0e3674818a9f8454a63cc0115

Model link:

https://huggingface.co/wanghaofan/Black-Myth-Wukong-FLUX-LoRA
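The download commands below place each file into a specific subdirectory of ComfyUI's models folder. As a reference, the layout they assume is sketched here; modelscope download should create the --local_dir automatically, so pre-creating the folders is optional.

import os

COMFYUI_ROOT = "/mnt/workspace/ComfyUI"

# Subdirectories targeted by the download commands in the next cell
MODEL_DIRS = [
    "models/unet",              # flux1-dev-fp8.safetensors
    "models/clip",              # clip_l / t5xxl text encoders
    "models/vae",               # ae.safetensors
    "models/loras",             # LoRA weights
    "models/xlabs/ipadapters",  # flux-ip-adapter.safetensors
    "models/clip_vision",       # clip_vision_l.safetensors
]

for d in MODEL_DIRS:
    os.makedirs(os.path.join(COMFYUI_ROOT, d), exist_ok=True)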

#@markdown ###Download standard resources
%cd /mnt/workspace/ComfyUI
### FLUX1-DEV
!modelscope download --model=AI-ModelScope/flux-fp8 --local_dir ./models/unet/ flux1-dev-fp8.safetensors

### clip
!modelscope download --model=AI-ModelScope/flux_text_encoders --local_dir ./models/clip/ clip_l.safetensors
!modelscope download --model=AI-ModelScope/flux_text_encoders --local_dir ./models/clip/ t5xxl_fp8_e4m3fn.safetensors

### vae
!modelscope download --model=AI-ModelScope/FLUX.1-dev --local_dir ./models/vae/ ae.safetensors


### lora
!modelscope download --model=FluxLora/flux-koda --local_dir ./models/loras/ araminta_k_flux_koda.safetensors
!modelscope download --model=FluxLora/Black-Myth-Wukong-FLUX-LoRA --local_dir ./models/loras/ pytorch_lora_weights.safetensors
!modelscope download --model=FluxLora/FLUX1_wukong_lora --local_dir ./models/loras/ FLUX1_wukong_lora.safetensors

### ip-adapter
!modelscope download --model=FluxLora/flux-ip-adapter --local_dir ./models/xlabs/ipadapters/ flux-ip-adapter.safetensors
!modelscope download --model=FluxLora/flux-ip-adapter --local_dir ./models/clip_vision/ clip_vision_l.safetensors

Running ComfyUI with cloudflared

!wget "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/cloudflared-linux-amd64.deb"
!dpkg -i cloudflared-linux-amd64.deb

%cd /mnt/workspace/ComfyUI
import subprocess
import threading
import time
import socket
import urllib.request

def iframe_thread(port):
    # Wait until the ComfyUI server is listening on the given port
    while True:
        time.sleep(0.5)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex(('127.0.0.1', port))
        if result == 0:
            break
        sock.close()
    print("\nComfyUI finished loading, trying to launch cloudflared (if it gets stuck here cloudflared is having issues)\n")

    # Open a cloudflared tunnel to the local server and print the public URL
    p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in p.stderr:
        l = line.decode()
        if "trycloudflare.com " in l:
            print("This is the URL to access ComfyUI:", l[l.find("http"):], end='')
        #print(l, end='')


threading.Thread(target=iframe_thread, daemon=True, args=(8188,)).start()

!python main.py --dont-print-server

Load a ComfyUI workflow; the workflow files are linked below:

https://modelscope.oss-cn-beijing.aliyuncs.com/resource/workflow-flux-lora-simple.json

https://modelscope.oss-cn-beijing.aliyuncs.com/resource/workflow-flux-lora-wukong.json

https://modelscope.oss-cn-beijing.aliyuncs.com/resource/ip_adapter_workflow.json
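A small sketch for fetching these workflow files into the workspace from the notebook; once downloaded, load them in the ComfyUI web UI (via the Load button, or by dragging the JSON file onto the canvas):

%cd /mnt/workspace/ComfyUI
!wget -nc "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/workflow-flux-lora-simple.json"
!wget -nc "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/workflow-flux-lora-wukong.json"
!wget -nc "https://modelscope.oss-cn-beijing.aliyuncs.com/resource/ip_adapter_workflow.json"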

[Figure: the LoRA workflow]

[Figure: the IP-adapter workflow]

[Figure: VRAM usage]

Note that after loading a workflow, check that the model filenames referenced in its nodes match the names of the model files you downloaded. If they differ, simply click the corresponding field (for example ckpt_name) and select the locally stored model file.
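To compare the filenames referenced by the workflow nodes with what was actually downloaded, a small helper like the following can list the model directories (a sketch; the paths match the download commands above):

import os

MODELS_ROOT = "/mnt/workspace/ComfyUI/models"

# Print the files in each model subdirectory used in this tutorial
for sub in ["unet", "clip", "vae", "loras", "xlabs/ipadapters", "clip_vision"]:
    path = os.path.join(MODELS_ROOT, sub)
    files = sorted(os.listdir(path)) if os.path.isdir(path) else []
    print(f"{sub}: {files}")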

Work Sharing

[Image] Clad in battle armor, Buddha in the heart

[Image] Face your destiny and be reborn through fire

[Image] Put on your glasses and come play on ModelScope (魔搭) together!