A suite of custom nodes for ComfyUI aimed at enhancing user experience with more interactive and visually engaging widgets.
Most UI elements used by the frontend come from the LF Widgets library, a modern collection of modular and customizable web components built on Stencil.js specifically to integrate with LF Nodes.
The nodes span quite a few categories:
- Analytics nodes: Visualize and track data, like checkpoint/LoRA usage or image histograms.
- Configuration nodes: Manage CivitAI metadata, and control the suite via the Control Panel.
- Filter nodes: Apply various filters to images, including blur, sharpen, and edge detection.
- Image manipulation nodes: Tools to manipulate images, such as filter and resize nodes.
- IO Operations nodes: Load and save files to/from the file system.
- JSON nodes: Tools to manipulate and display JSON data.
- Latent manipulation nodes: Latent decoding and encoding tools.
- LLM nodes: Interface with locally running LLMs, like the Messenger node, which also manages characters.
- Logic nodes: Control flow using simple switches.
- Primitive nodes: Work with primitive data types, offering features like history.
- Region nodes: Image region selection and editing tools.
- Seed generation nodes: Generate seeds for complex workflows.
- Selector nodes: Resource selection widgets with metadata display for models.
To see some example workflows, check the `example_workflows` folder.
- Open ComfyUI Manager.
- Search for LF Nodes.
- Install the node suite and restart ComfyUI.
- Go to the releases page.
- Download the latest release ZIP file.
- Extract the contents of the ZIP file into the `ComfyUI/custom_nodes` folder.
- Restart ComfyUI.
- Go to the `ComfyUI/custom_nodes` folder.
- Open a terminal.
- Copy and paste this command: `git clone https://github.com/lucafoscili/lf-nodes.git`
The Workflow Runner miniapp is included in this package but is disabled by default.
To enable it, set the environment variable WORKFLOW_RUNNER_ENABLED=true before starting ComfyUI.
When WORKFLOW_RUNNER_ENABLED is unset or set to a false value, the runner will not register its HTTP routes or static frontend. This makes the runner opt-in and prevents accidental exposure of its endpoints.
Configuration is loaded from the repository-level .env file (at the project root).
Set WORKFLOW_RUNNER_ENABLED=true to enable the runner.
If you do enable it, ensure authentication/allowed-users are configured (see docs/WORKFLOW_RUNNER.md) to avoid unauthorised access.
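For example, assuming you start ComfyUI from a shell at the project root, opting in might look like this (the `.env` approach mirrors the repository-level configuration mentioned above; exact startup commands depend on your setup):

```shell
# Opt in for the current shell session only:
export WORKFLOW_RUNNER_ENABLED=true

# ...or persist the setting in the repository-level .env file instead:
# echo "WORKFLOW_RUNNER_ENABLED=true" >> .env
```

Remember to configure authentication before exposing the runner beyond localhost.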
To add a new workflow to the Workflow Runner, follow these steps:
1. Create the workflow JSON file: export your ComfyUI workflow and save it as `modules/workflow_runner/workflows/<workflow_name>.json`.
2. Create the workflow Python module: create a corresponding Python file `modules/workflow_runner/workflows/<workflow_name>.py` with:
   - Workflow configuration function: `_configure(prompt, inputs)`, which maps user inputs to workflow node inputs.
   - Input cells: `WorkflowCell` objects for each user input (uploads, text fields, checkboxes, etc.).
   - Output cells: `WorkflowCell` objects for each output (images, strings, JSON, etc.).
   - Workflow definition: a `WorkflowNode` object with metadata, exported as `WORKFLOW`.
3. Register the workflow: add your workflow module name to the `_WORKFLOW_MODULES` tuple in `modules/workflow_runner/workflows/__init__.py`.
4. Update frontend types (if adding new output types). If your workflow produces new output types:
   - Add the output interface to `web/workflow-runner/src/types/api.ts`.
   - Update the `WorkflowNodeOutputs` interface to include your new type.
   - Update output rendering in `web/workflow-runner/src/elements/components.ts` and `main.outputs.ts`.
5. Update node outputs (if needed). If using custom nodes, ensure they return data in the expected format:
   - Set `OUTPUT_IS_LIST` appropriately for batch/list outputs.
   - Return structured data via `ui.lf_output` for frontend consumption.
Example commit: see commit 2fbb49e, which adds the caption_image_vision workflow and demonstrates all of these steps, including updating LF_DisplayString to support string outputs and the frontend components to render them.
The image editor node (LF_LoadAndEditImages) lets users load images from disk and perform editing operations such as inpainting and adjusting saturation, brightness, contrast, and more, all through an interactive interface.
It's possible to select the images from the file system tree and then send them downstream to other nodes for further processing.
The LLM nodes were tested with Koboldcpp, but any OpenAI-compatible endpoint that does not require authentication/an API key should work. The model used in the sample workflows is UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3 with ChaoticNeutrals/LLaVA-Llama-3-8B-mmproj-Updated.
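As a sketch, querying such an OpenAI-compatible endpoint from Python might look like this. The port (5001 is Koboldcpp's default), the `/v1/chat/completions` route, and the model name are assumptions based on common OpenAI-compatible servers; adjust them to your setup.

```python
import json
import urllib.request

# Local, unauthenticated OpenAI-compatible endpoint (illustrative URL).
ENDPOINT = "http://localhost:5001/v1/chat/completions"

def build_request(user_message: str, model: str = "local-model") -> urllib.request.Request:
    # Build a chat-completions request; no API key header is attached,
    # matching servers that do not require authentication.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(user_message: str) -> str:
    # Performs the actual HTTP call; requires a running server.
    with urllib.request.urlopen(build_request(user_message)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```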
Contributions to this repository are welcome, feel free to submit pull requests or open issues for discussion! To set up the environment, clone this repository, then open a terminal at the root and run the command
pip install -r requirements.txt
This will install all the required dependencies for the Python backend.
To build the frontend, you will need Node.js and Yarn installed; then run the command
yarn setup
This command will install all the dependencies. Note that the repository includes the compiled frontend sources, so you can skip this step if you don't plan to modify the frontend.
yarn build
This command will compile all the frontend sources and generate/refresh the actual `web` directory.
MIT License
For an overview of how this node suite is structured and how it integrates with the UI, see the documentation in the repository.