Design2Web

Converting static design mockups into runnable HTML/CSS web pages.



Design2Web is a proof-of-concept Python tool designed to automate the tedious process of translating static visual design mockups (PNG or JPG) into functional, albeit minimal, HTML and CSS code. It attempts to infer the structural layout of a design by analyzing image characteristics such as color distribution and edge detection.

This project serves as an exploration into the feasibility of raster-to-code conversion using basic computer vision techniques. While it demonstrates the core pipeline—from image loading to HTML generation—it highlights the architectural challenges of inferring semantic structure from unstructured visual data.


Quick Start

First, ensure you have Python 3.10 or newer installed. Then install the package:

pip install design2web

To convert a design mockup, you would typically use the main entry point function:

from design2web.main import convert_design

# Assuming 'mockup.png' is your design file
output_path = convert_design("mockup.png")
print(f"HTML generated successfully at: {output_path}")

What Can You Do?

Layout Detection

The tool analyzes the input image to segment major UI components (e.g., header, sidebar, content area) using basic image processing techniques.

from design2web.layout_detector import detect_layout_regions
from PIL import Image

img = Image.open("design.jpg")
regions = detect_layout_regions(img)
# regions might contain bounding boxes for detected areas
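To illustrate the general idea behind brightness-based segmentation (this is a stand-alone sketch, not the actual detect_layout_regions implementation), consecutive rows of a grayscale image can be grouped into bands wherever the average brightness crosses a threshold:

```python
# Illustrative only: group consecutive rows of a grayscale grid
# (lists of 0-255 values) into "light" and "dark" bands.
def segment_rows(gray, threshold=128):
    """Return (start_row, end_row, label) tuples for each band."""
    bands = []
    current = None  # (start_row, label)
    for y, row in enumerate(gray):
        label = "light" if sum(row) / len(row) >= threshold else "dark"
        if current is None:
            current = (y, label)
        elif label != current[1]:
            bands.append((current[0], y, current[1]))
            current = (y, label)
    if current is not None:
        bands.append((current[0], len(gray), current[1]))
    return bands

# A toy 6-row "image": a dark header band above a light content area.
image = [[20] * 8] * 2 + [[230] * 8] * 4
print(segment_rows(image))  # → [(0, 2, 'dark'), (2, 6, 'light')]
```

Each band could then become a candidate bounding box for a UI region.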

Color Extraction

It samples dominant color palettes from the identified regions to apply them as CSS variables in the generated output.

from design2web.color_extractor import extract_colors
from PIL import Image

img = Image.open("design.jpg")
palette = extract_colors(img)
print(f"Extracted Palette: {palette}")
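The core of dominant-color sampling can be sketched without the library at all: quantize each channel into coarse bins and count the most frequent bin centers. This is a minimal stand-in, not the actual extract_colors implementation:

```python
from collections import Counter

# Illustrative only: a minimal dominant-color sampler over raw RGB tuples.
def dominant_colors(pixels, n=3, bucket=32):
    """Quantize each channel into `bucket`-wide bins and return the n most
    common bin centers as hex strings."""
    def quantize(c):
        return min(255, (c // bucket) * bucket + bucket // 2)
    counts = Counter(tuple(quantize(c) for c in px) for px in pixels)
    return ["#%02x%02x%02x" % rgb for rgb, _ in counts.most_common(n)]

pixels = [(250, 250, 250)] * 6 + [(10, 10, 200)] * 3 + [(30, 30, 30)]
print(dominant_colors(pixels, n=2))  # → ['#f0f0f0', '#1010d0']
```

Quantizing first keeps near-identical shades from splitting the vote across many slightly different RGB values.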

HTML Generation

Based on the detected regions and extracted colors, the system constructs a semantic HTML structure, wrapping components in appropriate div elements.

from design2web.html_generator import generate_html_structure

# Assuming 'regions' from layout detection
html_content = generate_html_structure(regions)
# html_content is the raw HTML string
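As a self-contained illustration of the kind of markup this stage produces, the sketch below wraps region records in positioned div elements. The region shape (a dict with "role" and "bbox" keys) is an assumption for this example, not the library's actual data model:

```python
# Illustrative only: emit one absolutely positioned <div> per region.
def render_regions(regions):
    parts = ["<body>"]
    for r in regions:
        x, y, w, h = r["bbox"]  # assumed (x, y, width, height) pixels
        style = f"position:absolute;left:{x}px;top:{y}px;width:{w}px;height:{h}px"
        parts.append(f'  <div class="{r["role"]}" style="{style}"></div>')
    parts.append("</body>")
    return "\n".join(parts)

regions = [
    {"role": "header", "bbox": (0, 0, 800, 80)},
    {"role": "content", "bbox": (0, 80, 800, 520)},
]
print(render_regions(regions))
```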

Architecture

The system follows a sequential pipeline architecture:

  1. image_loader.py: Handles reading and preprocessing the input raster image.
  2. layout_detector.py: Takes the image and applies algorithms (e.g., brightness/edge analysis) to segment the image into logical UI regions.
  3. color_extractor.py: Samples colors from these detected regions to build a design palette.
  4. html_generator.py: Consumes the region data and color palette to construct the semantic HTML markup.
  5. output_writer.py: Writes the final HTML and associated CSS into the specified output file path.
graph LR
    A["Input Image (PNG/JPG)"] --> B(image_loader.py);
    B --> C{layout_detector.py};
    C --> D[Detected Regions];
    B --> E(color_extractor.py);
    D & E --> F(html_generator.py);
    F --> G(output_writer.py);
    G --> H[Output HTML/CSS];
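The pipeline amounts to plain function composition. A stand-alone sketch of that control flow, with stub stages standing in for the real modules (the stub bodies are placeholders, not the actual implementations):

```python
# Stub stages mirroring the five pipeline modules.
def load_image(path):            # image_loader.py
    return {"path": path}

def detect_regions(img):         # layout_detector.py
    return [{"role": "header"}]

def extract_palette(img):        # color_extractor.py
    return ["#f0f0f0"]

def build_html(regions, palette):  # html_generator.py
    return f"<div>{len(regions)} regions, {len(palette)} colors</div>"

def write_output(html, out):     # output_writer.py
    return out

def convert(path, out="index.html"):
    """Run the stages in sequence, as convert_design does."""
    img = load_image(path)
    regions = detect_regions(img)
    palette = extract_palette(img)
    html = build_html(regions, palette)
    return write_output(html, out)

print(convert("mockup.png"))  # → index.html
```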

API Reference

design2web.main.convert_design(image_path: str) -> str

The primary entry point. Reads the image, runs the full pipeline, and returns the path to the generated HTML file.

Example:

path = convert_design("my_design.png")
# path will be the string path to the output file

design2web.layout_detector.detect_layout_regions(image: Image.Image) -> list

Identifies bounding boxes for major UI components. Returns a list of region objects/dictionaries, one per detected area.

Research Background

This project is inspired by the growing field of visual programming and automated UI generation. The core methodology relies on classical image processing techniques (thresholding, contour detection) to approximate semantic structure, drawing conceptual parallels to early computer vision applications in document analysis.

For more advanced, production-ready solutions, research into multimodal large language models (e.g., GPT-4V) or structured design API integrations (e.g., Figma API) is recommended, as they provide superior semantic understanding over raster analysis.

Testing

The project includes 8 dedicated test files located in the tests/ directory, utilizing pytest fixtures to ensure the integrity of the pipeline components.

Contributing

Contributions are welcome! If you find bugs, have suggestions for improvement, or wish to enhance the layout detection algorithms, please feel free to open an issue or submit a pull request.

Citation

This project is a proof-of-concept and does not cite specific academic papers for its basic image processing implementation, relying instead on standard libraries.

License

The project is licensed under the MIT License - see the LICENSE file for details.
