I'm not sure why this is happening. Everything installed without errors, but when I click "Generate" it fetches 15 files, the GPU spins up, and then I get the log below.
Both stable-diffusion-v1-4 and v1-5 have been cloned from huggingface.co, and my User Access Token is pasted into the application.
Do I need to edit something to point at the .ckpt model of Stable Diffusion 1.4?
Fetching 15 files: 100%|█████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 7505.91it/s]
The config attributes {'clip_sample': False} were passed to PNDMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Traceback (most recent call last):
  File "C:\AI\UnstableFusion\unstablefusion.py", line 897, in handle_generate_button
    if type(self.get_handler()) == ServerStableDiffusionHandler:
  File "C:\AI\UnstableFusion\unstablefusion.py", line 460, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "C:\AI\UnstableFusion\unstablefusion.py", line 329, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "C:\AI\UnstableFusion\unstablefusion.py", line 312, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "C:\AI\UnstableFusion\diffusionserver.py", line 36, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "C:\Users\Jeff\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipeline_utils.py", line 516, in from_pretrained
    raise ValueError(
ValueError: The component <class 'transformers.models.clip.image_processing_clip.CLIPImageProcessor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.
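In case it's relevant: the traceback dies inside diffusers' `from_pretrained` complaining that it doesn't recognize `CLIPImageProcessor`, which looks like a version mismatch between `diffusers` and `transformers` rather than anything to do with the .ckpt file. I'm not certain that's the cause, but a small script like this (only the PyPI package names are assumed; nothing UnstableFusion-specific) reports what's installed:

```python
# Sanity-check the installed diffusers/transformers versions, since the
# "cannot be loaded" ValueError above suggests the installed diffusers
# release predates transformers' CLIPImageProcessor class.
import importlib.metadata


def get_versions(packages=("diffusers", "transformers")):
    """Return {package: version string, or None if not installed}."""
    versions = {}
    for pkg in packages:
        try:
            versions[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions


if __name__ == "__main__":
    for pkg, ver in get_versions().items():
        print(pkg, ver if ver else "not installed")
```

If the two versions turn out to be far apart, upgrading them together (e.g. `pip install --upgrade diffusers transformers`) might be worth trying before touching any model paths.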