peft_config message and odd behavior
My previous copy of this space suddenly started getting a runtime error with the Florence model, so I duplicated this one again and added my LoRA list.
All generations errored out near the end, so I changed the ZeroGPU duration to 120 s in app.py and app-nonyieldlast.py, which helped.
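(Concretely, I raised the `duration` argument of the `spaces.GPU` decorator; roughly like this, if I read the code right — the function name is just a placeholder:)

```python
import spaces

# ZeroGPU only holds the GPU for the decorated call; the default budget
# (60 s, if I remember right) was expiring just before my runs finished.
@spaces.GPU(duration=120)
def generate(prompt: str):
    # ... the actual inference code lives here in the real app ...
    pass
```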
But there are still some issues:
1. Some of my generations are now suddenly in B&W for some reason.
2. The models seem to behave differently with regard to output compared to earlier. No idea why.
3. I get the following message in the logs when running my prompts. It doesn't error out, but it caught my eye nonetheless. What causes this? Could it be related to what I'm describing?
```
INFO:peft.tuners.tuners_utils:Already found a peft_config attribute in the model. This will lead to having multiple adapters in the model. Make sure to know what you are doing!
```
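My guess is that it shows up because a new LoRA gets stacked onto the pipeline on every run without unloading the previous one. A rough sketch of the pattern I mean (the model and LoRA repos are just placeholders):

```python
import torch
from diffusers import DiffusionPipeline

# Placeholder model id; the space uses its own checkpoint.
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# The first LoRA loads cleanly; loading a second one on top is what makes
# PEFT log "Already found a peft_config attribute in the model".
pipe.load_lora_weights("some-user/lora-a", adapter_name="a")
pipe.load_lora_weights("some-user/lora-b", adapter_name="b")  # -> INFO message

# Unloading between runs would keep a single adapter on the model:
pipe.unload_lora_weights()
pipe.load_lora_weights("some-user/lora-c", adapter_name="c")
```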
I'll check the other malfunctions tomorrow (it's 2:25 here 😪), but the problem with Florence2 is due to a change in the way the Hugging Face library, or rather Hugging Face tokens, are handled. I think it can be avoided in older Spaces with one of the following methods:
A. Set the `HF_HOME` environment variable to `/home/user/huggingface` (it doesn't have to be `/home/user/huggingface`)
B. Pin `huggingface_hub==0.25.2` in requirements.txt
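For option A, set the variable before any of the Hub libraries are imported (or just add `HF_HOME` in the Space's settings under Variables). Something like this at the top of app.py — the path itself is only an example:

```python
import os

# Any writable path is fine; /home/user/huggingface is just an example.
os.environ.setdefault("HF_HOME", "/home/user/huggingface")

# Import Hub-backed libraries only after HF_HOME is set, so they pick up
# the cache location from the environment variable.
from huggingface_hub import hf_hub_download  # noqa: E402
```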
Thanks, I will start by trying option B, the requirements.txt edit. I noticed the new variable had been added and was unsure whether I was supposed to change `user` to my profile name, so I didn't change it. Could that cause the issues?
Okay, that gave me this:

```
INFO: pip is looking at multiple versions of diffusers to determine which version is compatible with other requirements. This could take a while.
ERROR: Cannot install -r /tmp/requirements.txt (line 2) and huggingface_hub==0.25.2 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested huggingface_hub==0.25.2
    diffusers 0.33.0.dev0 depends on huggingface-hub>=0.27.0

To fix this you could try to:
- loosen the range of package versions you've specified
- remove package versions to allow pip to attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
--> ERROR: process "/bin/sh -c pip install --no-cache-dir -r /tmp/requirements.txt" did not complete successfully: exit code: 1
```
> Could that cause the issues?

Possibly.

Hmmm... Try `huggingface-hub==0.27.0`. Also, at the time this space was created Diffusers had to be the dev version, but by now the pip release has all the needed functionality.
Anyway, the culprit behind the Florence2 incident is the change in how Hugging Face tokens are specified.
```
#git+https://github.com/huggingface/diffusers.git
diffusers
#git+https://github.com/huggingface/peft.git
peft
#git+https://github.com/huggingface/accelerate.git
accelerate
huggingface-hub<=0.27.0
transformers==4.48.3 # 4.49.0 seems to have a bug with Diffusers
```
Changed requirements to:
```
torch
git+https://github.com/huggingface/diffusers.git
git+https://github.com/huggingface/transformers.git
git+https://github.com/huggingface/peft.git
git+https://github.com/huggingface/accelerate.git
sentencepiece
torchvision
huggingface_hub<=0.27.0
transformers==4.48.3
timm
einops
controlnet_aux
kornia
numpy<2
opencv-python
deepspeed
mediapipe
openai==1.37.0
translatepy
unidecode
```
Still getting an error:

```
ERROR: Cannot install transformers 4.50.0.dev0 (from git+https://github.com/huggingface/transformers.git) and transformers==4.48.3 because these package versions have conflicting dependencies.

The conflict is caused by:
    The user requested transformers 4.50.0.dev0 (from git+https://github.com/huggingface/transformers.git)
    The user requested transformers==4.48.3

To fix this you could try to:
- loosen the range of package versions you've specified
- remove package versions to allow pip to attempt to solve the dependency conflict

ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
--> ERROR: process "/bin/sh -c pip install --no-cache-dir -r /tmp/requirements.txt" did not complete successfully: exit code: 1
```
Are there several files with these edits that I have to look at?
Try it like this. git+<git URL> pulls the latest development version, no questions asked. It's nice that it's new, but it often has bugs. 😅
```
torch==2.4.0
diffusers
peft
accelerate
sentencepiece
torchvision
huggingface_hub<=0.27.0
transformers==4.48.3
timm
einops
controlnet_aux
kornia
numpy<2
opencv-python
deepspeed
mediapipe
openai>=1.37.0
translatepy
unidecode
```
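By the way, if you ever do need a dev build again, you can pin the git URL to an exact commit so rebuilds stay reproducible (the SHA here is a placeholder):

```
diffusers @ git+https://github.com/huggingface/diffusers.git@0123abcd
```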
You are a lifesaver, that fixed it!
If you don't mind me asking, how much implementation have you done with the ControlNets in this space? I noticed some "under construction" text in the code around them.
I'm likely going to stick with my version for now, but I'm keeping a copy of the current space version too to play around with.
As far as I can see, you have added extra LoRA slots, baked in the Hyper/Turbo LoRAs, and added the new Flux tools as well as a sigmas-factor slider.
For some reason unknown to me, this version takes forever to prepare inference, so that’s annoying. No idea why.
> this version takes forever to prepare inference, so that's annoying. No idea why.
Hmm... I didn't make any changes like that...
Is it because of an update to some library...?
> If you don't mind me asking, how much implementation have you done with the ControlNets in this space? I noticed some "under construction" text in the code around them.
In theory, it works. In practice it often doesn't, because of the troublesome quirks of Zero GPU Spaces, so it is sealed off...
Unsealing it is easy: just change visible=False to visible=True.
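Something along these lines in the UI code (the component names here are made up; the real ones differ):

```python
import gradio as gr

with gr.Blocks() as demo:
    # The ControlNet UI is already built, just hidden; flipping
    # visible=False to visible=True "unseals" it.
    with gr.Accordion("ControlNet (under construction)", visible=False):
        cn_image = gr.Image(label="Control image")
        cn_scale = gr.Slider(0.0, 1.0, value=0.6, label="ControlNet scale")
```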
It could be that I edited the GPU duration a bit too eagerly and somehow caused it to wait. I don't know if that could be the culprit.
Do you know if it's possible to have multiple collapsible LoRA selection windows that list LoRAs from a given collection?
Let's say I have a private collection with character LoRAs, another with category X, another with Y, and so forth.
That way, one wouldn't have to edit the Loras.json every time and could do LoRA management at the collection level instead of inside the space.
> Do you know if it's possible to have multiple collapsible LoRA selection windows that list LoRAs from a given collection?
Programmatically, it's simple: you just put multiple galleries inside an Accordion or a Tab and generate the choices automatically at startup, but that doesn't mesh well with the Space's original code...
I was looking for a way to extend it nicely a while back, but I was still in the middle of it.
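The rough idea would be something like this (untested sketch: the collection slugs are placeholders, get_collection from huggingface_hub needs token= for private collections, and I'm using a Radio instead of a gallery to keep it short):

```python
import gradio as gr
from huggingface_hub import get_collection

# Placeholder slugs -- one collection per LoRA category.
COLLECTION_SLUGS = [
    "your-name/character-loras-0123456789abcdef",
    "your-name/style-loras-0123456789abcdef",
]

def lora_choices(slug: str) -> list[str]:
    # Every "model" item in the collection is treated as a LoRA repo id.
    collection = get_collection(slug)  # add token=... for private collections
    return [item.item_id for item in collection.items if item.item_type == "model"]

with gr.Blocks() as demo:
    for slug in COLLECTION_SLUGS:
        # One collapsible section per collection, filled once at startup,
        # so there is no need to edit the Loras.json by hand.
        with gr.Accordion(slug.split("/")[-1], open=False):
            gr.Radio(choices=lora_choices(slug), label="LoRA")

# The missing part is wiring the selected repo id into the pipeline's
# LoRA loading, which is where it clashes with the Space's original code.
```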