Update README.md
README.md CHANGED
@@ -2,13 +2,13 @@
 tags:
 - text-to-image
 - stable-diffusion
+- kviai
+- midjourney
 - lora
 - dalle-3
 - dalle
 - deepvision
 - diffusers
-- template:sd-lora
-- openskyml
 widget:
 - text: reimagine the ZX Spectrum Game MANIC MINER as a 3D modern style game
   output:
@@ -40,10 +40,56 @@ library_name: diffusers
 
 ## Model description
 
-This is a test model
+This is a test model like DALL-E 3.
+
+Estimated generation time is ~40 seconds on GPU.
 
 By KVI Kontent
 
+## Usage
+
+You can try out the model using the Hugging Face Inference API:
+
+```python
+import io
+
+import requests
+from PIL import Image
+
+API_URL = "https://api-inference.huggingface.co/models/Kvikontent/kviimager2.0"
+headers = {"Authorization": "Bearer huggingface_api_token"}  # your HF API token
+
+def query(payload):
+    response = requests.post(API_URL, headers=headers, json=payload)
+    return response.content
+
+image_bytes = query({
+    "inputs": "Astronaut riding a horse",
+})
+
+image = Image.open(io.BytesIO(image_bytes))
+image.save("generated_image.jpg")
+```
+
+or using the Diffusers library (requires PyTorch and Transformers too):
+
+```python
+from diffusers import DiffusionPipeline
+
+# Load the base model, then apply this repo's LoRA weights
+pipeline = DiffusionPipeline.from_pretrained("stablediffusionapi/juggernaut-xl-v5")
+pipeline.load_lora_weights("Kvikontent/kviimager2.0")
+
+prompt = "Astronaut riding a horse"
+
+# The pipeline returns PIL images directly
+image = pipeline(prompt).images[0]
+image.save("generated_image.jpg")
+```
+
+## Credits
+
+* Author - Vasiliy Katsyka
+* Company - [KVIAI](https://hf.co/kviai)
+* License - OpenRAIL
+
 ## Official demo
 
-You can use official demo on Spaces: [try](https://huggingface.co/spaces/kvikontent/
+You can use the official demo on Spaces: [try](https://huggingface.co/spaces/kvikontent/kviimager2.0).
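One caveat with the Inference API snippet in the diff above: while the model is cold-starting, the endpoint can return a JSON error body (HTTP 503) instead of image bytes, and `Image.open` will then fail. Below is a minimal, illustrative retry wrapper; the helper names and retry parameters are assumptions for this sketch, not part of the model card.

```python
import time

import requests

API_URL = "https://api-inference.huggingface.co/models/Kvikontent/kviimager2.0"
HEADERS = {"Authorization": "Bearer huggingface_api_token"}  # your HF API token

def looks_like_image(data: bytes) -> bool:
    # JSON error payloads (e.g. {"error": "... is currently loading"})
    # start with '{'; PNG/JPEG bytes do not.
    return not data.lstrip().startswith(b"{")

def generate(prompt: str, retries: int = 5, wait_s: float = 10.0) -> bytes:
    """POST the prompt, retrying while the model is still loading."""
    for _ in range(retries):
        resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt})
        if resp.status_code == 200 and looks_like_image(resp.content):
            return resp.content
        time.sleep(wait_s)  # model loading (503) or transient error; try again
    raise RuntimeError("Generation did not succeed after retries")
```

The returned bytes can then be passed to `Image.open(io.BytesIO(...))` exactly as in the README's snippet.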