Image-to-Image · Transformers · English · multimodal
nielsr (HF Staff) committed
Commit 76a3858 · verified · 1 parent: 99a965e

Add paper abstract and link to GitHub repository


This PR adds the paper abstract and a link to the GitHub repository, improving the information available on the model card.

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -1,17 +1,18 @@
 ---
-license: mit
 language:
 - en
+library_name: transformers
+license: mit
 pipeline_tag: image-to-image
 tags:
 - multimodal
-library_name: transformers
 ---
 
 ## 🔥🔥🔥 News!!
 * Apr 25, 2025: 👋 We release the inference code and model weights of Step1X-Edit. [inference code](https://github.com/stepfun-ai/Step1X-Edit)
 * Apr 25, 2025: 🎉 We have made our technical report available as open source. [Read](https://arxiv.org/abs/2504.17761)
 
+
 <!-- ## Image Edit Demos -->
 
 <div align="center">
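For readers who want to try the released weights mentioned in the news items, here is a minimal sketch of fetching them from the Hub with `huggingface_hub`; the repo id `stepfun-ai/Step1X-Edit` and the local directory name are assumptions, and actual inference should follow the linked GitHub repository rather than this snippet.

```python
# Minimal sketch: download the Step1X-Edit weights referenced above.
# Assumption: the checkpoint is hosted under the Hub repo id "stepfun-ai/Step1X-Edit".
# Inference itself should follow the official code at
# https://github.com/stepfun-ai/Step1X-Edit
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="stepfun-ai/Step1X-Edit",   # assumed Hub repo id
    local_dir="./Step1X-Edit-weights",  # where the checkpoint files are placed
)
print(f"Weights downloaded to: {local_path}")
```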