mohammedsafvan committed on
Commit
457793d
·
verified
1 Parent(s): e247907

Update README.md

Files changed (1)
  1. README.md +181 -197
README.md CHANGED
@@ -1,200 +1,184 @@
1
  ---
2
- library_name: transformers
3
- base_model:
4
- - Qwen/Qwen2.5-VL-3B-Instruct
 
 
 
 
 
5
  ---
6
 
7
- # Model Card for Model ID
8
-
9
- <!-- Provide a quick summary of what the model is/does. -->
10
-
11
-
12
-
13
- ## Model Details
14
-
15
- ### Model Description
16
-
17
- <!-- Provide a longer summary of what this model is. -->
18
-
19
- This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated.
20
-
21
- - **Developed by:** [More Information Needed]
22
- - **Funded by [optional]:** [More Information Needed]
23
- - **Shared by [optional]:** [More Information Needed]
24
- - **Model type:** [More Information Needed]
25
- - **Language(s) (NLP):** [More Information Needed]
26
- - **License:** [More Information Needed]
27
- - **Finetuned from model [optional]:** [More Information Needed]
28
-
29
- ### Model Sources [optional]
30
-
31
- <!-- Provide the basic links for the model. -->
32
-
33
- - **Repository:** [More Information Needed]
34
- - **Paper [optional]:** [More Information Needed]
35
- - **Demo [optional]:** [More Information Needed]
36
-
37
- ## Uses
38
-
39
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
40
-
41
- ### Direct Use
42
-
43
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
44
-
45
- [More Information Needed]
46
-
47
- ### Downstream Use [optional]
48
-
49
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
50
-
51
- [More Information Needed]
52
-
53
- ### Out-of-Scope Use
54
-
55
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
56
-
57
- [More Information Needed]
58
-
59
- ## Bias, Risks, and Limitations
60
-
61
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
62
-
63
- [More Information Needed]
64
-
65
- ### Recommendations
66
-
67
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
68
-
69
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
70
-
71
- ## How to Get Started with the Model
72
-
73
- Use the code below to get started with the model.
74
-
75
- [More Information Needed]
76
-
77
- ## Training Details
78
-
79
- ### Training Data
80
-
81
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
82
-
83
- [More Information Needed]
84
-
85
- ### Training Procedure
86
-
87
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
88
-
89
- #### Preprocessing [optional]
90
-
91
- [More Information Needed]
92
-
93
-
94
- #### Training Hyperparameters
95
-
96
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
97
-
98
- #### Speeds, Sizes, Times [optional]
99
-
100
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
101
-
102
- [More Information Needed]
103
-
104
- ## Evaluation
105
-
106
- <!-- This section describes the evaluation protocols and provides the results. -->
107
-
108
- ### Testing Data, Factors & Metrics
109
-
110
- #### Testing Data
111
-
112
- <!-- This should link to a Dataset Card if possible. -->
113
-
114
- [More Information Needed]
115
-
116
- #### Factors
117
-
118
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
119
-
120
- [More Information Needed]
121
-
122
- #### Metrics
123
-
124
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
125
-
126
- [More Information Needed]
127
-
128
- ### Results
129
-
130
- [More Information Needed]
131
-
132
- #### Summary
133
-
134
-
135
-
136
- ## Model Examination [optional]
137
-
138
- <!-- Relevant interpretability work for the model goes here -->
139
-
140
- [More Information Needed]
141
-
142
- ## Environmental Impact
143
-
144
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
145
-
146
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
147
-
148
- - **Hardware Type:** [More Information Needed]
149
- - **Hours used:** [More Information Needed]
150
- - **Cloud Provider:** [More Information Needed]
151
- - **Compute Region:** [More Information Needed]
152
- - **Carbon Emitted:** [More Information Needed]
153
-
154
- ## Technical Specifications [optional]
155
-
156
- ### Model Architecture and Objective
157
-
158
- [More Information Needed]
159
-
160
- ### Compute Infrastructure
161
-
162
- [More Information Needed]
163
-
164
- #### Hardware
165
-
166
- [More Information Needed]
167
-
168
- #### Software
169
-
170
- [More Information Needed]
171
-
172
- ## Citation [optional]
173
-
174
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
175
-
176
- **BibTeX:**
177
-
178
- [More Information Needed]
179
-
180
- **APA:**
181
-
182
- [More Information Needed]
183
-
184
- ## Glossary [optional]
185
-
186
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
187
-
188
- [More Information Needed]
189
-
190
- ## More Information [optional]
191
-
192
- [More Information Needed]
193
-
194
- ## Model Card Authors [optional]
195
-
196
- [More Information Needed]
197
-
198
- ## Model Card Contact
199
-
200
- [More Information Needed]
 
1
  ---
2
+ base_model: Qwen/Qwen2.5-VL-3B-Instruct
3
+ datasets:
4
+ - zackriya/diagramJSON
5
+ library_name: peft
6
+ tags:
7
+ - diagram
8
+ - structured-data
9
+ - image-processing
10
  ---
11
 
12
+ # 🖼️🔗 Diagram-to-Graph Model
13
+
14
+ <div align="center">
15
+ <img src="https://github.com/Zackriya-Solutions/diagram2graph/blob/main/docs/diagram2graph_cmpr.png?raw=true" width="800" style="border-radius:10px;" alt="Diagram to Graph Header"/>
16
+ </div>
17
+
18
+ This model is a research-driven project built during an internship at [Zackriya Solutions](https://www.zackriya.com/). It specializes in extracting **structured data (JSON)** from images, in particular **nodes, edges, and their sub-attributes**, to represent visual information as knowledge graphs.
19
+
20
+ > 🚀 **Note:** This model is intended for **learning purposes** only and not for production applications. The extracted structured data may vary based on project needs.
21
+
22
+ ## 📝 Model Details
23
+
24
+ - **Developed by:** Zackriya Solutions Internship Team (Mohammed Safvan)
25
+ - **Fine-tuned from:** `Qwen/Qwen2.5-VL-3B-Instruct`
26
+ - **License:** Apache 2.0
27
+ - **Language(s):** Multilingual (focus on structured extraction)
28
+ - **Model type:** Vision-Language Transformer (PEFT fine-tuned)
29
+
30
+ ## 🎯 Use Cases
31
+
32
+ ### ✅ Direct Use
33
+ - Experimenting with **diagram-to-graph conversion** 📊
34
+ - Understanding **AI-driven structured extraction** from images
35
+
36
+ ### 🚀 Downstream Use (Potential)
37
+ - Enhancing **BPMN/Flowchart** analysis 🏗️
38
+ - Supporting **automated document processing** 📄
39
+
40
+ ### ❌ Out-of-Scope Use
41
+ - Not designed for **real-world production** deployment ⚠️
42
+ - May not generalize well across **all diagram types**
43
+
44
+ ## 📊 How to Use
45
+ ```python
46
+ # Install dependencies first (in a notebook): %pip install -q "transformers>=4.49.0" accelerate datasets "qwen-vl-utils[decord]==0.0.8"
47
+
48
+ import os
49
+ from PIL import Image
50
+ import torch
51
+ from qwen_vl_utils import process_vision_info
52
+ from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLProcessor
53
+
54
+
55
+ MODEL_ID="zackriya/diagram2graph"
56
+ MAX_PIXELS = 1280 * 28 * 28
57
+ MIN_PIXELS = 256 * 28 * 28
58
+
59
+
60
+ model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
61
+ MODEL_ID,
62
+ device_map="auto",
63
+ torch_dtype=torch.bfloat16
64
+ )
65
+
66
+ processor = Qwen2_5_VLProcessor.from_pretrained(
67
+ MODEL_ID,
68
+ min_pixels=MIN_PIXELS,
69
+ max_pixels=MAX_PIXELS
70
+ )
71
+
72
+
73
+ SYSTEM_MESSAGE = """You are a Vision Language Model specialized in extracting structured data from visual representations of process and flow diagrams.
74
+ Your task is to analyze the provided image of a diagram and extract the relevant information into a well-structured JSON format.
75
+ The diagram includes details such as nodes and edges. each of them have their own attributes.
76
+ Focus on identifying key data fields and ensuring the output adheres to the requested JSON structure.
77
+ Provide only the JSON output based on the extracted information. Avoid additional explanations or comments."""
78
+
79
+ def run_inference(image):
80
+ messages= [
81
+ {
82
+ "role": "system",
83
+ "content": [{"type": "text", "text": SYSTEM_MESSAGE}],
84
+ },
85
+ {
86
+ "role": "user",
87
+ "content": [
88
+ {
89
+ "type": "image",
90
+ # qwen_vl_utils' process_vision_info handles this, so a PIL image, URL, or local path all work
91
+ "image": image,
92
+ },
93
+ {
94
+ "type": "text",
95
+ "text": "Extract data in JSON format, Only give the JSON",
96
+ },
97
+ ],
98
+ },
99
+ ]
100
+
101
+ text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
102
+ image_inputs, _ = process_vision_info(messages)
103
+
104
+ inputs = processor(
105
+ text=[text],
106
+ images=image_inputs,
107
+ return_tensors="pt",
108
+ )
109
+ inputs = inputs.to(model.device)
110
+
111
+ generated_ids = model.generate(**inputs, max_new_tokens=512)
112
+ generated_ids_trimmed = [
113
+ out_ids[len(in_ids):]
114
+ for in_ids, out_ids
115
+ in zip(inputs.input_ids, generated_ids)
116
+ ]
117
+
118
+ output_text = processor.batch_decode(
119
+ generated_ids_trimmed,
120
+ skip_special_tokens=True,
121
+ clean_up_tokenization_spaces=False
122
+ )
123
+ return output_text
124
+ # Load an input diagram: a PIL image, a URL, or a local file path all work,
+ # since qwen_vl_utils' process_vision_info handles each of them.
125
+ image = Image.open("path/to/diagram.png")  # replace with your own diagram image
126
+ output = run_inference(image)
127
+
128
+ # JSON loading
129
+ import json
130
+ json.loads(output[0])
131
+ ```
132
+
133
+
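+ The model returns a JSON string describing the diagram's nodes and edges (following the `zackriya/diagramJSON` training data). As a rough, illustrative sketch only — the key names `nodes`, `edges`, `id`, `label`, `source`, and `target` below are assumptions, not a guaranteed output schema — the parsed result can be loaded into a graph library such as `networkx` (installed separately) for downstream knowledge-graph work:
+
+ ```python
+ # Minimal sketch: turn the extracted JSON into a graph object.
+ # Assumes a hypothetical schema with "nodes"/"edges" lists and "id", "label",
+ # "source", "target" keys; adjust the keys to the model's actual output.
+ import json
+ import networkx as nx
+
+ graph_json = json.loads(output[0])  # `output` comes from run_inference above
+
+ g = nx.DiGraph()
+ for node in graph_json.get("nodes", []):
+     g.add_node(node.get("id", node.get("label")), **node)
+ for edge in graph_json.get("edges", []):
+     g.add_edge(edge.get("source"), edge.get("target"), **edge)
+
+ print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
+ ```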
134
+ ## 🏗️ Training Details
135
+ - **Dataset:** Internally curated diagram dataset ([zackriya/diagramJSON](https://huggingface.co/datasets/zackriya/diagramJSON)) 🖼️
136
+ - **Fine-tuning:** LoRA-based optimization with PEFT ⚡ (see the loading sketch below)
137
+ - **Precision:** bf16 mixed-precision training 🎯
138
+
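+ Because the fine-tune is LoRA-based, the adapter can also be loaded explicitly with `peft` on top of the base model. The snippet below is a minimal sketch that assumes this repository hosts the PEFT adapter weights; if the checkpoint already contains merged weights, the `from_pretrained` call in the usage section above is all you need.
+
+ ```python
+ # Sketch: load the Qwen2.5-VL base model, then attach the LoRA adapter with PEFT.
+ import torch
+ from peft import PeftModel
+ from transformers import Qwen2_5_VLForConditionalGeneration, Qwen2_5_VLProcessor
+
+ base = Qwen2_5_VLForConditionalGeneration.from_pretrained(
+     "Qwen/Qwen2.5-VL-3B-Instruct",
+     device_map="auto",
+     torch_dtype=torch.bfloat16,
+ )
+ model = PeftModel.from_pretrained(base, "zackriya/diagram2graph")
+ processor = Qwen2_5_VLProcessor.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct")
+
+ # Optionally fold the adapter into the base weights for slightly faster inference:
+ # model = model.merge_and_unload()
+ ```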
139
+ ## 📈 Evaluation
140
+
141
+ - **Metrics:** Node and edge F1-score 🏆 (see the computation sketch after the results table)
142
+ - **Limitations:** May struggle with **complex, dense diagrams** ⚠️
143
+ ## Results
144
+
145
+ - **Node detection:** mean F1 improved from 0.749 to 0.891 (+14 points over the base model)
146
+ - **Edge detection:** mean F1 improved from 0.4605 to 0.6945 (+23 points over the base model)
147
+
148
+ | Sample | Node F1 (base) | Node F1 (fine-tuned) | Edge F1 (base) | Edge F1 (fine-tuned) |
149
+ | --------------- | ------------- | ------------- | ------------- | ------------- |
150
+ | image_sample_1 | 0.46 | 1.0 | 0.59 | 0.71 |
151
+ | image_sample_2 | 0.67 | 0.57 | 0.25 | 0.25 |
152
+ | image_sample_3 | 1.0 | 1.0 | 0.25 | 0.75 |
153
+ | image_sample_4 | 0.5 | 0.83 | 0.15 | 0.62 |
154
+ | image_sample_5 | 0.72 | 0.78 | 0.0 | 0.48 |
155
+ | image_sample_6 | 0.75 | 0.75 | 0.29 | 0.67 |
156
+ | image_sample_7 | 0.6 | 1.0 | 1.0 | 1.0 |
157
+ | image_sample_8 | 0.6 | 1.0 | 1.0 | 1.0 |
158
+ | image_sample_9 | 1.0 | 1.0 | 0.55 | 0.77 |
159
+ | image_sample_10 | 0.67 | 0.8 | 0.0 | 1.0 |
160
+ | image_sample_11 | 0.8 | 0.8 | 0.5 | 1.0 |
161
+ | image_sample_12 | 0.67 | 1.0 | 0.62 | 0.75 |
162
+ | image_sample_13 | 1.0 | 1.0 | 0.73 | 0.67 |
163
+ | image_sample_14 | 0.74 | 0.95 | 0.56 | 0.67 |
164
+ | image_sample_15 | 0.86 | 0.71 | 0.67 | 0.67 |
165
+ | image_sample_16 | 0.75 | 1.0 | 0.8 | 0.75 |
166
+ | image_sample_17 | 0.8 | 1.0 | 0.63 | 0.73 |
167
+ | image_sample_18 | 0.83 | 0.83 | 0.33 | 0.43 |
168
+ | image_sample_19 | 0.75 | 0.8 | 0.06 | 0.22 |
169
+ | image_sample_20 | 0.81 | 1.0 | 0.23 | 0.75 |
170
+ | **Mean** | 0.749 | **0.891** | 0.4605 | **0.6945** |
171
+
172
+
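+ Below is a minimal sketch of how per-sample node/edge F1 scores like those above can be computed, assuming exact set matching on node labels and on (source, target) pairs. It is an illustration of the metric, not necessarily the exact evaluation script used for this table.
+
+ ```python
+ # Sketch: F1 between predicted and ground-truth sets of nodes or edges.
+ def f1(predicted: set, expected: set) -> float:
+     if not predicted or not expected:
+         return 0.0
+     tp = len(predicted & expected)          # true positives: items present in both sets
+     precision = tp / len(predicted)
+     recall = tp / len(expected)
+     if precision + recall == 0:
+         return 0.0
+     return 2 * precision * recall / (precision + recall)
+
+ # Hypothetical usage with parsed model output (`pred`) and ground truth (`gold`):
+ # node_f1 = f1({n["label"] for n in pred["nodes"]}, {n["label"] for n in gold["nodes"]})
+ # edge_f1 = f1({(e["source"], e["target"]) for e in pred["edges"]},
+ #              {(e["source"], e["target"]) for e in gold["edges"]})
+ ```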
173
+ ## 🤝 Collaboration
174
+ Are you interested in fine-tuning your own model for your use case, or would you like to explore how we can help you? Let's collaborate.
175
+
176
+ [Zackriya Solutions](https://www.zackriya.com/collaboration-form)
177
+
178
+ ## 🔗 References
179
+ - [Roboflow](https://github.com/roboflow/notebooks/blob/main/notebooks/how-to-finetune-qwen2-5-vl-for-json-data-extraction.ipynb)
180
+ - [Qwen](https://huggingface.co/Qwen/Qwen2.5-VL-3B-Instruct)
181
+
182
+ <h3 align='center'>
183
+ 🚀 Stay Curious & Keep Exploring! 🚀
184
+ </h3>