Update README.md
README.md CHANGED
@@ -3,24 +3,24 @@ library_name: transformers
 license: apache-2.0
 base_model: answerdotai/ModernBERT-base
 tags:
 - reasoning
 - reasoning-datasets-competition
 datasets:
 - davanstrien/natural-reasoning-classifier
 language:
 - en
 metrics:
 - mse
 - mae
 - spearman
 widget:
 - text: >-
-
-
-
-
-
-
+    The debate on artificial intelligence's role in society has become
+    increasingly polarized. Some argue that AI will lead to widespread
+    unemployment and concentration of power, while others contend it will create
+    new jobs and democratize access to knowledge. These viewpoints reflect
+    different assumptions about technological development, economic systems, and
+    human adaptability.
 ---

 # ModernBERT Reasoning Complexity Regressor
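The metadata above lists `mse`, `mae`, and `spearman` as evaluation metrics. For readers unfamiliar with how those apply to a 0-4 regressor, a minimal sketch is shown below; the prediction and label arrays are placeholders, and this is not the card's own evaluation code.

```python
# Sketch: computing the metrics named in the card's metadata (MSE, MAE,
# Spearman) for a 0-4 regression model. The arrays are placeholder values,
# not results from the model.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import mean_absolute_error, mean_squared_error

predictions = np.array([0.2, 1.8, 3.1, 2.4])  # raw model outputs (placeholders)
labels = np.array([0.0, 2.0, 3.0, 2.0])       # gold complexity ratings (placeholders)

mse = mean_squared_error(labels, predictions)
mae = mean_absolute_error(labels, predictions)
spearman, _ = spearmanr(labels, predictions)

print(f"MSE: {mse:.3f}  MAE: {mae:.3f}  Spearman: {spearman:.3f}")
```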
@@ -29,7 +29,9 @@ widget:
 
 ## Model Description
 
-This model predicts the reasoning complexity level (0-4)
+This model predicts the reasoning complexity level (0-4) suggested by a given web text. It is fine-tuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [davanstrien/natural-reasoning-classifier](https://huggingface.co/datasets/davanstrien/natural-reasoning-classifier) dataset. It is intended to be used in a pipeline to identify text that may be useful for generating reasoning data.
+
+### Reasoning Complexity Scale
 
 The reasoning complexity scale ranges from:
 
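The description above positions the model as a filter for surfacing web text that is worth turning into reasoning data. A minimal filtering sketch along those lines could look like the following; the threshold value and example documents are assumptions rather than recommendations from the card.

```python
# Sketch: keep only documents whose predicted reasoning complexity clears a
# cut-off. The threshold and the example documents are illustrative only.
from transformers import pipeline

pipe = pipeline(
    "text-classification",
    model="davanstrien/ModernBERT-based-Reasoning-Required",
)

documents = [
    "The store opens at 9am and closes at 5pm on weekdays.",
    "The debate on artificial intelligence's role in society has become "
    "increasingly polarized, reflecting different assumptions about "
    "technological development, economic systems, and human adaptability.",
]

MIN_COMPLEXITY = 3  # assumed cut-off; tune against your own corpus

selected = []
for doc in documents:
    score = pipe(doc)[0]["score"]  # same access pattern as the card's example
    if score >= MIN_COMPLEXITY:
        selected.append({"text": doc, "complexity": score})

print(f"Kept {len(selected)} of {len(documents)} documents")
```

In practice the cut-off would be tuned against the corpus and the downstream data-generation budget.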
@@ -60,10 +62,10 @@ This model can be used to:
 - Predictions are influenced by the original dataset's domain distribution
 - Reasoning complexity is subjective and context-dependent
 
-
 ## Training
 
 The model was fine-tuned using a regression objective with the following settings:
+
 - Learning rate: 5e-05
 - Batch size: 16
 - Optimizer: AdamW
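For context, the hyperparameters listed in the hunk above roughly correspond to a `Trainer`-based regression fine-tune like the sketch below. The dataset column names, the train/validation split, and the epoch count are assumptions; this is one plausible configuration, not the original training script.

```python
# Sketch: regression fine-tune of ModernBERT with the hyperparameters listed
# on the card. Column names ("text", "reasoning_level"), the split, and the
# epoch count are assumptions; consult the dataset card for the real schema.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    DataCollatorWithPadding,
    Trainer,
    TrainingArguments,
)

base_model = "answerdotai/ModernBERT-base"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForSequenceClassification.from_pretrained(
    base_model,
    num_labels=1,               # single scalar output
    problem_type="regression",  # MSE loss instead of cross-entropy
)

dataset = load_dataset("davanstrien/natural-reasoning-classifier", split="train")

def preprocess(batch):
    enc = tokenizer(batch["text"], truncation=True, max_length=512)
    enc["labels"] = [float(x) for x in batch["reasoning_level"]]
    return enc

tokenized = dataset.map(preprocess, batched=True, remove_columns=dataset.column_names)
splits = tokenized.train_test_split(test_size=0.1, seed=42)

args = TrainingArguments(
    output_dir="modernbert-reasoning-regressor",
    learning_rate=5e-5,              # from the card
    per_device_train_batch_size=16,  # from the card
    optim="adamw_torch",             # AdamW, as listed on the card
    num_train_epochs=3,              # assumption
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    data_collator=DataCollatorWithPadding(tokenizer),
)
trainer.train()
```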
@@ -75,20 +77,20 @@ The model was fine-tuned using a regression objective with the following settings:
 ### Using the pipeline API
 
 ```python
 from transformers import pipeline
 pipe = pipeline("text-classification", model="davanstrien/ModernBERT-based-Reasoning-Required")
 
 def predict_reasoning_level(text, pipe):
     # Get the raw prediction
     result = pipe(text)
     score = result[0]['score']
 
     # Round to nearest integer (optional)
     rounded_score = round(score)
 
     # Clip to valid range (0-4)
     rounded_score = max(0, min(4, rounded_score))
 
     # Create a human-readable interpretation (optional)
     reasoning_labels = {
         0: "No reasoning",
@@ -97,7 +99,7 @@ def predict_reasoning_level(text, pipe):
         3: "Strong reasoning",
         4: "Advanced reasoning"
     }
 
     return {
         "raw_score": score,
         "reasoning_level": rounded_score,
@@ -130,10 +132,8 @@ text = "The debate on artificial intelligence's role in society has become incre
 inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
 with torch.no_grad():
     outputs = model(**inputs)
 
 # Get regression score
 complexity_score = outputs.logits.item()
 print(f"Reasoning Complexity: {complexity_score:.2f}/4.00")
 ```
-
-
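The final hunk shows only the tail of the card's direct-usage example; the `tokenizer`, `model`, and `text` it relies on are defined earlier in the README, outside the diff context. A self-contained setup consistent with that snippet would look roughly like this; the card's actual lines may differ.

```python
# Assumed setup for the direct-usage snippet in the last hunk above. The
# README's real setup lines sit outside the diff context and may differ.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "davanstrien/ModernBERT-based-Reasoning-Required"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = (
    "The debate on artificial intelligence's role in society has become "
    "increasingly polarized. Some argue that AI will lead to widespread "
    "unemployment and concentration of power, while others contend it will "
    "create new jobs and democratize access to knowledge."
)

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# num_labels=1, so logits holds a single regression value
complexity_score = outputs.logits.item()
print(f"Reasoning Complexity: {complexity_score:.2f}/4.00")
```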
|