---
library_name: transformers
license: cc-by-nc-4.0
datasets:
- oumi-ai/oumi-c2d-d2c-subset
- oumi-ai/oumi-synthetic-claims
- oumi-ai/oumi-synthetic-document-claims
language:
- en
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---
[GitHub](https://github.com/oumi-ai/oumi) | [Documentation](https://oumi.ai/docs/en/latest/index.html) | [Blog](https://oumi.ai/blog) | [Discord](https://discord.gg/oumi)
# oumi-ai/HallOumi-8B-classifier
<!-- Provide a quick summary of what the model is/does. -->
Introducing **HallOumi-8B-classifier**, a _fast_ **SOTA hallucination detection model**, outperforming DeepSeek R1, OpenAI o1, Google Gemini 1.5 Pro, and Anthropic Sonnet 3.5 at only 8 billion parameters!
<!-- Give HallOumi a try now! -->
<!-- * Demo: https://oumi.ai/halloumi-demo -->
<!-- * Github: https://github.com/oumi-ai/oumi/tree/main/configs/projects/halloumi -->
| Model | Balanced Accuracy | Macro F1 Score | Open Source? | Model Size |
| --------------------- | ----------------- | --------------------------------------- | ------------ | ---------- |
| **HallOumi-8B-classifier** | **76.8% ± 2.0%** | **78.5% ± 2.1%** | ✔️ | 8B |
| Anthropic Sonnet 3.5 | 67.3% ± 2.7% | 69.6% ± 2.8% | ❌ | ?? |
| OpenAI o1-preview | 64.5% ± 2.0% | 65.9% ± 2.3% | ❌ | ?? |
| DeepSeek R1 | 60.7% ± 2.1% | 61.6% ± 2.5% | ✔️ | 671B |
| Llama 3.1 405B | 58.7% ± 1.7% | 58.8% ± 2.4% | ✔️ | 405B |
| Google Gemini 1.5 Pro | 52.9% ± 1.0% | 48.2% ± 1.8% | ❌ | ?? |
**HallOumi-8B-classifier**, the hallucination classification model built with Oumi, is an end-to-end binary classification system that enables *fast and accurate* assessment of the hallucination probability of any written content (AI or human-generated).
* ✔️ Fast with high accuracy
* ✔️ Per-claim support (must call once per claim)
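Below is a minimal inference sketch. It assumes the checkpoint loads as a standard `transformers` sequence-classification model and that the context and claim are concatenated into a single prompt; the prompt template, label mapping, and helper name are assumptions made for illustration, so check the official HallOumi configs and examples before relying on them.

```python
# Minimal inference sketch. Assumptions (not confirmed by this card): the
# checkpoint exposes a standard sequence-classification head, the context and
# claim are joined with the <context>/<claim> template below, and the last
# label corresponds to "unsupported / hallucinated". Check the official
# HallOumi examples before relying on any of these.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "oumi-ai/HallOumi-8B-classifier"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()


def hallucination_probability(context: str, claim: str) -> float:
    """Score a single claim against a source document (one call per claim)."""
    text = f"<context>\n{context}\n</context>\n\n<claim>\n{claim}\n</claim>"
    inputs = tokenizer(text, return_tensors="pt", truncation=True).to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, num_labels)
    probs = torch.softmax(logits, dim=-1)[0]
    return probs[-1].item()  # assumed: last label == hallucinated


context = "Acme Corp was founded in 1999 and is headquartered in Denver."
print(hallucination_probability(context, "Acme Corp was founded in 2005."))
```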
## Hallucinations
Hallucinations are often cited as the most important obstacle to deploying generative models in commercial and personal applications, and for good reason:
* [Lawyers sanctioned for briefing where ChatGPT cited 6 fictitious cases](https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/)
* [Air Canada required to honor refund policy made up by its AI support chatbot](https://www.wired.com/story/air-canada-chatbot-refund-policy/)
* [AI suggesting users should make glue pizza and eat rocks](https://www.bbc.com/news/articles/cd11gzejgz4o)
It ultimately comes down to an issue of **trust** — generative models are trained to produce outputs which are **probabilistically likely**, but not necessarily **true**.
While such tools are certainly useful in the right hands, the inability to trust them prevents AI from being adopted more broadly in settings where it could be used safely and responsibly.
## Building Trust with Verifiability
To begin trusting AI systems, we have to be able to verify their outputs. By "verify," we specifically mean being able to:
* Understand the **truthfulness** of a particular statement produced by any model.
* Understand what **information supports that statement’s truth** (or lack thereof).
* Have **full traceability** connecting the statement to that information.
Missing any one of these aspects results in a system that cannot be verified and therefore cannot be trusted. Even with all three, that is not enough: we also have to be able to do these things in a way that is **meticulous**, **scalable**, and **human-readable**.
## Model Details
- **Developed by:** [Oumi AI](https://oumi.ai/)
- **Model type:** Small Language Model
- **Language(s) (NLP):** English
- **License:** [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/deed.en)
- **Finetuned from model:** [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct)
<!-- - **Demo:** [HallOumi Demo](https://oumi.ai/halloumi) -->
---
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
Use this model to verify claims and detect hallucinations in scenarios where a known source of truth (a grounding context document) is available.
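Because the classifier is called once per claim, a typical workflow splits a response into individual claims and scores each one against the source document. The sketch below uses the generic `text-classification` pipeline; whether this checkpoint works with that pipeline out of the box, along with the prompt template and label names, is an assumption to verify against the official examples.

```python
# Per-claim verification loop. Assumptions: the checkpoint works with the
# generic text-classification pipeline, and the <context>/<claim> template
# matches the training format; verify both against the HallOumi examples.
import torch
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="oumi-ai/HallOumi-8B-classifier",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

context = "The Eiffel Tower is 330 metres tall and was completed in 1889."
claims = [
    "The Eiffel Tower was completed in 1889.",  # supported by the context
    "The Eiffel Tower is located in Berlin.",   # not supported (hallucinated)
]

for claim in claims:
    # The model is a per-claim classifier, so it is called once per claim.
    text = f"<context>\n{context}\n</context>\n\n<claim>\n{claim}\n</claim>"
    result = classifier(text, truncation=True)[0]
    print(f"{claim!r}: {result['label']} (score={result['score']:.2f})")
```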
<!-- Demo: https://oumi.ai/halloumi -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
Smaller LLMs have limited capabilities and should be used with caution. Avoid using this model for purposes outside of claim verification.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model was finetuned with Llama-3.1-405B-Instruct data on top of a Llama-3.1-8B-Instruct model, so any biases or risks associated with those models may be present.
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
Training data:
- [oumi-ai/oumi-synthetic-document-claims](https://huggingface.co/datasets/oumi-ai/oumi-synthetic-document-claims)
- [oumi-ai/oumi-synthetic-claims](https://huggingface.co/datasets/oumi-ai/oumi-synthetic-claims)
- [oumi-ai/oumi-anli-subset](https://huggingface.co/datasets/oumi-ai/oumi-anli-subset)
- [oumi-ai/oumi-c2d-d2c-subset](https://huggingface.co/datasets/oumi-ai/oumi-c2d-d2c-subset)
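These mixes are public on the Hugging Face Hub, so they can be inspected directly; a quick sketch is below (the `train` split and column layout are assumptions).

```python
# Quick inspection of one training mix. Assumption: a "train" split exists;
# print the dataset features to see the actual column names.
from datasets import load_dataset

ds = load_dataset("oumi-ai/oumi-synthetic-claims", split="train")
print(ds)      # row count and column names
print(ds[0])   # one raw example
```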
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
Training notebook: Coming Soon
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
Eval notebook: Coming Soon
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
- **Hardware Type:** A100-80GB
- **Hours used:** 1.5 (on 4 × 8 = 32 GPUs)
- **Cloud Provider:** Google Cloud Platform
- **Compute Region:** us-east5
- **Carbon Emitted:** 0.15 kg
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
```
@misc{oumiHalloumi8BClassifier,
  author = {Panos Achlioptas and Jeremiah Greer and Kostas Aisopos and Michael A. Schuler and Oussama Elachqar and Emmanouil Koukoumidis},
  title = {HallOumi-8B-classifier},
  month = {March},
  year = {2025},
  url = {https://huggingface.co/oumi-ai/HallOumi-8B-classifier}
}
@software{oumi2025,
  author = {Oumi Community},
  title = {Oumi: an Open, End-to-end Platform for Building Large Foundation Models},
  month = {January},
  year = {2025},
  url = {https://github.com/oumi-ai/oumi}
}
```