Update README.md

README.md CHANGED

@@ -13,12 +13,12 @@ tags:
 CodeLlama-7B-QML is a large language model customized by the Qt Company for Fill-In-The-Middle code completion tasks in the QML programming language, especially for Qt Quick Controls compliant with Qt 6 releases. The CodeLlama-7B-QML model is designed for companies and individuals that want to self-host their LLM for HMI (Human Machine Interface) software development instead of relying on third-party hosted LLMs.
 
 This model reaches a score of 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison, the model scored:
-- CodeLlama-13B-QML (finetuned model from Qt): 79
-- Claude 3.7 Sonnet: 76
-- Claude 3.5 Sonnet: 68
-- CodeLlama 13B: 66
-- GPT-4o: 62
-- CodeLlama 7B: 61
+- CodeLlama-13B-QML (finetuned model from Qt): 79%
+- Claude 3.7 Sonnet: 76%
+- Claude 3.5 Sonnet: 68%
+- CodeLlama 13B: 66%
+- GPT-4o: 62%
+- CodeLlama 7B: 61%
 
 This model was fine-tuned based on raw data from over 5000 human-created QML code snippets using the LoRa fine-tuning method. CodeLlama-7B-QML is not optimised for the creation of Qt5-release compliant, C++, or Python code.
 
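As context for what a Fill-In-The-Middle request looks like in practice: the upstream CodeLlama models assemble the prompt from the code before and after the cursor using special infilling tokens. The sketch below assumes this fine-tune keeps the standard CodeLlama `<PRE>`/`<SUF>`/`<MID>` convention and uses a made-up Qt Quick Controls snippet; the exact template for this model may differ, so treat it only as an illustration.

```
<PRE>import QtQuick
import QtQuick.Controls

ApplicationWindow {
    visible: true
    Button {
        text: qsTr("Quit")
<SUF>
    }
}
<MID>
```

The model is expected to return only the missing middle, for example an `onClicked` handler, which the editor then splices back in between the prefix and the suffix.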
@@ -27,7 +27,7 @@ By accessing this model, you are agreeing to the Llama 2 terms and conditions of
 
 ## Usage:
 
-CodeLlama-7B-QML
+CodeLlama-7B-QML requires significant computing resources to perform with inference (response) times suitable for automatic code completion. Therefore, it should be used with a GPU accelerator, either in the cloud environment such as AWS, Google Cloud, Microsoft Azure, or locally.
 
 Large Language Models, including CodeLlama-7B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
 
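If the model is served through ollama, as set up in the steps below, a quick way to confirm that inference actually runs on the GPU rather than falling back to the CPU is `ollama ps`; this assumes a reasonably recent ollama release that reports the processor in use.

```
# after sending at least one request, list loaded models and whether they run on GPU or CPU
ollama ps
```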
@@ -52,7 +52,7 @@ https://ollama.com/download
 
 #### 4. Build the model in ollama
 ```
-ollama create theqtcompany/codellama-7b-
+ollama create theqtcompany/codellama-7b-qml -f Modelfile
 ```
 The model's name must be exactly as above if one wants to use the model in the Qt Creator
 
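Once the `ollama create` step above succeeds, a couple of stock ollama commands are enough to sanity-check the build before wiring the model into Qt Creator; the snippet below is a minimal sketch that assumes a default local ollama installation.

```
# confirm the model is registered under the exact name Qt Creator expects
ollama list

# optional interactive smoke test from the terminal
ollama run theqtcompany/codellama-7b-qml
```

`ollama list` should show theqtcompany/codellama-7b-qml in its output, and `ollama run` loads the model into an interactive session, which is a convenient way to check that response times are acceptable on the target hardware.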