## Terms of use:

By accessing this model, you are agreeing to the Llama 2 terms and conditions of the [license](https://github.com/meta-llama/llama/blob/main/LICENSE), [acceptable use policy](https://github.com/meta-llama/llama/blob/main/USE_POLICY.md) and [Meta’s privacy policy](https://www.facebook.com/privacy/policy/). By using this model, you are furthermore agreeing to [Qt AI Model terms & conditions](https://www.qt.io/terms-conditions).

## Usage:

Large language models, including CodeLlama-13B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.
## How to run in ollama

#### Install ollama

https://ollama.com/download

These instructions were written with ollama version 0.5.7.

#### Download the model repository

#### Open a terminal and go to the repository

#### Build the model in ollama
```
ollama create <your-model-name> -f Modelfile
e.g. ollama create customcodellama13bqml -f Modelfile
```

#### Run the model
```
ollama run <your-model-name>
e.g. ollama run customcodellama13bqml
```
You can start writing in the terminal or send curl requests.

Curl request example:
```
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "customcodellama13bqml",
  "prompt": "<SUF>\n    title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n    width: 640\n    height: 480\n    visible: true\n<MID>",
  "stream": false,
  "options": {
    "temperature": 0.4,
    "top_p": 0.9,
    "repeat_penalty": 1.1,
    "num_predict": 300,
    "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "< EOT >", "\\end", "<MID>", "</MID>", "##"]
  }
}'
```
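The same request can be sent from a script using only the Python standard library. This is a minimal sketch, not part of the model distribution: the `generate` helper name is ours, the model name mirrors the `ollama create` example above, and it assumes an ollama server is running on the default port 11434.

```python
import json
import urllib.request

def generate(payload, url="http://localhost:11434/api/generate"):
    """POST a request to a local ollama server and return the generated text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Same request body as the curl example above: the model fills in the
# gap between the Window properties (prefix) and the title line (suffix).
payload = {
    "model": "customcodellama13bqml",  # the name you used with `ollama create`
    "prompt": ("<SUF>\n    title: qsTr(\"Hello World\")\n}"
               "<PRE>import QtQuick\n\nWindow {\n    width: 640\n"
               "    height: 480\n    visible: true\n<MID>"),
    "stream": False,
    "options": {
        "temperature": 0.4,
        "top_p": 0.9,
        "repeat_penalty": 1.1,
        "num_predict": 300,
        "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "< EOT >",
                 "\\end", "<MID>", "</MID>", "##"],
    },
}
# print(generate(payload))  # uncomment with the server running
```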
#### The prompt format:
```
"<SUF>{suffix}<PRE>{prefix}<MID>"
```

If there is no suffix, please use:
```
"<PRE>{prefix}<MID>"
```
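The two templates above can be wrapped in a small helper so callers never emit the wrong variant; a sketch in Python (the `fim_prompt` function name is ours, not part of the model or ollama):

```python
def fim_prompt(prefix, suffix=None):
    """Build a fill-in-the-middle prompt in the format the model expects."""
    if suffix:
        # Suffix available: give the model both sides of the gap.
        return f"<SUF>{suffix}<PRE>{prefix}<MID>"
    # No suffix: plain left-to-right completion of the prefix.
    return f"<PRE>{prefix}<MID>"
```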
#### Happy coding!

## Model Version:

v1.0