Update README.md
README.md (changed)

@@ -33,7 +33,7 @@ Large Language Models, including CodeLlama-7B-QML, are not designed to be deploy
 
 ## How to run CodeLlama-7B-QML:
 
-1.
+1. Download and install Ollama from Ollama's web page (if you are not already using it):
 ```
 https://ollama.com/download
 ```
@@ -42,9 +42,8 @@ https://ollama.com/download
 ollama run theqtcompany/codellama-7b-qml
 ```
 
-
+Now you should be able to set and use CodeLlama-7B-QML as the LLM for code completions in the Qt AI Assistant. If you want to test the model in Ollama, you can write curl requests in Ollama's CLI as shown below.
 
-Here is a curl request example:
 ```
 curl -X POST http://localhost:11434/api/generate -d '{
 "model": "theqtcompany/codellama-7b-qml",