Model Overview
Description:
CodeLlama-7B-QML is a model fine-tuned for code completion tasks in QML, Qt's declarative UI language. It is designed for software developers who want to run a code completion LLM locally on their own machine.
This model reaches a score of 79% on the QML100 Fill-In-the-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:
- CodeLlama-13B-QML: 86%
- Claude 3.7 Sonnet: 76%
- Claude 3.5 Sonnet: 68%
- CodeLlama 13B: 66%
- GPT-4o: 62%
- CodeLlama 7B: 61%
The model was fine-tuned on raw data from over 5,000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-7B-QML is not optimized for generating Qt 5-compliant, C++, or Python code.
Terms of use:
By accessing this model, you agree to the Llama 2 license terms and conditions, the acceptable use policy, and Meta's privacy policy. By using this model, you further agree to the Qt AI Model terms & conditions.
Usage:
CodeLlama-7B-QML requires significant computing resources to achieve inference (response) times suitable for automatic code completion. It should therefore be used with a GPU accelerator.
Large Language Models, including CodeLlama-7B-QML, are not designed to be deployed in isolation; they should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to put system safeguards in place when building AI systems.
How to run CodeLlama-7B-QML:
We have preloaded the model to Ollama for your convenience.
- Download and install Ollama from Ollama's web page (if you are not already using it):
https://ollama.com/download
- Run the model with the following command in Ollama's CLI:
ollama run theqtcompany/codellama-7b-qml
Now you can set CodeLlama-7B-QML as the LLM for code completion in the Qt AI Assistant or other coding assistants. If you want to test the model on its own, you can send curl requests to Ollama's REST API from a terminal, as shown below.
curl -X POST http://localhost:11434/api/generate -d '{
  "model": "theqtcompany/codellama-7b-qml",
  "prompt": "<SUF>\n title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n width: 640\n height: 480\n visible: true\n<MID>",
  "stream": false,
  "options": {
    "temperature": 0.2,
    "top_p": 0.9,
    "num_predict": 500,
    "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"]
  }
}'
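The same request can be issued programmatically. Below is a minimal sketch in Python, assuming Ollama is serving its REST API on the default port 11434 and that the requests package is installed:

import requests

# Fill-in-the-middle request: the suffix comes first, then the prefix,
# then <MID> where the completion is generated.
payload = {
    "model": "theqtcompany/codellama-7b-qml",
    "prompt": "<SUF>\n title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n width: 640\n height: 480\n visible: true\n<MID>",
    "stream": False,
    "options": {
        "temperature": 0.2,
        "top_p": 0.9,
        "num_predict": 500,
        "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"],
    },
}

response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])  # the generated completion for the <MID> section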
In general, the prompt format for CodeLlama-7B-QML is:
"<SUF>{suffix}<PRE>{prefix}<MID>"
If there is no suffix, please use:
"<PRE>{prefix}<MID>"
Modify and Adapt CodeLlama-7B-QML:
The Hugging Face repository contains all necessary components, including the .safetensors files and tokenizer configurations, so you have everything needed to modify the model in various environments, adapt it to your specific requirements, or train it on a custom dataset.
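For example, the sketch below loads the model with the transformers library and attaches a fresh LoRA adapter with peft for further fine-tuning. The repository id is taken from the model tree below; the LoRA hyperparameters are illustrative assumptions, not the values used to train CodeLlama-7B-QML.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "QtGroup/CodeLlama-7B-QML"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit on a single GPU
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()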
Model Version:
v1.0
Attribution:
CodeLlama-7B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.
Model tree for QtGroup/CodeLlama-7B-QML:
Base model: meta-llama/CodeLlama-7b-hf