---
license: llama2
base_model:
  - meta-llama/CodeLlama-7b-hf
base_model_relation: adapter
tags:
  - QML
  - Code-Completion
---

Model Overview

Description:

CodeLlama-7B-QML is a large language model customized by the Qt Company for Fill-In-The-Middle code completion tasks in the QML programming language, especially for Qt Quick Controls compliant with Qt 6 releases. The CodeLlama-7B-QML model is designed for software developers who want to run their code completion LLM locally on their computer.

This model reaches a score of 79% on the QML100 Fill-In-The-Middle code completion benchmark for Qt 6-compliant code. In comparison, other models scored:

  • CodeLlama-13B-QML: 79%
  • Claude 3.7 Sonnet: 76%
  • Claude 3.5 Sonnet: 68%
  • CodeLlama 13B: 66%
  • GPT-4o: 62%
  • CodeLlama 7B: 61%

This model was fine-tuned on raw data from over 5,000 human-created QML code snippets using the LoRA fine-tuning method. CodeLlama-7B-QML is not optimized for the creation of Qt5-release compliant, C++, or Python code.
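
For context, LoRA fine-tuning attaches small low-rank adapter matrices to the base model and trains only those, leaving the base weights frozen. The sketch below shows the general shape of such a setup with Hugging Face's peft library; the rank, alpha, dropout, and target modules shown are illustrative assumptions, not the recipe actually used to train CodeLlama-7B-QML.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model that this adapter targets (meta-llama/CodeLlama-7b-hf).
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/CodeLlama-7b-hf",
    torch_dtype=torch.bfloat16,
)

# Illustrative LoRA configuration; these hyperparameters are assumptions
# for demonstration only, not the published training settings.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable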

Terms of use:

By accessing this model, you are agreeing to the terms and conditions of the Llama 2 license, the acceptable use policy, and Meta’s privacy policy. By using this model, you are furthermore agreeing to the Qt AI Model terms & conditions.

Usage:

CodeLlama-7B-QML requires significant computing resources to achieve inference (response) times suitable for automatic code completion. Therefore, it should be used with a GPU accelerator.

Large Language Models, including CodeLlama-7B-QML, are not designed to be deployed in isolation but instead should be deployed as part of an overall AI system with additional safety guardrails as required. Developers are expected to deploy system safeguards when building AI systems.

How to run CodeLlama-7B-QML:

  1. Download and install Ollama from Ollama's web page (if you are not using it yet):
https://ollama.com/download
  2. Run the model with the following command in Ollama's CLI:
ollama run theqtcompany/codellama-7b-qml

Now, you should be able to set and use CodeLlama-7B-QML as the LLM for code completion in the Qt AI Assistant. If you want to test the model directly, you can send curl requests to the local Ollama server as shown below.

curl -X POST http://localhost:11434/api/generate -d '{
  "model": "theqtcompany/codellama-7b-qml",
  "prompt": "<SUF>\n    title: qsTr(\"Hello World\")\n}<PRE>import QtQuick\n\nWindow {\n    width: 640\n    height: 480\n    visible: true\n<MID>",
  "stream": false,
  "options": {
    "temperature": 0.2,
    "top_p": 0.9,
    "num_predict": 500,
    "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"]
  }
}'
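
If you prefer to test the model from a script, the same request can be sent from Python. This is a minimal sketch, assuming the third-party requests package is installed and Ollama is serving on its default port; the prefix and suffix values simply mirror the curl example above.

import requests

# Illustrative QML fragment around the gap to be completed.
prefix = 'import QtQuick\n\nWindow {\n    width: 640\n    height: 480\n    visible: true\n'
suffix = '\n    title: qsTr("Hello World")\n}'

payload = {
    "model": "theqtcompany/codellama-7b-qml",
    # Fill-in-the-middle prompt format documented below: "<SUF>{suffix}<PRE>{prefix}<MID>"
    "prompt": f"<SUF>{suffix}<PRE>{prefix}<MID>",
    "stream": False,
    "options": {
        "temperature": 0.2,
        "top_p": 0.9,
        "num_predict": 500,
        "stop": ["<SUF>", "<PRE>", "</PRE>", "</SUF>", "<EOT>", "\\end", "<MID>", "</MID>", "##"],
    },
}

# With "stream" set to false, Ollama returns one JSON object whose
# "response" field holds the generated completion.
response = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
response.raise_for_status()
print(response.json()["response"])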

The prompt format:

"<SUF>{suffix}<PRE>{prefix}<MID>"

If there is no suffix, please use:

"<PRE>{prefix}<MID>"

Model Version:

v1.0

Attribution:

CodeLlama-7B is a model of the Llama 2 family. Llama 2 is licensed under the LLAMA 2 Community License, Copyright (c) Meta Platforms, Inc. All Rights Reserved.