metadata
library_name: peft
base_model: TheBloke/Llama-2-7b-Chat-GPTQ
pipeline_tag: text-generation
inference: false
license: openrail
language:
- en
datasets:
- flytech/python-codes-25k
tags:
- text2code
- LoRA
- GPTQ
- Llama-2-7B-Chat
- text2python
- instruction2code
Llama-2-7b-Chat-GPTQ fine-tuned on PYTHON-CODES-25K
Generates Python code that accomplishes the instructed task.
LoRA Adapter Head
Description
Parameter-Efficient Fine-Tuning (PEFT) of the 4-bit GPTQ-quantized Llama-2-7b-Chat from TheBloke/Llama-2-7b-Chat-GPTQ on the flytech/python-codes-25k dataset.
- Model type: Causal language model with a LoRA adapter (4-bit GPTQ-quantized base)
- Language(s) (NLP): English
- License: openrail
- Finetuned from model: TheBloke/Llama-2-7b-Chat-GPTQ
- Dataset: flytech/python-codes-25k
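As a rough illustration of how such a LoRA adapter is attached to the 4-bit GPTQ base for training, here is a minimal sketch; the r, lora_alpha, dropout, and target-module values are assumptions for illustration, not this adapter's documented configuration.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the 4-bit GPTQ base model (requires optimum and auto-gptq).
base = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ", device_map="auto")
base = prepare_model_for_kbit_training(base)

# Illustrative LoRA configuration; the actual values used for this adapter are not listed above.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the LoRA weights are trainable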
Intended uses & limitations
Demonstrates the capability of an LLM on a fine-tuned downstream task: generating Python code from natural-language instructions. Implemented as a personal project.
How to use
instruction = "Write a Python function that checks whether a given number is prime."
Use a pipeline as a high-level helper
from transformers import pipeline

# "<this-adapter-repo-id>" is a placeholder for this repository's id on the Hub.
# Loading the adapter via the pipeline requires peft, plus optimum/auto-gptq for the GPTQ base.
generator = pipeline("text-generation", model="<this-adapter-repo-id>")
prompt = f"[INST] {instruction} [/INST]"  # Llama-2 chat prompt format
code = generator(prompt, max_new_tokens=256, do_sample=False)[0]["generated_text"]
print(code)
Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# "<this-adapter-repo-id>" is a placeholder for this repository's id on the Hub.
tokenizer = AutoTokenizer.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ")
base_model = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ", device_map="auto")
model = PeftModel.from_pretrained(base_model, "<this-adapter-repo-id>")

inputs = tokenizer(f"[INST] {instruction} [/INST]", return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
code = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)  # decode only the completion
print(code)
Training Details
Training Data
flytech/python-codes-25k
Training Procedure
Custom training loop with Hugging Face Accelerate (a loop sketch follows the hyperparameters below).
Preprocessing
- Prompt: the natural-language instruction from each record, formatted as a Llama-2 [INST] prompt
- Target: the corresponding Python code (see the preprocessing sketch below)
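A minimal sketch of this preprocessing, assuming the dataset exposes instruction/input/output fields and a plain Llama-2 [INST] prompt format; both are assumptions rather than documented details of this fine-tune.

from datasets import load_dataset

def to_training_text(record):
    # Assumed field names for flytech/python-codes-25k; adjust if they differ.
    instruction = record["instruction"]
    if record.get("input"):
        instruction += "\n" + record["input"]
    prompt = f"[INST] {instruction} [/INST]"
    # The target Python code follows the prompt in a single causal-LM training string.
    return {"text": prompt + " " + record["output"]}

dataset = load_dataset("flytech/python-codes-25k", split="train")
dataset = dataset.map(to_training_text)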
Training Hyperparameters
- Optimizer: AdamW
- lr: 2e-5
- decay: linear
- num_warmup_steps: 0
- batch_size: 8
- num_training_steps: 12500
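A minimal Accelerate training-loop sketch using the hyperparameters listed above; the peft-wrapped model, tokenized train_dataset, and data_collator are assumed to be prepared elsewhere and are not part of this card.

import torch
from accelerate import Accelerator
from torch.utils.data import DataLoader
from transformers import get_linear_schedule_with_warmup

accelerator = Accelerator()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=0, num_training_steps=12500
)
train_loader = DataLoader(train_dataset, batch_size=8, shuffle=True, collate_fn=data_collator)

model, optimizer, train_loader, scheduler = accelerator.prepare(
    model, optimizer, train_loader, scheduler
)

model.train()
step = 0
while step < 12500:
    for batch in train_loader:
        loss = model(**batch).loss  # causal-LM loss (labels included in the batch)
        accelerator.backward(loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
        step += 1
        if step >= 12500:
            break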
Hardware
- GPU: P100
Citing Dataset and BaseModel
Dataset: flytech/python-codes-25k — https://huggingface.co/datasets/flytech/python-codes-25k

@article{touvron2023llama2,
  author  = {Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and others},
  title   = {Llama 2: Open Foundation and Fine-Tuned Chat Models},
  journal = {arXiv preprint arXiv:2307.09288},
  year    = {2023},
  url     = {https://arxiv.org/abs/2307.09288}
}
Additional Information
- Github: Repository
Acknowledgment
Thanks to @AI at Meta for the pre-trained Llama-2 model, @TheBloke for the GPTQ-quantized checkpoint, and @flytech for the python-codes-25k dataset.
Model Card Authors
Swastik Maiti