SafeMathBot for NLP tasks in math learning environments

This model was fine-tuned from GPT-2 XL on over 3,000,000 posts and replies written by students and instructors in Algebra Nation (https://www.mathnation.com/). It was trained to let researchers control the safety of generated responses with the special tags [SAFE] and [UNSAFE].

Here is how to use it with the Hugging Face Transformers library:

from transformers import AutoTokenizer, AutoModelForCausalLM

# Special tokens the model was trained with
special_tokens_dict = {
    'additional_special_tokens': [
        '[SAFE]', '[UNSAFE]', '[OK]', '[SELF_M]', '[SELF_F]', '[SELF_N]',
        '[PARTNER_M]', '[PARTNER_F]', '[PARTNER_N]',
        '[ABOUT_M]', '[ABOUT_F]', '[ABOUT_N]', '<speaker1>', '<speaker2>'
    ],
    'bos_token': '<bos>',
    'eos_token': '<eos>',
}

math_bot_tokenizer = AutoTokenizer.from_pretrained('uf-aice-lab/SafeMathBot')
safe_math_bot = AutoModelForCausalLM.from_pretrained('uf-aice-lab/SafeMathBot')

# Register the special tokens in case the saved tokenizer does not already contain them
# (add_special_tokens skips tokens that are already present)
math_bot_tokenizer.add_special_tokens(special_tokens_dict)

text = "Replace me by any text you'd like."
encoded_input = math_bot_tokenizer(text, return_tensors='pt')
output = safe_math_bot(**encoded_input)  # forward pass; output.logits holds next-token scores
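
To actually produce a reply, call generate() on the model. The sketch below is for illustration only: the exact prompt template used during training (where the safety tag and the <speaker1>/<speaker2> tokens go) is not documented in this card, so the prompt layout here is an assumption and should be adjusted to match the training setup.

# Minimal generation sketch. The prompt layout (safety tag followed by speaker tokens)
# is an assumed format for illustration, not a documented training template.
prompt = "<bos>[SAFE]<speaker1>How do I solve 2x + 3 = 7?<speaker2>"
input_ids = math_bot_tokenizer(prompt, return_tensors='pt').input_ids
generated_ids = safe_math_bot.generate(
    input_ids,
    max_new_tokens=50,
    do_sample=True,
    top_p=0.9,
    pad_token_id=math_bot_tokenizer.eos_token_id,
)
print(math_bot_tokenizer.decode(generated_ids[0], skip_special_tokens=False))

Replacing [SAFE] with [UNSAFE] in the prompt changes the safety signal the model conditions on when generating the reply.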