---
datasets:
  - pubmed
language:
  - en
tags:
  - BERT
---

# Model Card for MDDDDR/bert_large_uncased_NER

- `base_model`: google-bert/bert-large-uncased
- `hidden_size`: 1024
- `max_position_embeddings`: 512
- `num_attention_heads`: 16
- `num_hidden_layers`: 24
- `vocab_size`: 30522
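
These configuration values can be verified against the checkpoint itself. A minimal sketch (not part of the original card) using the standard `transformers` `AutoConfig` API; the printed values are the ones listed above:

```python
from transformers import AutoConfig

# load the configuration shipped with the checkpoint
config = AutoConfig.from_pretrained('MDDDDR/bert_large_uncased_NER')

print(config.hidden_size)              # 1024
print(config.max_position_embeddings)  # 512
print(config.num_attention_heads)      # 16
print(config.num_hidden_layers)        # 24
print(config.vocab_size)               # 30522
```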

## Basic usage

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import numpy as np

# label-id to tag mapping
id2tag = {0: 'O', 1: 'B_MT', 2: 'I_MT'}

# load model & tokenizer
MODEL_NAME = 'MDDDDR/bert_large_uncased_NER'

model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

# prepare input
text = 'mental disorder can also contribute to the development of diabetes through various mechanism including increased stress, poor self care behavior, and adverse effect on glucose metabolism.'
tokenized = tokenizer(text, return_tensors='pt')

# forward pass
output = model(**tokenized)

# logits shape: (batch_size, sequence_length, num_labels)
logits = output.logits.detach().cpu().numpy()

# take the argmax per token and drop the [CLS] / [SEP] positions
pred = np.argmax(logits, axis=2)[0][1:-1]

# print the predicted tag for each token
for token, label_id in zip(tokenizer.tokenize(text), pred):
    print("{}\t{}".format(id2tag[label_id], token))
    # B_MT  mental
    # B_MT  disorder
```
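
Continuing from the snippet above, the per-token tags can be collapsed into entity spans. This is a minimal sketch, not part of the original card: the `group_entities` helper is hypothetical, merges WordPiece continuations (`##...`) back into words, and collects consecutive `B_MT`/`I_MT` tokens into one span. It reuses the `tokenizer`, `text`, `pred`, and `id2tag` variables defined above.

```python
# builds on tokenizer, text, pred, id2tag from the snippet above
def group_entities(tokens, tags):
    """Collect consecutive B_MT/I_MT tokens into entity strings."""
    entities, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == 'B_MT':
            if current:
                entities.append(' '.join(current))
            current = [token]
        elif tag == 'I_MT' and current:
            if token.startswith('##'):
                # merge a WordPiece continuation into the previous word
                current[-1] += token[2:]
            else:
                current.append(token)
        else:
            if current:
                entities.append(' '.join(current))
            current = []
    if current:
        entities.append(' '.join(current))
    return entities

tokens = tokenizer.tokenize(text)
tags = [id2tag[label_id] for label_id in pred]
print(group_entities(tokens, tags))
# e.g. ['mental', 'disorder', ...] depending on the model's predictions
```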