---
pipeline_tag: image-text-to-text
library_name: transformers
license: apache-2.0
---

# ChemVLM-8B: A Multimodal Large Language Model for Chemistry

This is the 8B version of ChemVLM, a multimodal large language model designed for chemical applications.

## Paper

ChemVLM: Exploring the Power of Multimodal Large Language Models in Chemistry Area

## Abstract

Large Language Models (LLMs) have achieved remarkable success and have been applied across various scientific fields, including chemistry. However, many chemical tasks require the processing of visual information, which cannot be successfully handled by existing chemical LLMs. This brings a growing need for models capable of integrating multimodal information in the chemical domain. In this paper, we introduce ChemVLM, an open-source chemical multimodal large language model specifically designed for chemical applications. ChemVLM is trained on a carefully curated bilingual multimodal dataset that enhances its ability to understand both textual and visual chemical information, including molecular structures, reactions, and chemistry examination questions. We develop three datasets for comprehensive evaluation, tailored to Chemical Optical Character Recognition (OCR), Multimodal Chemical Reasoning (MMCR), and Multimodal Molecule Understanding tasks. We benchmark ChemVLM against a range of open-source and proprietary multimodal large language models on various tasks. Experimental results demonstrate that ChemVLM achieves competitive performance across all evaluated tasks. Our model can be found at https://huggingface.co/AI4Chem/ChemVLM-26B.

## Model Description

The architecture of ChemVLM is based on InternVLM and incorporates both vision and language processing components. The model is trained on a bilingual multimodal dataset containing chemical information, including molecular structures, reactions, and chemistry exam questions. More details about the architecture can be found in the GitHub README.

*(Figure: ChemVLM architecture)*

## Citation

```bibtex
@inproceedings{li2025chemvlm,
  title={Chemvlm: Exploring the power of multimodal large language models in chemistry area},
  author={Li, Junxian and Zhang, Di and Wang, Xunzhi and Hao, Zeying and Lei, Jingdi and Tan, Qian and Zhou, Cai and Liu, Wei and Yang, Yaotian and Xiong, Xinrui and others},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={1},
  pages={415--423},
  year={2025}
}
```

## Codebase and Datasets

Codebase and datasets can be found at https://github.com/AI4Chem/ChemVlm.

## Performance of the 8B model on several tasks

| Dataset | MMChemOCR | CMMU | MMCR-bench | Reaction type |
|---|---|---|---|---|
| Metric | Tanimoto similarity / [email protected] | Score (%, judged by GPT-4o) | Score (%, judged by GPT-4o) | Accuracy (%) |
| ChemVLM-8B | 81.75 / 57.69 | 52.7 (SOTA) | 33.6 | 16.79 |
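The MMChemOCR metric compares a predicted structure against the reference by fingerprint Tanimoto (Jaccard) similarity, with [email protected] counting exact matches. A minimal sketch of the coefficient itself (not the official evaluation code; the bit sets below are stand-ins for real molecular fingerprints):

```python
def tanimoto(a: set[int], b: set[int]) -> float:
    """Tanimoto (Jaccard) coefficient between two fingerprint bit sets."""
    if not a and not b:
        return 1.0  # two empty fingerprints are conventionally identical
    return len(a & b) / len(a | b)

# Stand-in "fingerprints": indices of the set bits.
pred = {1, 4, 7, 9}
ref = {1, 4, 7, 10}
print(tanimoto(pred, ref))        # 3 shared bits / 5 total bits = 0.6
print(tanimoto(ref, ref) == 1.0)  # [email protected] counts exact matches like this
```

In practice the fingerprints would come from a cheminformatics toolkit (e.g. Morgan fingerprints), but the coefficient computed over them is the same.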

## Quick Start

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode
from PIL import Image

# ... (helper functions from original model card)

tokenizer = AutoTokenizer.from_pretrained('AI4Chem/ChemVLM-8B', trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "AI4Chem/ChemVLM-8B",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True
).cuda().eval()

query = "Please describe the molecule in the image."
image_path = "your image path"
pixel_values = load_image(image_path, max_num=6).to(torch.bfloat16).cuda()

gen_kwargs = {"max_length": 1000, "do_sample": True, "temperature": 0.7, "top_p": 0.9}

response = model.chat(tokenizer, pixel_values, query, gen_kwargs)
print(response)
```

Install the required libraries with `pip install "transformers>=4.37.0" sentencepiece einops timm "accelerate>=0.26.0"` (quoting the version specifiers so the shell does not treat `>=` as redirection). Make sure `torch` and `torchvision` are installed as well.
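Equivalently, the dependencies can be pinned in a `requirements.txt`; the version bounds below are taken from the install note above, while `torch` and `torchvision` are left unpinned as an assumption:

```text
transformers>=4.37.0
sentencepiece
einops
timm
accelerate>=0.26.0
torch
torchvision
```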