---
license: apache-2.0
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-Coder-32B-Instruct
- Qwen/QwQ-32B-Preview
- Qwen/Qwen2.5-Coder-32B
pipeline_tag: text-generation
tags:
- merge
new_version: YOYO-AI/QwQ-coder-32B
---
# QwQ-Coder-instruct

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e174e202fa032de4143324/qFy5klTtp3EJiNLMNlYq0.png)

## Introduction

Without compromising the long-chain reasoning capabilities of the **QwQ** model, integrating **Qwen2.5-Coder-32B-Instruct** significantly enhances the model's **coding ability** and **instruction following**. In my practical tests, the results have been very impressive.

## Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method, with [Qwen/Qwen2.5-Coder-32B](https://huggingface.co/Qwen/Qwen2.5-Coder-32B) as the base model.

### Models Merged

The following models were included in the merge:
* [Qwen/Qwen2.5-Coder-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-32B-Instruct)
* [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: sce
models:
  # Pivot model
  - model: Qwen/Qwen2.5-Coder-32B
  # Target models
  - model: Qwen/QwQ-32B-Preview
  - model: Qwen/Qwen2.5-Coder-32B-Instruct
base_model: Qwen/Qwen2.5-Coder-32B
parameters:
  select_topk: 1
dtype: bfloat16
normalize: true
```
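
To try the merged model's coding and instruction-following behavior, here is a minimal `transformers` sketch. It assumes the merged checkpoint is published under the repo id `YOYO-AI/QwQ-coder-instruct` (hypothetical; substitute the actual repo id or a local path to the merge output) and that the model ships a standard Qwen-style chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id for this card; replace with the actual repo id or local merge output path.
model_id = "YOYO-AI/QwQ-coder-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # picks up the bfloat16 weights produced by the merge
    device_map="auto",
)

# Simple coding prompt, formatted with the model's chat template.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```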