简介 Brief Introduction


The ZEN1 model, which uses N-grams to enhance text semantics and has 224M parameters, excels at NLU tasks.

模型分类 Model Taxonomy

| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN1 | 224M | 中文 Chinese |

模型信息 Model Information


In collaboration with the ZEN team, we open-source and publicly release ZEN1 through our Fengshen Framework. More precisely, by incorporating knowledge extracted via unsupervised learning, ZEN captures textual information at different granularities through N-gram representations. ZEN1 achieves good performance gains even when trained on a single small corpus (low-resource scenarios). Next, we will continue working with the ZEN team to explore the optimization of pre-trained language models (PLMs) and further improve performance on downstream tasks.

下游效果 Performance

分类任务 Classification

| Model | Dataset | Acc |
| :----: | :----: | :----: |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | TNews | 56.82% |

抽取任务 Extraction

| Model | Dataset | F1 |
| :----: | :----: | :----: |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | OntoNotes 4.0 | 80.8% |

使用 Usage

模型下载地址 Download Address


加载模型 Loading Models


Since the transformers library does not include the ZEN1 architecture, you can find the model definition and run the code in Fengshenbang-LM.

```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```

```python
from fengshen.models.zen1.ngram_utils import ZenNgramDict
from fengshen.models.zen1.tokenization import BertTokenizer
from fengshen.models.zen1.modeling import ZenForSequenceClassification, ZenForTokenClassification

pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese'

tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```


You can get classification and extraction examples below.

分类 Classification example in Fengshen

抽取 Extraction example in Fengshen

引用 Citation


If you are using this resource for your work, please cite our paper for this model:

```
@inproceedings{diao-etal-2020-zen,
  author    = {Shizhe Diao and
               Jiaxin Bai and
               Yan Song and
               Tong Zhang and
               Yonggang Wang},
  title     = {{ZEN:} Pre-training Chinese Text Encoder Enhanced by N-gram Representations},
  booktitle = {{EMNLP} (Findings)},
  series    = {Findings of {ACL}},
  volume    = {{EMNLP} 2020},
  pages     = {4729--4740},
  publisher = {Association for Computational Linguistics},
  year      = {2020}
}
```


If you are using this resource for your work, please cite our overview paper:

```
@article{fengshenbang,
  author    = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
  title     = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
  journal   = {CoRR},
  volume    = {abs/2209.02970},
  year      = {2022}
}
```


You can also cite our website: