Citation: YANG Ning, QIAN Ye, CHEN Jian. Research on named entity recognition for combine harvester fault domain[J]. Journal of Chinese Agricultural Mechanization, 2024, 45(8): 338-343. DOI: 10.13733/j.jcam.issn.2095-5553.2024.08.049
[1] Hammerton J. Named entity recognition with long short-term memory[C]. Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, 2003: 172-175.
[2] Collobert R, Weston J, Bottou L, et al. Natural language processing (almost) from scratch[J]. Journal of Machine Learning Research, 2011, 12: 2493-2537.
[3] Huang Z, Xu W, Yu K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv preprint arXiv:1508.01991, 2015.
[4] Dong C, Zhang J, Zong C, et al. Character-based LSTM-CRF with radical-level features for Chinese named entity recognition[C]. Natural Language Understanding and Intelligent Applications: 5th CCF Conference on Natural Language Processing and Chinese Computing, NLPCC 2016, and 24th International Conference on Computer Processing of Oriental Languages, ICCPOL 2016, Kunming, China, December 2-6, 2016, Proceedings 24. Springer International Publishing, 2016: 239-250.
[5] Ma R, Peng M, Zhang Q, et al. Simplify the usage of lexicon in Chinese NER[J]. arXiv preprint arXiv:1908.05969, 2019.
[6] Zhou J T, Zhang H, Jin D, et al. Dual adversarial neural transfer for low-resource named entity recognition[C]. Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019: 3461-3471.
[7] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[8] Cui Y, Che W, Liu T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[9] Yan H, Deng B, Li X, et al. TENER: Adapting transformer encoder for named entity recognition[J]. arXiv preprint arXiv:1911.04474, 2019.
[10] Madry A, Makelov A, Schmidt L, et al. Towards deep learning models resistant to adversarial attacks[J]. arXiv preprint arXiv:1706.06083, 2017.