(Preprint) CLSEBERT: Contrastive Learning for Syntax Enhanced Code Pre-Trained Model
Xin Wang ¹, Yasheng Wang ², Pingyi Zhou ², Meng Xiao ², Yadao Wang ², Li Li ³, Xiao Liu ⁴, Hao Wu ⁵, Jin Liu ¹, Xin Jiang ²
¹ School of Computer Science, Wuhan University
² Noah's Ark Lab, Huawei
³ Faculty of Information Technology, Monash University
⁴ School of Information Technology, Deakin University
⁵ School of Information Science and Engineering, Yunnan University
arXiv, 2021-08-10
Abstract

Pre-trained models for programming languages have proven their significant value in various code-related tasks, such as code search, code clone detection, and code translation. Currently, most pre-trained models treat a code snippet as a sequence of tokens or focus only on the data flow between code identifiers.

However, they ignore the rich syntax and hierarchy of code, which provide important structural information and semantic rules that can enhance code representations. In addition, although BERT-based code pre-trained models achieve high performance on many downstream tasks, the sequence representations natively derived from BERT have been shown to be of low quality, and they perform poorly on code matching and similarity tasks.

To address these problems, we propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model, to handle various code intelligence tasks. In the pre-training stage, we exploit the code syntax and hierarchy contained in the Abstract Syntax Tree (AST) and leverage contrastive learning to learn noise-invariant code representations. Besides masked language modeling (MLM), we introduce two novel pre-training objectives: one predicts the edges between nodes in the abstract syntax tree, and the other predicts the types of code tokens. Extensive experiments on four code intelligence tasks demonstrate the effectiveness of the proposed model.
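
To make the three pre-training signals above concrete, the sketch below illustrates, in PyTorch, an NT-Xent-style contrastive loss over two noisy views of the same code snippet, together with simple heads for the two auxiliary objectives (AST edge prediction and token type prediction). This is a hypothetical illustration under assumed shapes and vocabulary sizes, not the authors' released implementation; names such as nt_xent_loss, AuxiliaryHeads, and num_node_types are inventions for exposition.

    import torch
    import torch.nn.functional as F

    def nt_xent_loss(z1, z2, temperature=0.1):
        # z1, z2: (batch, dim) pooled encoder outputs of two noise-augmented
        # views of the same snippets; row i of z1 and row i of z2 are positives.
        z1 = F.normalize(z1, dim=-1)
        z2 = F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature       # (batch, batch) cosine similarities
        targets = torch.arange(z1.size(0), device=z1.device)
        # Each view is matched against its positive; the other rows in the
        # batch act as in-batch negatives.
        return F.cross_entropy(logits, targets)

    class AuxiliaryHeads(torch.nn.Module):
        # Assumed heads for the two auxiliary objectives: scoring whether an
        # AST edge links two tokens, and classifying each token's node type.
        def __init__(self, hidden=768, num_node_types=64):
            super().__init__()
            self.edge_scorer = torch.nn.Bilinear(hidden, hidden, 1)
            self.type_classifier = torch.nn.Linear(hidden, num_node_types)

        def forward(self, h, edge_pairs):
            # h: (batch, seq, hidden) token states from the encoder.
            # edge_pairs: (num_pairs, 3) long tensor of rows
            # (batch_idx, src_token, dst_token) for candidate AST edges.
            b, s, d = edge_pairs[:, 0], edge_pairs[:, 1], edge_pairs[:, 2]
            edge_logits = self.edge_scorer(h[b, s], h[b, d]).squeeze(-1)
            type_logits = self.type_classifier(h)  # (batch, seq, num_node_types)
            return edge_logits, type_logits

In such a setup, the total pre-training loss would sum the MLM term with the contrastive term and cross-entropy losses on edge_logits and type_logits; how CLSEBERT actually weights and combines these terms is specified in the paper itself.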