further shown that pre-trained contextual embeddings, obtained by training a BERT model on source code, can be an extremely powerful representation of source code.
Q1: How do contextual embeddings compare against word embeddings? CuBERT outperforms BiLSTM models initialized with pre-trained, source-code-specific word embeddings.
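To make the comparison concrete, the following is a minimal PyTorch sketch of the kind of baseline referred to here: a BiLSTM classifier whose embedding layer is initialized from pre-trained, source-code-specific word vectors. The vocabulary size, embedding dimension, and the random `vectors` tensor are illustrative assumptions, not values from the text.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """BiLSTM baseline whose embedding layer starts from pre-trained word vectors."""

    def __init__(self, pretrained_vectors: torch.Tensor, hidden_size: int = 128, num_classes: int = 2):
        super().__init__()
        # Initialize the embedding table from pre-trained source-code word embeddings
        # (e.g., Word2Vec vectors trained on a code corpus); fine-tuned during training.
        self.embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=False)
        self.lstm = nn.LSTM(
            input_size=pretrained_vectors.size(1),
            hidden_size=hidden_size,
            batch_first=True,
            bidirectional=True,
        )
        self.classifier = nn.Linear(2 * hidden_size, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        embedded = self.embedding(token_ids)    # (batch, seq_len, emb_dim)
        outputs, _ = self.lstm(embedded)        # (batch, seq_len, 2 * hidden_size)
        pooled = outputs.mean(dim=1)            # simple mean pooling over tokens
        return self.classifier(pooled)          # class logits

# Illustrative usage with random stand-in "pre-trained" vectors (vocab 5000, dim 100).
vectors = torch.randn(5000, 100)
model = BiLSTMClassifier(vectors)
logits = model(torch.randint(0, 5000, (4, 32)))  # batch of 4 token sequences, length 32
```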
A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings such as BERT.
[24] extended this idea to programming-language understanding tasks, deriving contextual embeddings of source code by training a BERT model on source code.
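As an illustration of how such contextual embeddings are used in practice, here is a minimal sketch based on the Hugging Face transformers library, with the publicly available microsoft/codebert-base checkpoint as a stand-in for a BERT model pre-trained on source code (the checkpoint choice is an assumption for illustration only; it is not the model used by [24]).

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative checkpoint: a BERT-style encoder pre-trained on source code
# (stand-in only; any code-pretrained BERT variant is used the same way).
MODEL_NAME = "microsoft/codebert-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

code_snippet = "def add(a, b):\n    return a + b"

# Tokenize the code and run it through the encoder to obtain contextual embeddings:
# one vector per (sub)token, conditioned on the entire snippet.
inputs = tokenizer(code_snippet, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state      # (1, num_tokens, hidden_size)
snippet_embedding = token_embeddings.mean(dim=1)  # pooled vector for the whole snippet
print(token_embeddings.shape, snippet_embedding.shape)
```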