Prerequisites for this article: BERT (see ELMo, GPT, BERT). This article discusses the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach.
2020-11-18
RoBERTa: A Robustly Optimized BERT Pretraining Approach