2021-06-29
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
UniLM: Unified Language Model Pre-training for Natural Language Understanding and Generation
MASS: Masked Sequence to Sequence Pre-training for Language Generation
SpanBERT: Improving Pre-training by Representing and Predicting Spans
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
BART and mBART
GAKE: Graph Aware Knowledge Embedding
HAKE: Learning Hierarchy-Aware Knowledge Graph Embeddings for Link Prediction
ReInceptionE: Relation-Aware Inception Network with Joint Local-Global Structural Information for KGE
KBAT: Learning Attention-based Embeddings for Relation Prediction in KGs
CompGCN: Composition-based Multi-Relational Graph Convolutional Networks
KEQA: Knowledge Graph Embedding Based Question Answering