Can We Predict New Facts with Open Knowledge Graph Embeddings? A Benchmark for Open Link Prediction. Reading notes and personal understanding of the paper of the same name. 2020-12-13 | Knowledge Graph | KGE, OKG
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs. Reading notes and personal understanding of the paper of the same name. 2020-12-11 | Knowledge Graph | KGE, GAN, ZSL
R-MeN: A Relational Memory-based Embedding Model. Prerequisites: Self-Attention (see the post Transformer Explained). 2020.12.14: corrected errors. Covers the paper A Relational Memory-based Embedding Model for Triple Classification and Search Personalization. 2020-12-10 | Knowledge Graph | KGE, Attention, Memory Networks
Replacing Attention with Lightweight and Dynamic Convolutions. Prerequisites: Depthwise Convolution (see the post Depthwise Separable Convolution and Grouped Convolution); Attention (see Seq2Seq and Attention); Transformer (see Transformer Explained). Reading notes and personal understanding of the paper Pay Less Attention with Lightweight and Dynamic Convolutions. 2020-12-05 | Deep Learning | CNN, Attention
KG-BERT: BERT for Knowledge Graph Completion. Prerequisites: BERT (see ELMo, GPT, BERT). Reading notes and personal understanding of the paper KG-BERT: BERT for Knowledge Graph Completion. Basic Idea: in previous KGE methods, ... 2020-11-28 | Knowledge Graph | BERT, KGE
ConvE: Convolutional 2D Knowledge Graph Embeddings. Prerequisites: CNN. Reading notes and personal understanding of the paper Convolutional 2D Knowledge Graph Embeddings. Unlike the brief mention of ConvE in the earlier AcrE post, this one walks through the entire paper rather than only covering ... 2020-11-27 | Knowledge Graph | KGE, CNN
Depthwise Separable Convolution and Grouped Convolution. Prerequisites: CNN (see A Summary of Convolutional Neural Networks). This post focuses on two operations: depthwise separable convolution and grouped convolution. Depthwise separable convolution is used in MobileNet and Xception ... 2020-11-26 | Deep Learning | CNN
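As a quick illustration of the first operation above, here is a minimal PyTorch sketch of a depthwise separable convolution: a depthwise convolution (groups equal to the input channel count) followed by a 1x1 pointwise convolution, the pattern used in MobileNet and Xception. The channel and kernel sizes are illustrative assumptions, not values from the post.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # Depthwise: groups=in_ch applies one filter per input channel
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch)
        # Pointwise: a 1x1 convolution mixes information across channels
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 28, 28)           # (batch, channels, H, W), illustrative
y = DepthwiseSeparableConv(32, 64)(x)    # shape: torch.Size([1, 64, 28, 28])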
PyTorch Implementation: Transformer. Prerequisites: basic PyTorch operations; Transformer (see Transformer Explained). 2022.04.03: removed the claim that Pre-Norm works better than Post-Norm. This post implements the Transformer in PyTorch. 2020-11-23 | Deep Learning | NLP, Transformer, Pytorch
KEPLER: Knowledge Embedding and Pre-trained Language Representation. Prerequisites: BERT (see ELMo, GPT, BERT). Covers the paper KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation. 2020-11-21 | Knowledge Graph | NLP, BERT, KGE
Two Cautionary Papers on KGE. Reading notes and personal understanding of two cautionary papers in the KGE field. Work of this kind is rather rare and offers valuable guidance for the field's development. 2020.11.22: updated Reciprocal Relation. 2021.05.13: corrected Reciprocal ... 2020-11-20 | Knowledge Graph | KGE
PyTorch Implementation: Skip-Gram. Prerequisites: basic PyTorch operations; Word2Vec. This post implements Skip-Gram, one of the Word2Vec variants, in PyTorch. The implementation follows the tutorial PyTorch 实现 Word2Vec. 2020-11-19 | Deep Learning | Pytorch, Word2Vec
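Since the entry above centers on Skip-Gram, here is a minimal sketch of its core scoring step, assuming the usual two embedding tables and a dot-product score between center and context words; the vocabulary size and embedding dimension are illustrative, and this is not the post's actual implementation.

import torch
import torch.nn as nn

class SkipGram(nn.Module):
    def __init__(self, vocab_size, embed_dim):
        super().__init__()
        self.in_embed = nn.Embedding(vocab_size, embed_dim)   # center-word vectors
        self.out_embed = nn.Embedding(vocab_size, embed_dim)  # context-word vectors

    def forward(self, center, context):
        v = self.in_embed(center)    # (batch, dim)
        u = self.out_embed(context)  # (batch, dim)
        return (v * u).sum(dim=-1)   # dot-product score per (center, context) pair

model = SkipGram(vocab_size=1000, embed_dim=100)
scores = model(torch.tensor([5, 42]), torch.tensor([7, 13]))
# Training would push these scores up for observed pairs and down for
# sampled negatives, e.g. with a logistic (negative-sampling) loss.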
RoBERTa: A Robustly Optimized BERT Pretraining Approach. Prerequisites: BERT (see ELMo, GPT, BERT). Reading notes and personal understanding of the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach. 2020-11-18 | Deep Learning | NLP, BERT