DaNing
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Prerequisites for this post: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper RoBERTa: A Robustly Optimized BERT Pretraining Approach…
2020-11-18
Integrating Image-Based and Knowledge-Based Representation Learning
Prerequisites for this post: AlexNet (see The History of Convolutional Neural Networks), Attention (see Seq2Seq and Attention), TransE (see TransE: Translating Embeddings for Modeling Multi-relational Data)…
2020-11-13
TransE: Translating Embeddings for Modeling Multi-relational Data
This post is a reading note on the paper Translating Embeddings for Modeling Multi-relational Data…
2020-11-13
CoKE: Contextualized Knowledge Graph Embedding
Prerequisites for this post: Self-Attention, BERT. 2020.11.17: resolved the confusion about label leakage. 2021.04.09: corrected the description of the visualization experiment…
2020-11-11
ConvKB: A Novel Embedding Model for KB Completion Based on CNN
Prerequisites for this post: ConvE, Conv1d. 2021.03.15: noted that weight sharing does not actually appear in the source code. This post is a reading note on the paper A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network…
2020-11-07
InteractE: Improving Convolution-based KGE by Increasing Feature Interactions
Prerequisites for this post: ConvE, Depth-wise Convolution. 2020.11.14: supplemented parts of the experiments…
2020-11-06
CoLAKE: Contextualized Language and Knowledge Embedding
Prerequisites for this post: BERT, Self-Attention. 2020.11.11: figured out the most critical part of how CoLAKE is trained.
2020-10-29
AcrE: Atrous Convolution and Residual Embedding
Prerequisites for this post: atrous (dilated) convolution, residual connections. This post is a reading note on the paper Knowledge Graph Embedding with Atrous Convolution and Residual Learning…
2020-10-27
Transformer-XL and XLNet
Prerequisites for this post: Transformer (Masked Self-Attention and FFN), BERT (as a comparison with XLNet), Seq2Seq (AutoRegressive & AutoEncoding). 2020.10.2…
2020-10-14
ELMo, GPT, BERT
Prerequisites for this post: RNN, Transformer, Language Model. This post introduces the architectures of ELMo, GPT, and BERT, along with my own understanding; heavy on figures. Introduction: since the field of NLP…
2020-10-04
Learning PyTorch: Advanced Tensor Operations
2020.10.03: corrected the description of gather after a torch version update. 2021.03.11: updated the description of gather again. The content and its ordering follow Longlong's Deep Learning and PyTorch hands-on introductory course…
2020-10-03
Learning PyTorch: Basic Tensor Operations
The content and its ordering follow Longlong's Deep Learning and PyTorch hands-on introductory course, trimmed or expanded according to my own needs. If you want to build new modules of your own, these operations are fundamentals that need to be mastered solidly. Tensor data types: the table below is taken from Py…
2020-10-02