DaNing
CasRel: A Novel Cascade Binary Tagging Framework for Relational Triple Extraction
This post is a reading note on and my understanding of the paper A Novel Cascade Binary Tagging Framework f…
2021-10-04
A Unified MRC Framework for Named Entity Recognition
Prerequisites: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper A Unified MRC Framework fo…
2021-09-30
HypE: Knowledge Hypergraph Completion
This post is a reading note on the paper Knowledge Hypergraphs: Prediction Beyond Binary Relations…
2021-09-20
PyTorch Implementation: VAE
Prerequisites: VAE basics (see Introduction: Variational Auto-Encoder). This post is a PyTorch implementation of VAE, with a visualization of VAE-generated samples at the end. The code is available on Colab and can be reproduced by enabling the GPU runtime (requires…
2021-07-10
Introduction: Variational Auto-Encoder
The Variational Auto-Encoder (VAE) is a deep generative model built on the autoencoder architecture. This post does not explore the deeper mathematical theory behind VAE,…
2021-07-09
Knowledge Distillation: Distilling the Knowledge in a Neural Network
This post is a reading note on and my understanding of the paper Distilling the Knowledge in a Neural Network. Basic Idea: Existing machine…
2021-07-03
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Prerequisites: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper AL…
2021-06-29
UniLM: Unified Language Model Pre-training for Natural Language Understanding and Generation
Prerequisites: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper UniLM: Unified Language Model Pre-training for Natural Language Understanding and G…
2021-06-18
MASS: Masked Sequence to Sequence Pre-training for Language Generation
Prerequisites: BERT (see ELMo, GPT, BERT); Transformer (see the in-depth Transformer post). This post is a reading note on the paper MASS: Masked Sequence to Sequence Pre-training for La…
2021-06-08
SpanBERT: Improving Pre-training by Representing and Predicting Spans
Prerequisites: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper SpanBERT:…
2021-05-13
StructBERT: Incorporating Language Structures into Pre-training for Deep Language Understanding
Prerequisites: BERT (see ELMo, GPT, BERT). This post is a reading note on the paper StructBERT: Incorporating Language Structures into Pre-training for Deep Language U…
2021-05-04
BART and mBART
Prerequisites: Transformer (see the in-depth Transformer post); BERT, GPT (see ELMo, GPT, BERT). This post is a reading note on and my understanding of the following papers: BART: Denoising…
2021-04-26