PURE: A Frustratingly Easy Approach for Entity and Relation Extraction
Reading notes and personal understanding of the paper "A Frustratingly Easy Approach for Entity and Relation Extraction".
2021-12-01 · Deep Learning · ERE

Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders
Reading notes and personal understanding of the paper "Two are Better than One: Joint Entity and Relation Extraction with Table-Sequence Encoders".
2021-11-16 · Deep Learning · ERE

TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking
Reading notes and personal understanding of the paper "TPLinker: Single-stage Joint Extraction of Entities and Relations Through Token Pair Linking".
2021-10-22 · Deep Learning · RTE

CasRel: A Novel Cascade Binary Tagging Framework for Relational Triple Extraction
Reading notes and personal understanding of the paper "A Novel Cascade Binary Tagging Framework for Relational Triple Extraction".
2021-10-04 · Deep Learning · RTE

A Unified MRC Framework for Named Entity Recognition
Prerequisites: BERT (see the post "ELMo, GPT, BERT"). Reading notes and personal understanding of the paper "A Unified MRC Framework for Named Entity Recognition".
2021-09-30 · Deep Learning · NER

HypE: Knowledge Hypergraph Completion
Reading notes and personal understanding of the paper "Knowledge Hypergraphs: Prediction Beyond Binary Relations".
2021-09-20 · Deep Learning · KGE

PyTorch Implementation: VAE
Prerequisites: VAE fundamentals (see the post "Introduction to Variational Auto-Encoders"). A PyTorch implementation of VAE, with a visualization of VAE-generated samples at the end. The code is available on Colab and can be reproduced by enabling the GPU runtime (requires a VPN from mainland China).
2021-07-10 · Deep Learning · VAE · PyTorch

Introduction: Variational Auto-Encoder
The variational auto-encoder (VAE) is a deep generative model built on the auto-encoder architecture. This post does not explore the deeper mathematical principles behind VAE.
2021-07-09 · Deep Learning · VAE

Knowledge Distillation: Distilling the Knowledge in a Neural Network
Reading notes and personal understanding of the paper "Distilling the Knowledge in a Neural Network".
2021-07-03 · Deep Learning · KD

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
Prerequisites: BERT (see the post "ELMo, GPT, BERT"). Reading notes and personal understanding of the paper "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations".
2021-06-29 · Deep Learning · NLP · BERT

UniLM: Unified Language Model Pre-training for Natural Language Understanding and Generation
Prerequisites: BERT (see the post "ELMo, GPT, BERT"). Reading notes and personal understanding of the paper "UniLM: Unified Language Model Pre-training for Natural Language Understanding and Generation".
2021-06-18 · Deep Learning · NLP · BERT

MASS: Masked Sequence to Sequence Pre-training for Language Generation
Prerequisites: BERT (see the post "ELMo, GPT, BERT") and Transformer (see the post "Transformer in Detail"). Reading notes and personal understanding of the paper "MASS: Masked Sequence to Sequence Pre-training for Language Generation".
2021-06-08 · Deep Learning · NLP · BERT