1. "Multifaceted protein–protein interaction prediction based on Siamese residual RCNN"

1.1 Challenges of the PPI task:

(1) Representing a protein requires a model that can effectively filter and aggregate its local features while preserving the important contextual and sequential amino-acid information.
(2) Scaling up deep neural architectures often leads to an inefficient learning process and suffers from the notorious vanishing-gradient problem.
(3) An effective mechanism is also needed to capture the mutual influence of the two proteins in a pair for PPI prediction. In addition, the framework must be scalable to large data (how long are the sequences our task needs to handle?) and generalizable to different prediction tasks.

1.2 The authors' summary of their work

(1) They train an end-to-end network, PIPR, which reduces the user's data pre-processing effort.
PIPR requires only the primary protein sequences as the input, and is trained to automatically preserve the critical features from the sequences.
To add:

(2) They highlight the need to consider both contextualized and sequential information in PPI tasks. (That is, both sequential and local information matter, and the model we are currently considering does not yet incorporate local information.)
(3) The architecture of PIPR can be flexibly used to address different PPI tasks.
(4) This work also predicts binding affinity, performs well on it, and is responsive to subtle sequence variations.

1.3 How the PPI task differs from NLP tasks

(1) Sequences
In contrast to sentences, proteins are profiled in sequences with more intractable patterns, as well as in a drastically larger range of lengths.
(2)Precisely capturing the PPI requires much more comprehensive learning architectures to distill the latent information from the entire sequences, and to preserve the long-term ordering information.

1.4 Development of deep-learning-based methods for the PPI task:

(1) The first work was based on a deep CNN.
One recent work (Hashemifar et al., 2018), DPPI, uses a deep CNN-based architecture which focuses on capturing local features from protein profiles. DPPI represents the first work to deploy deep learning to PPI prediction, and has achieved the state-of-the-art performance on the binary prediction task. However, it requires excessive efforts for data pre-processing such as constructing protein profiles by PSI-BLAST (Altschul et al., 1997), and does not incorporate a neural learning architecture that captures the important contextualized and sequential features.

(2)DNN-PPI (Li et al., 2018) represents another relevant work of this line, which deploys a different learning structure with two separated CNN encoders. However, DNN-PPI does not incorporate physicochemical properties into amino acid representations, and does not employ a Siamese learning architecture to fully characterize pairwise relations of sequences.

1.5 Method overview

(1) Pre-trained amino-acid embeddings

(Could this be used in our task?)
I think so: without pre-trained embeddings, protein profiles would have to be constructed with PSI-BLAST, which is cumbersome and time-consuming.
Moreover,

Each embedding vector is a concatenation of two sub-embeddings:
(1) The first part, a_c, measures the co-occurrence similarity of the amino acids, and is obtained by pre-training a Skip-Gram model.
(2) The second part, a_ph, represents the similarity of electrostaticity and hydrophobicity among amino acids. The 20 amino acids (so there are 20 standard amino acids in total!) can be clustered into 7 classes based on the dipoles and volumes of their side chains to reflect this property. Thus, a_ph is a one-hot encoding based on the classification defined by Shen et al. (2007).
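A minimal numpy sketch of this embedding construction. The co-occurrence part a_c is faked with random vectors standing in for pre-trained Skip-Gram embeddings, and the 7-class assignment is a placeholder, not the published Shen et al. (2007) table:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids
CO_OCC_DIM = 5                        # assumed Skip-Gram dimension (placeholder)

rng = np.random.default_rng(0)
# Random stand-ins for pre-trained Skip-Gram co-occurrence embeddings a_c.
skipgram = {aa: rng.standard_normal(CO_OCC_DIM) for aa in AMINO_ACIDS}
# Hypothetical 7-class grouping by dipole / side-chain volume (placeholder).
CLASS_OF = {aa: i % 7 for i, aa in enumerate(AMINO_ACIDS)}

def embed(seq):
    """Return a (len(seq), CO_OCC_DIM + 7) matrix: [a_c | one-hot a_ph] per residue."""
    rows = []
    for aa in seq:
        a_ph = np.zeros(7)
        a_ph[CLASS_OF[aa]] = 1.0                      # one-hot physicochemical class
        rows.append(np.concatenate([skipgram[aa], a_ph]))
    return np.stack(rows)

E = embed("MKTA")
print(E.shape)  # (4, 12)
```

Each row carries both the learned co-occurrence signal and the discrete physicochemical class, which is all the downstream encoder needs as input.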

(2) RCNN

① CNN layer:
Max pooling discretizes the convolution results and preserves the most significant features within each n-stride. By definition, this mechanism divides the size of the processed features by n. The outputs of the max pooling are fed into the bidirectional gated recurrent units in the RCNN encoder.
② Residual GRU layer
A question: should the residual mechanism also be applied in our work?
“In our development, we have found that the residual mechanism is able to drastically simplify the training process, and largely decreases the epochs of parameter updates for the model to converge.”

These units are stacked several times; the output of the last GRU layer is passed through one more CNN layer and a pooling layer to obtain the final high-level sequence embedding of the entire protein sequence.
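The convolution + n-stride max-pooling + residual pattern can be sketched in numpy. The filter weights are random placeholders, and the residual step wraps a simple tanh transformation rather than a bidirectional GRU, so this only illustrates the tensor shapes and the length-division by n:

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution of x (L, d_in) with filters w (k, d_in, d_out)."""
    k = w.shape[0]
    L = x.shape[0] - k + 1
    return np.stack([np.tensordot(x[i:i + k], w, axes=([0, 1], [0, 1]))
                     for i in range(L)])

def max_pool(h, n):
    """n-stride max pooling: keeps the strongest feature per window,
    dividing the sequence length by n."""
    L = (h.shape[0] // n) * n
    return h[:L].reshape(-1, n, h.shape[1]).max(axis=1)

def residual_step(h):
    """Additive residual around a placeholder transformation; PIPR wraps its
    recurrent layers with a residual connection (sketch only)."""
    return h + np.tanh(h)

rng = np.random.default_rng(0)
x = rng.standard_normal((60, 12))    # e.g. 60 residues, 12-dim embeddings
w = rng.standard_normal((3, 12, 8))  # kernel size 3, 8 output channels
h = residual_step(max_pool(conv1d(x, w), n=2))
print(h.shape)                       # (29, 8): 60-3+1 = 58 after conv, 58//2 = 29
```

Stacking several such units, as the paper does, repeatedly halves the sequence length while deepening the features.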

(3) Siamese architecture

(4) Loss functions
① Classification tasks use a cross-entropy loss.

Question: where is an MLP used? Isn't the prediction made directly from the element-wise product of the two protein sequence embeddings?
② Regression tasks use a mean squared loss.
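The two objectives can be written out directly (a minimal numpy sketch, not PIPR's actual training code):

```python
import numpy as np

def cross_entropy(probs, label):
    """Classification objective: negative log-likelihood of the true class,
    given a vector of predicted class probabilities."""
    return -np.log(probs[label])

def mse(pred, target):
    """Regression objective for affinity prediction: mean squared error."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean((pred - target) ** 2))

print(cross_entropy(np.array([0.25, 0.75]), 1))  # -log(0.75) ≈ 0.2877
print(mse([1.0, 2.0], [1.0, 4.0]))               # 2.0
```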

1.6 Datasets

(1)

Question: if we want to compare against other methods, do we also need to use the Yeast dataset?
(2)

1.7 Experimental details



Question: it uses cross-validation (as did much earlier work). If our dataset is not large enough, should we use it as well?
To add: notes on cross-validation.

2. "Sequence-based prediction of protein-protein interactions: a structure-aware interpretable deep learning model"

2.1 Aspects similar to our task:

Our key conceptual advance is that a well-matched combination of input featurization and model architecture allow for the model to be trained solely from sequence data, supervised only with a binary interaction label, and yet produce an intermediate representation that substantially captures the structural mechanism of interaction between the protein pair.

Using Bepler and Berger's [12] pre-trained language model, we construct informative protein embeddings (worth studying closely) that are endowed with structural information about each of the proteins. The internal representation of our model uses these features to explicitly encode the intuition that a physical interaction between two proteins requires that a subset of the residues in each protein be in contact with the other protein.

We note that the use of Bepler and Berger's pre-trained model allows us to indirectly benefit from the rich data on 3-D structures of individual proteins. In contrast, a PPI prediction method that was directly supervised with 3-D structures of protein complexes, in order to learn the physical mechanism of interaction, would need to contend with the relatively small size of that corpus [14–16].
Question: could we use this pre-trained model?

D-SCRIPT, like other recent successful deep learning methods PIPR and DPPI [17, 20], belongs to the class of methods that perform PPI prediction from protein amino acid sequence alone, in contrast to a different class of highly successful PPI prediction methods based on network information

2.2 Performance of this model:

(1) Its advantage shows mainly in cross-species settings, i.e. for PPIs that are rarely represented in the training set.
We find, as expected, that state-of-the-art PIPR substantially outperforms D-SCRIPT when predicting interactions between proteins that have many PPI examples in the training set, but the situation is reversed for proteins with a paucity of PPI interactions in the training set. A simple hybrid method that jointly incorporates the confidence of each method performs best of all.

Among sequence-based methods, D-SCRIPT's strength is in its greater cross-species generalizability and more accurate predictions in cases where the existing training data is sparse.
(2) On evaluating the physical plausibility of the intermediate contact map representation, we remarkably find that the map partially discovers the structural mechanism of an interaction despite the model having been trained only on sequence data.

2.3 Task definition (model inputs and outputs)

2.4 Method overview


Another novelty of this paper: after obtaining embeddings in the first stage, the second stage first predicts the residue-level interactions between the amino acids of the two protein sequences.

(1) Obtaining the protein embeddings
The method of "Bepler, T. & Berger, B. Learning protein sequence embeddings using information from structure. In 7th International Conference on Learning Representations, ICLR 2019 (2019)" is used. This is a Bi-LSTM-based pre-trained model.
The authors compare these embeddings with PIPR's embeddings, and also evaluate two other embedding methods:

(2) Projecting the two protein sequences to the same dimensionality

(3) Residue Contact Module
Suppose protein A has length m and protein B has length n. This module produces an m × n matrix whose entries lie in (0, 1), each representing the probability that the corresponding pair of amino acids, one from each protein, is in contact.
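A simplified stand-in for the contact module: from per-residue features of protein A (m × d) and protein B (n × d), produce an m × n matrix of contact probabilities. The real module uses a learned broadcast combination of the two feature maps followed by convolutions; a sigmoid over inner products only illustrates the shape and range of the output:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contact_map(A, B):
    """Toy contact predictor: (m, d) x (n, d) -> (m, n), entries in (0, 1).
    Stands in for D-SCRIPT's learned residue contact module."""
    return sigmoid(A @ B.T)

rng = np.random.default_rng(0)
C = contact_map(rng.standard_normal((5, 8)),   # protein A: m=5 residues
                rng.standard_normal((7, 8)))   # protein B: n=7 residues
print(C.shape)  # (5, 7)
```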

(4) Interaction Prediction Module
The final probability p that the two protein sequences form a complex is computed from the contact matrix obtained in (3).
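One way to picture this aggregation step is to collapse the m × n contact map into a single probability by taking its strongest entry. D-SCRIPT's actual aggregation is more elaborate (it sparsifies the map before pooling), so this is only an illustrative sketch:

```python
import numpy as np

def interaction_prob(contact_map):
    """Toy aggregation: the interaction probability p is the strongest
    predicted residue-residue contact. If the map entries are probabilities,
    p is automatically in (0, 1)."""
    return float(np.asarray(contact_map).max())

C = np.array([[0.10, 0.20],
              [0.05, 0.90]])
print(interaction_prob(C))  # 0.9
```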

2.5 Detailed comparison with PIPR
