
Better Performance with Transformer: CPPFormer in Precise Prediction of Cell-Penetrating Peptides


Abstract

With its superior performance, the Transformer model, built on the 'Encoder-Decoder' paradigm, has become the mainstream approach in natural language processing. At the same time, bioinformatics has embraced machine learning and made great progress in drug design and protein property prediction. Cell-penetrating peptides (CPPs) are a class of membrane-permeable peptides that can serve as 'postmen', carrying cargo across cell membranes in drug delivery tasks. However, only a small number of CPPs have been discovered so far, and even fewer have found practical application in drug delivery. Correctly identifying CPPs therefore opens up a new way to bring macromolecules into cells without introducing other potentially harmful materials into the drug. Most previous work relies only on conventional machine learning techniques and hand-crafted features to build simple classifiers. In CPPFormer, we adopt the attention structure of the Transformer, rebuild the network around the characteristics of CPPs (notably their short length), and combine an automatic feature extractor with a small set of manually engineered features to jointly guide the prediction. Compared with all previous methods and other classic text classification models, our empirical results show that the proposed deep model achieves the best performance, 92.16% accuracy on the CPP924 dataset, and passes a range of evaluation metrics.

Copyright © Bentham Science Publishers.
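As a rough illustration of the approach described in the abstract, below is a minimal sketch (not the authors' implementation) of a Transformer-encoder classifier for short peptide sequences that fuses a pooled attention representation with a handful of hand-crafted features. All layer sizes, the maximum sequence length, and the assumption of eight hand-crafted features are illustrative choices, not details taken from the paper.

```python
# Illustrative sketch only: a Transformer-encoder peptide classifier that
# combines automatically learned sequence features with hand-crafted ones.
# Hyperparameters and the fusion scheme are assumptions for demonstration.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # 20 standard residues
PAD_IDX = 0                            # index 0 reserved for padding
VOCAB_SIZE = len(AMINO_ACIDS) + 1      # +1 for the padding token
MAX_LEN = 61                           # CPPs are short; assumed length cap

class CPPClassifier(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_handcrafted=8):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, d_model, padding_idx=PAD_IDX)
        self.pos = nn.Embedding(MAX_LEN, d_model)       # learned positions
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=128,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Fuse the pooled sequence representation with hand-crafted features.
        self.head = nn.Sequential(
            nn.Linear(d_model + n_handcrafted, 64), nn.ReLU(),
            nn.Linear(64, 2))                            # CPP vs. non-CPP

    def forward(self, tokens, handcrafted):
        # tokens: (batch, seq_len) integer-encoded residues, 0 = padding
        # handcrafted: (batch, n_handcrafted), e.g. length, net charge, ...
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        pad_mask = tokens.eq(PAD_IDX)                    # True where padded
        x = self.encoder(x, src_key_padding_mask=pad_mask)
        x = x.masked_fill(pad_mask.unsqueeze(-1), 0.0)   # zero out padding
        pooled = x.sum(dim=1) / (~pad_mask).sum(dim=1, keepdim=True)
        return self.head(torch.cat([pooled, handcrafted], dim=-1))

# Usage with dummy data:
model = CPPClassifier()
toks = torch.randint(1, VOCAB_SIZE, (4, 30))   # 4 peptides of length 30
feats = torch.randn(4, 8)                      # 4 hand-crafted feature vectors
logits = model(toks, feats)                    # shape (4, 2)
```

The key design point mirrored from the abstract is the fusion step: the learned sequence representation and the manually engineered feature vector are concatenated before the final classification layers, so both sources of information co-direct the prediction.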
