# DeepMiRT: miRNA Target Prediction with RNA Foundation Models
Predict miRNA-target interactions using RNA-FM embeddings and cross-attention. DeepMiRT ranks #1 on the miRBench eCLIP benchmarks (AUROC ≈ 0.75) and achieves AUROC 0.96 on our 813K-sample test set.
Paper: coming soon | GitHub: DeepMiRT | Model: Hugging Face
Upload a CSV file whose column names contain the substrings `mirna` and `target`.
Example format:
| mirna_seq | target_seq |
|---|---|
| UGAGGUAGUAGGUUGUAUAGUU | ACUGCAGCAUAUCUACUAUUUGCUACUGUAACCAUUGAUCU |
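The column-matching rule above (any column name containing `mirna` or `target`) can be sketched with pandas; the helper name `find_column` and the example CSV are illustrative, not part of the released code.

```python
import pandas as pd
from io import StringIO

# Toy CSV in the expected upload format (column names are examples)
csv = StringIO(
    "mirna_seq,target_seq\n"
    "UGAGGUAGUAGGUUGUAUAGUU,ACUGCAGCAUAUCUACUAUUUGCUACUGUAACCAUUGAUCU\n"
)
df = pd.read_csv(csv)

def find_column(df: pd.DataFrame, key: str) -> str:
    """Return the first column whose name contains `key` (case-insensitive)."""
    matches = [c for c in df.columns if key in c.lower()]
    if not matches:
        raise ValueError(f"no column name contains '{key}'")
    return matches[0]

mirna_col = find_column(df, "mirna")
target_col = find_column(df, "target")
print(mirna_col, target_col)  # mirna_seq target_seq
```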
## Model Architecture
DeepMiRT uses a shared RNA-FM encoder (12-layer Transformer, pre-trained on 23M non-coding RNAs) to embed both miRNA and target sequences into the same representation space. A cross-attention module (2 layers, 8 heads) allows the target to attend to the miRNA, capturing interaction patterns. The attended representations are pooled and classified by an MLP head (640 → 256 → 64 → 1).
miRNA → [RNA-FM Encoder] → miRNA embedding ─────────┐
↓
Target → [RNA-FM Encoder] → target embedding → [Cross-Attention] → Pool → [MLP] → probability
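The cross-attention head described above can be sketched in PyTorch. The dimensions (640-dim embeddings, 2 layers, 8 heads, 640 → 256 → 64 → 1 MLP) follow the README, but the module names, mean pooling, and activation choices are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionHead(nn.Module):
    """Target tokens (queries) attend to miRNA tokens (keys/values)."""

    def __init__(self, dim: int = 640, heads: int = 8, layers: int = 2):
        super().__init__()
        self.attn = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True)
            for _ in range(layers)
        )
        # MLP head: 640 -> 256 -> 64 -> 1, as stated in the README
        self.mlp = nn.Sequential(
            nn.Linear(dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, target_emb: torch.Tensor, mirna_emb: torch.Tensor) -> torch.Tensor:
        x = target_emb
        for layer in self.attn:
            x, _ = layer(x, mirna_emb, mirna_emb)  # cross-attention
        pooled = x.mean(dim=1)  # mean-pool over target positions (assumption)
        return torch.sigmoid(self.mlp(pooled)).squeeze(-1)

head = CrossAttentionHead()
mirna = torch.randn(2, 22, 640)   # batch of 2 miRNAs, 22 nt each
target = torch.randn(2, 40, 640)  # batch of 2 target sites, 40 nt each
probs = head(target, mirna)
print(probs.shape)  # torch.Size([2])
```

In a full pipeline, `mirna` and `target` would be token embeddings produced by the shared RNA-FM encoder rather than random tensors.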
## Training
- Data: miRNA-target interactions from multiple databases and literature mining
- Two-phase training: Phase 1 (frozen backbone) → Phase 2 (unfreeze top 3 RNA-FM layers)
- Hardware: 2× NVIDIA L20 GPUs, mixed-precision (fp16)
- Best checkpoint: epoch 27, val AUROC = 0.9612
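The two-phase schedule (frozen backbone, then unfreezing the top 3 layers) can be sketched as follows, assuming the encoder exposes its Transformer blocks as `backbone.layers`; the toy backbone stands in for RNA-FM and all names are hypothetical.

```python
import torch.nn as nn

class ToyBackbone(nn.Module):
    """Stand-in for the 12-layer RNA-FM encoder."""
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(8, 8) for _ in range(12))

def set_phase(backbone: nn.Module, phase: int, unfreeze_top: int = 3) -> None:
    for p in backbone.parameters():
        p.requires_grad = False                      # Phase 1: backbone fully frozen
    if phase == 2:
        for layer in list(backbone.layers)[-unfreeze_top:]:
            for p in layer.parameters():
                p.requires_grad = True               # Phase 2: top 3 layers trainable

bb = ToyBackbone()
set_phase(bb, 1)
print(sum(p.requires_grad for p in bb.parameters()))  # 0
set_phase(bb, 2)
print(sum(p.requires_grad for p in bb.parameters()))  # 6 (3 layers x weight+bias)
```

In practice one would rebuild the optimizer (often with a lower learning rate for the unfrozen layers) when switching to Phase 2.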
## Performance
| Benchmark | AUROC | Rank |
|---|---|---|
| miRBench eCLIP (Klimentova 2022) | 0.7511 | #1/12 |
| miRBench eCLIP (Manakov 2022) | 0.7543 | #1/12 |
| miRBench CLASH (Hejret 2023) | 0.6952 | #5/12 |
| Our test set (813K samples, 16 methods) | 0.9606 | #1/16 |
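For reference, the AUROC values in the table are computed from predicted probabilities against binary interaction labels; a minimal scikit-learn example on made-up toy scores:

```python
from sklearn.metrics import roc_auc_score

# Toy labels (1 = true miRNA-target interaction) and model scores;
# these values are invented for illustration only.
y_true = [0, 0, 1, 1, 0, 1]
y_score = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]

auroc = roc_auc_score(y_true, y_score)
print(round(auroc, 4))  # 0.8889
```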
## Citation
If you use DeepMiRT in your research, please cite:

```bibtex
@software{liu2026deepmirt,
  title  = {DeepMiRT: miRNA Target Prediction with RNA Foundation Models},
  author = {Liu, Zicheng},
  year   = {2026},
  url    = {https://github.com/zichengll/DeepMiRT}
}
```
## License
MIT License. See LICENSE.