Post-translational modifications (PTMs) regulate protein function and cell signaling, yet accurate residue-level prediction remains challenging across diverse PTM types. CLASPP is a contrastively learned, attention-based model that integrates protein sequence embeddings with PTM-specific stratified heads. Contrastive pretraining pushes apart the embeddings of modified and unmodified residue contexts, and multi-head attention highlights sequence motifs while providing residue-level saliency for interpretation. CLASPP delivers calibrated probabilities, covers both common and rare PTMs, and generalizes under strict cross-protein splits. In benchmarks across phosphorylation, acetylation, ubiquitination, and glycosylation tasks, CLASPP matches or exceeds strong baselines while producing interpretable attention maps. We release model weights, training code, and an API for large-scale screening.
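The contrastive pretraining objective mentioned above can be illustrated with a minimal InfoNCE-style sketch: each modified residue context is pulled toward its matching positive view while all other contexts in the batch serve as negatives. This is an illustrative assumption of the general technique, not CLASPP's exact formulation; the function name `info_nce_loss` and the in-batch negative scheme are hypothetical.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss (illustrative sketch, not CLASPP's exact loss).

    Row i of `positives` is the positive view for row i of `anchors`;
    all other rows in the batch act as in-batch negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Mean negative log-probability of the true (diagonal) pairs
    return -np.mean(np.diag(log_prob))
```

When positive pairs are well-separated from the rest of the batch, the loss approaches zero; poorly separated embeddings are penalized, which is what drives modified and unmodified contexts apart during pretraining.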
