GenMPI: Cluster Scalable Variant Calling for Short/Long Reads Sequencing Data

Journal Article (2025)
Author(s)

T. Ahmad (TU Delft - Computer Engineering)

J. Schuchart (The University of Tennessee Knoxville)

Z. Al Ars (TU Delft - Computer Engineering)

C. Niethammer (High-Performance Computing Center Stuttgart)

J. Gracia (High-Performance Computing Center Stuttgart)

H. P. Hofstee (TU Delft - Computer Engineering)

Research Group
Computer Engineering
DOI
https://doi.org/10.1109/TCBBIO.2025.3595409
Publication Year
2025
Language
English
Journal title
IEEE Transactions on Computational Biology and Bioinformatics

Abstract

Rapid advancements in sequencing technologies allow the production of cost-effective, high-volume sequencing data. Processing this data for real-time clinical diagnosis can be prohibitively time-consuming on a single computing node. This work presents a complete variant calling workflow, implemented using the Message Passing Interface (MPI) to leverage the benefits of high-bandwidth interconnects. The resulting solution (GenMPI) is portable and flexible: it can be deployed on any private or public cluster/cloud infrastructure, and any alignment or variant calling application can be used with minimal adaptation. To achieve high performance, compressed input data can be streamed in parallel to alignment applications, while uncompressed data can use internal file-seek functionality, eliminating the bottleneck of streaming input data from a single node. Alignment output can be stored directly in multiple chromosome-specific SAM files or in a single SAM file. After alignment, a distributed queue built on MPI RMA (Remote Memory Access) atomic operations feeds the sorting, indexing, duplicate marking (if necessary) and variant calling applications. We verify that the resulting variants match those of the original single-node methods. We also show that for 300x coverage data, alignment scales almost linearly up to 64 nodes (8192 CPU cores). Overall, GenMPI outperforms existing big-data-based workflows by a factor of two and is almost 20% faster for alignment than other MPI-based implementations, without any extra memory overhead. Sorting, indexing, duplicate removal and variant calling also scale up to an 8-node cluster. For paired-end short-read (Illumina) data, we integrated the BWA-MEM aligner and three variant callers (GATK HaplotypeCaller, DeepVariant and Octopus), while for long-read data, we integrated the Minimap2 aligner and three variant callers (DeepVariant, DeepVariant with WhatsHap for phasing (PacBio), and Clair3 (ONT)).
All codes and scripts are available at: https://github.com/abs-tudelft/gen-mpi

Metadata only record. There are no files for this record.