Genome Analysis Toolkit* (GATK*)

A software package developed at the Broad Institute to analyze next-generation sequencing data

Infrastructure for Deploying GATK Best Practices Pipeline

The Broad Institute GATK Best Practices pipeline has helped standardize genomic analysis by providing step-by-step recommendations for pre-processing and variant discovery. Pre-processing generates analysis-ready mapped reads from raw reads using tools such as BWA*, Picard* tools, and the Genome Analysis Toolkit. These analysis-ready reads are then passed through the Variant Calling step of Variant Discovery to produce variants for each sample. The first part of the GATK Best Practices pipeline takes two FASTQ files, a reference genome, and the dbSNP and 1000g_indels VCF files as input, and outputs one gVCF file per sample. These gVCF files are then further analyzed in the Joint Genotyping and Variant Filtering steps of Variant Discovery.
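As a rough sketch, the per-sample pre-processing stages named above can be laid out as the command lines below. The invocations follow GATK 3.x-era syntax, and all file names, paths, known-sites files, and thread counts are illustrative assumptions; consult the linked optimization recipe for the exact versions and flags used in this study.

```python
# Illustrative sketch of the per-sample pre-processing stages (GATK 3.x-era
# syntax). File names, paths, and thread counts are assumptions, not the
# exact configuration benchmarked in this study.

def preprocessing_commands(sample: str, ref: str, threads: int = 36) -> list[str]:
    """Return the ordered command lines from raw FASTQs to a per-sample gVCF."""
    gatk = f"java -jar GenomeAnalysisTK.jar -R {ref}"
    return [
        # 1. Map raw paired-end reads to the reference with BWA-MEM.
        f"bwa mem -t {threads} -M {ref} {sample}_R1.fastq {sample}_R2.fastq > {sample}.sam",
        # 2. Sort and mark duplicates with Picard tools.
        f"java -jar picard.jar SortSam I={sample}.sam O={sample}.sorted.bam SORT_ORDER=coordinate",
        f"java -jar picard.jar MarkDuplicates I={sample}.sorted.bam O={sample}.dedup.bam M={sample}.metrics.txt",
        # 3. Base quality score recalibration against known sites (dbSNP, 1000g indels).
        f"{gatk} -T BaseRecalibrator -I {sample}.dedup.bam "
        f"-knownSites dbsnp.vcf -knownSites 1000g_indels.vcf -o {sample}.recal.table",
        f"{gatk} -T PrintReads -I {sample}.dedup.bam -BQSR {sample}.recal.table -o {sample}.recal.bam",
        # 4. Call variants per sample, emitting a gVCF for later joint genotyping.
        f"{gatk} -T HaplotypeCaller -I {sample}.recal.bam -ERC GVCF -o {sample}.g.vcf",
    ]

for cmd in preprocessing_commands("Solexa-272221", "ref.fasta"):
    print(cmd)
```

Each stage consumes the previous stage's output, which is why the per-sample portion of the pipeline benefits so directly from thread-level parallelism within each tool.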

The tools in the GATK Best Practices pipeline are computationally demanding and can run for long periods of time. Benchmarking such a pipeline lets users determine suitable hardware and tune parameters to reduce execution time. To advance the standardization and optimization of genomic pipelines, Intel has benchmarked the GATK Best Practices pipeline using Workflow Profiler, an open-source tool that reports system resource usage (such as CPU and disk utilization and committed memory) and helps identify and eliminate resource bottlenecks.

Performance Results

By using the recommended hardware and applying thread-level and process-level optimizations to the single-sample Solexa-272221 WGS* dataset, we achieve different levels of performance. The chart to the right shows how execution time scales with the number of threads and processes across the pipeline components. For this dataset, every component shows a decrease in run time when going from 1 to 36 threads. Overall, the end-to-end execution time from BWA-MEM* through HaplotypeCaller dropped from 227 hours to 36 hours, a roughly 6x speed-up¹. These performance guidelines can be used to size genomics clusters running GATK Best Practices pipelines.
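As a quick check on the reported numbers, the speed-up works out as below. The parallel-efficiency figure is our own inference from the two reported run times, not a number stated in the study:

```python
# Scaling figures reported in the text: 227 hours at 1 thread,
# 36 hours at 36 threads (BWA-MEM through HaplotypeCaller).
t_1_thread = 227.0   # hours
t_36_threads = 36.0  # hours

speedup = t_1_thread / t_36_threads  # ~6.3x, consistent with the "6x" claim
efficiency = speedup / 36            # implied parallel efficiency at 36 threads

print(f"speed-up: {speedup:.1f}x, parallel efficiency: {efficiency:.0%}")
```

The sub-linear scaling (well under 36x at 36 threads) is typical of this pipeline: some stages are I/O-bound or only partially parallelized, which is exactly the kind of bottleneck a tool like Workflow Profiler is meant to surface.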

This benchmarking study provides Intel® hardware recommendations and guidelines for running a set of whole genome sequences through the GATK Best Practices pipeline. Researchers who plan to run this pipeline on many datasets can use this paper to scale the number of machines to match the number of datasets to be analyzed. For example, an institution that needs to analyze 100 whole genome sequences (WGS) per month would need about 5 machines (each with 36 cores) running in parallel to achieve this goal.

Download the code ›

Reproduce these results with this optimization recipe ›

Size and scale your infrastructure according to your workloads with the GATK Reference Architecture ›



Product and Performance Information


Benchmark results were obtained prior to implementation of recent software patches and firmware updates intended to address exploits referred to as "Spectre" and "Meltdown". Implementation of these updates may make these results inapplicable to your device or system.

Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit
