CHM-EVAL
Evaluate a given VCF file with chm-eval to benchmark variant calling.
URL: https://github.com/lh3/CHM-eval
Example
This wrapper can be used in the following way:
rule chm_eval:
    input:
        kit="resources/chm-eval-kit",
        vcf="{sample}.vcf",
    output:
        summary="chm-eval/{sample}.summary",  # summary statistics
        bed="chm-eval/{sample}.err.bed.gz",  # bed file with errors
    params:
        extra="",
        build="38",
    log:
        "logs/chm-eval/{sample}.log",
    wrapper:
        "v5.6.1-7-g2ff6d79/bio/benchmark/chm-eval"
Note that input, output and log file paths can be chosen freely.
When running with
snakemake --use-conda
the software dependencies will be automatically deployed into an isolated environment before execution.
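For example, to produce the summary for a hypothetical sample named NA12878 (the sample name is illustrative), request the rule's output file directly and Snakemake will resolve the {sample} wildcard:

snakemake --use-conda --cores 1 chm-eval/NA12878.summary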
Software dependencies
perl=5.32.1
Input/Output
Input:
kit: Path to annotation directory
vcf: Path to VCF to evaluate (can be gzipped)
Output:
summary: Path to statistics and evaluations
bed: Path to list of errors (BED formatted)
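The kit input must point to an unpacked CHM-eval kit directory. A minimal download-rule sketch follows, assuming the release tarball wraps a single top-level directory; the tarball URL is a placeholder to be taken from the CHM-eval releases page (https://github.com/lh3/CHM-eval/releases), and this rule is illustrative rather than part of the wrapper:

rule get_chm_eval_kit:
    output:
        directory("resources/chm-eval-kit"),
    shell:
        # Placeholder URL: substitute a kit release tarball from the CHM-eval releases page.
        "mkdir -p {output} && "
        "curl -L <kit-tarball-url> | tar -xf - --strip-components=1 -C {output}"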
Params
build: Genome build. Either 37 or 38.
extra: Optional parameters passed to run-eval, besides -g and -o (which are set by the wrapper)
Code
__author__ = "Johannes Köster"
__copyright__ = "Copyright 2020, Johannes Köster"
__email__ = "johannes.koester@uni-due.de"
__license__ = "MIT"

from snakemake.shell import shell

log = snakemake.log_fmt_shell(stdout=False, stderr=True)

kit = snakemake.input.kit
vcf = snakemake.input.vcf
build = snakemake.params.build
extra = snakemake.params.get("extra", "")

# run-eval derives all of its output names from a common prefix, so the
# declared summary output must end with ".summary"; strip that suffix
# (8 characters) to obtain the prefix.
if not snakemake.output[0].endswith(".summary"):
    raise ValueError("Output file must end with .summary")
out = snakemake.output[0][:-8]

# run-eval prints a shell script to stdout; piping it to sh performs the
# actual evaluation. Stderr is redirected to the log file.
shell("({kit}/run-eval -g {build} -o {out} {extra} {vcf} | sh) {log}")