CHM-EVAL

Evaluate a given VCF file with chm-eval (https://github.com/lh3/CHM-eval) to benchmark variant calling.

Software dependencies

  • perl =5.26

Example

This wrapper can be used in the following way:

rule chm_eval:
    input:
        kit="resources/chm-eval-kit",
        vcf="{sample}.vcf"
    output:
        summary="chm-eval/{sample}.summary", # summary statistics
        bed="chm-eval/{sample}.err.bed.gz" # bed file with errors
    params:
        extra="",
        build="38"
    log:
        "logs/chm-eval/{sample}.log"
    wrapper:
        "0.67.0/bio/benchmark/chm-eval"

Note that input, output and log file paths can be chosen freely, as long as the summary output ends in .summary; the wrapper derives the output prefix from it, and the remaining files (e.g. the .err.bed.gz error BED) are written next to it under the same prefix. When running with

snakemake --use-conda

the software dependencies will be automatically deployed into an isolated environment before execution.
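
The kit input must point to an unpacked CHM-eval kit. One way to provide it is the companion bio/benchmark/chm-eval-kit wrapper from the same repository; the following is only a sketch, and the tag/version parameters are assumptions that should be checked against the releases listed at https://github.com/lh3/CHM-eval/releases:

rule chm_eval_kit:
    output:
        directory("resources/chm-eval-kit")
    params:
        # Release tag and kit version to download (assumed values, see the
        # CHM-eval releases page for the release you want to use).
        tag="v0.5",
        version="20180222",
    log:
        "logs/chm-eval-kit.log"
    wrapper:
        "0.67.0/bio/benchmark/chm-eval-kit"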

Authors

  • Johannes Köster

Code

__author__ = "Johannes Köster"
__copyright__ = "Copyright 2020, Johannes Köster"
__email__ = "johannes.koester@uni-due.de"
__license__ = "MIT"

from snakemake.shell import shell

# Capture only stderr in the log; stdout carries the commands generated by run-eval.
log = snakemake.log_fmt_shell(stdout=False, stderr=True)

kit = snakemake.input.kit
vcf = snakemake.input.vcf
build = snakemake.params.build
extra = snakemake.params.get("extra", "")

# run-eval derives all of its output file names from a common prefix, so the
# summary output must end with ".summary"; strip that suffix to obtain the prefix.
if not snakemake.output[0].endswith(".summary"):
    raise ValueError("Output file must end with .summary")
out = snakemake.output[0][: -len(".summary")]

# run-eval prints a shell script to stdout; pipe it into sh to execute the evaluation.
shell("({kit}/run-eval -g {build} -o {out} {extra} {vcf} | sh) {log}")
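
For the example rule above, the formatted shell command is equivalent to the following (illustrated for a hypothetical sample named A; extra is empty and therefore omitted):

(resources/chm-eval-kit/run-eval -g 38 -o chm-eval/A A.vcf | sh) 2> logs/chm-eval/A.log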