CHM-EVAL


Evaluate a given VCF file with CHM-eval to benchmark variant calling.

URL: https://github.com/lh3/CHM-eval

Example

This wrapper can be used in the following way:

rule chm_eval:
    input:
        kit="resources/chm-eval-kit",
        vcf="{sample}.vcf",
    output:
        summary="chm-eval/{sample}.summary",  # summary statistics
        bed="chm-eval/{sample}.err.bed.gz",  # bed file with errors
    params:
        extra="",
        build="38",
    log:
        "logs/chm-eval/{sample}.log",
    wrapper:
        "v3.10.2-32-gf4e5b66/bio/benchmark/chm-eval"

Note that input, output and log file paths can be chosen freely.
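
The kit directory must exist before this rule runs. The snakemake-wrappers collection also provides a companion wrapper, bio/benchmark/chm-eval-kit, that downloads the kit; the rule below is an illustrative sketch (output path and wrapper version are assumptions — consult the chm-eval-kit wrapper documentation for its exact parameters):

rule chm_eval_kit:
    output:
        directory("resources/chm-eval-kit"),
    log:
        "logs/chm-eval-kit.log",
    wrapper:
        "v3.10.2-32-gf4e5b66/bio/benchmark/chm-eval-kit"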

When running with

snakemake --use-conda

the software dependencies will be automatically deployed into an isolated environment before execution.

Software dependencies

  • perl=5.32.1

Input/Output

Input:

  • kit: Path to the CHM-eval kit directory (containing the run-eval script)

  • vcf: Path to VCF to evaluate (can be gzipped)

Output:

  • summary: Path to summary statistics (the file name must end with .summary)

  • bed: Path to list of errors (BED formatted)

Params

  • build: Genome build. Either 37 or 38.

  • extra: Optional parameters passed to run-eval, besides -g and -o (which the wrapper sets itself)

Authors

  • Johannes Köster

Code

__author__ = "Johannes Köster"
__copyright__ = "Copyright 2020, Johannes Köster"
__email__ = "johannes.koester@uni-due.de"
__license__ = "MIT"

from snakemake.shell import shell

# Redirect only stderr to the log file; run-eval writes its script to stdout.
log = snakemake.log_fmt_shell(stdout=False, stderr=True)

kit = snakemake.input.kit
vcf = snakemake.input.vcf
build = snakemake.params.build
extra = snakemake.params.get("extra", "")

# run-eval takes an output *prefix* via -o and appends suffixes such as
# .summary itself, so strip the .summary suffix from the declared output
# to recover that prefix.
if not snakemake.output[0].endswith(".summary"):
    raise ValueError("Output file must end with .summary")
out = snakemake.output[0][: -len(".summary")]

# run-eval emits a shell script on stdout; pipe it into sh to execute it.
shell("({kit}/run-eval -g {build} -o {out} {extra} {vcf} | sh) {log}")
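
The prefix derivation used above can be sketched in isolation (a standalone function with hypothetical paths, not part of the wrapper itself):

```python
def summary_prefix(path: str) -> str:
    # run-eval's -o option takes an output prefix; the wrapper requires the
    # declared output to be <prefix>.summary and strips that suffix to
    # recover the prefix it passes to -o.
    suffix = ".summary"
    if not path.endswith(suffix):
        raise ValueError("Output file must end with .summary")
    return path[: -len(suffix)]


print(summary_prefix("chm-eval/sampleA.summary"))  # chm-eval/sampleA
```

Stripping by `len(".summary")` rather than a hard-coded character count keeps the logic correct if the suffix ever changes.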