diff --git a/README.md b/README.md
index d34d987..10498a0 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,10 @@
-# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.0.0)
+# MEDIPIPE: (cf)MeDIP-seq Data QC and Analysis Pipeline (v1.1.0)
## Introduction
MEDIPIPE is designed for automated end-to-end quality control (QC) and analysis of (cf)MeDIP-seq data. The pipeline takes raw FASTQ files all the way through QC, alignment, methylation quantification and aggregation. It was developed by [Yong Zeng](mailto:yzeng@uhnresearch.ca), building on prior work by Wenbin Ye and [Eric Zhao](https://github.com/pughlab/cfMeDIP-seq-analysis-pipeline).
### Features
-- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which will automatically deploy the execution environments. It can also be performed across different cluster engines (e.g. SLURM) or stand-alone machines.
+- **Portability**: MEDIPIPE was developed with [Snakemake](https://snakemake.readthedocs.io/en/stable/index.html), which automatically deploys the execution environments. Users can also pin the versions of the software to be installed in the YAML files (see the sketch below). The pipeline can run on different cluster engines (e.g., SLURM) as well as on stand-alone machines.
- **Flexibility**: MEDIPIPE can deal with single-end or paired-end reads, with or without spike-in/UMI sequences. It can be applied to individual samples, as well as to the aggregation of multiple samples from large-scale profiling.
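+
+For example, tool versions are pinned in conda_env.yaml and in the YAMLs under ./workflow/rules/extra_env/. The sketch below only illustrates how a pinned version could be inspected and changed before the first run; the bump to samtools 1.18 is hypothetical.
+
+```bash
+$ grep "samtools" ./workflow/rules/extra_env/umi_tools.yaml
+  - samtools=1.17
+$ sed -i 's/samtools=1.17/samtools=1.18/' ./workflow/rules/extra_env/umi_tools.yaml   ## hypothetical version bump
+```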
### Citation
@@ -16,7 +16,7 @@ This schematic diagram shows you how pipeline works:
## Installation
-1) Make sure that you have a Conda-based Python3 distribution(e.g.,the [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The Miniconda3-py38_23.3.1-0-Linux-x86_64.sh for Linux is prefered to avoid potential cnflicts. The installation of [Mamba](https://github.com/mamba-org/mamba) is also recommended:
+1) Make sure that you have a Conda-based Python3 distribution (e.g., [Miniconda](https://docs.conda.io/en/latest/miniconda.html)). The installer Miniconda3-py38_23.3.1-0-Linux-x86_64.sh for Linux is preferred to avoid potential conflicts. The installation of [Mamba](https://github.com/mamba-org/mamba) is also recommended:
```bash
$ conda install -n base -c conda-forge mamba
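+
+## A minimal optional sketch (assuming conda_env.yaml at the repository root): create
+## and activate the pipeline environment with Mamba under the name MEDIPIPE.
+$ mamba env create --file conda_env.yaml --name MEDIPIPE
+$ conda activate MEDIPIPE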
@@ -39,7 +39,7 @@ This schematic diagram shows you how pipeline works:
> **IMPORTANT**: EXTRA ENVIRONMENTS WILL BE INSTALLED; MAKE SURE YOU STILL HAVE INTERNET ACCESS.
* **Step 1:** Prepare reference, samples FASTQ and aggregation files according to [templates](./test/README.md).
* **Step 2:** Specify input configuration file by following the instructions [here](./test/README.md).
- * **NOTE:** For testing run, you can simply run the SED command below to specify files in Step1,2.The toy dataset will fail to fit sigmoid model for MEDStrand due to low read depth, but you can still find other results in ./test/Res.
+ * **NOTE:** For a test run, you can simply run the sed command below to fill in the files from Steps 1 and 2. This test dataset is kept small so that the extra environments install quickly; it will fail to fit the sigmoid model for MEDStrand due to low read depth, but you can still find the other results in ./test/Res.
```bash
$ sed -i 's,/path/to,'"$PWD"',g' ./test/*template.*
@@ -50,6 +50,8 @@ This schematic diagram shows you how pipeline works:
--use-conda --cores 4 -p
```
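+
+ * **TIP:** Before launching the run above, a Snakemake dry run can preview the planned jobs. The command below is only a sketch: it assumes the Snakefile sits at ./workflow/Snakefile and reuses the test config from Step 2, so adjust the paths to your setup.
+
+```bash
+$ snakemake --snakefile ./workflow/Snakefile \
+            --configfile ./test/config_template.yaml \
+            --use-conda --cores 4 -np
+```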
+ * **Step 3 (Optional):** You can perform a full-scale test run with another [toy dataset](./test/README.md) by following Steps 1 and 2.
+
5) Run on HPCs
You can also submit this pipeline to clusters with the template ./workflow/sbatch_Snakefile_template.sh. This template is written for SLURM, but it can be adapted to other resource management systems. More details about cluster configuration can be found [here](https://snakemake.readthedocs.io/en/stable/executing/cluster.html).
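+
+For example, a minimal submission could look like the sketch below, assuming the template has already been edited with your account, partition and config paths:
+
+```bash
+$ sbatch ./workflow/sbatch_Snakefile_template.sh
+$ squeue -u $USER        ## monitor the submitted Snakemake master job
+```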
diff --git a/assets/README.md b/assets/README.md
index adea8f6..df38cd2 100644
--- a/assets/README.md
+++ b/assets/README.md
@@ -10,15 +10,15 @@ You can download reference genome, pre-build BWA index and annotated regions (e.
```bash
## eg: ./assets/Reference/download_reference.sh hg38 /your/genome/data/path/hg38
-$ ./workflow/download_reference.sh [GENOME] [DEST_DIR]
+$ ./assets/Reference/download_reference.sh [GENOME] [DEST_DIR]
```
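+
+The script also generates a manifest TSV at `[DEST_DIR]/[GENOME].tsv` for use in the pipeline config; a quick check might look like the sketch below (the paths are examples only):
+
+```bash
+$ ./assets/Reference/download_reference.sh hg38 /your/genome/data/path/hg38
+$ cat /your/genome/data/path/hg38/hg38.tsv
+```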
* Build reference genomes index
If your sequencing libraries come with spike-ins, you can build a new aligner index after combining the spike-in genome with the human genome. The new index information will be appended to the corresponding manifest file.
```bash
-## eg: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
-$ ./workflow/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
+## eg: ./assets/Reference/build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38
+$ ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]
```
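+
+As a quick sanity check (a sketch using the example paths above), the combined FASTA and its BWA index files should appear under the destination directory once the script finishes:
+
+```bash
+$ ls /your/genome/data/path/hg38/hg38_BAC_F19K16_F24B22.fa*
+## expected: the combined .fa plus the BWA index files (.amb, .ann, .bwt, .pac, .sa)
+```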
diff --git a/assets/Reference/build_reference_index.sh b/assets/Reference/build_reference_index.sh
index 48189da..40c1012 100755
--- a/assets/Reference/build_reference_index.sh
+++ b/assets/Reference/build_reference_index.sh
@@ -1,9 +1,8 @@
#!/bin/bash
## the script will build BWA index for combined human and spike-in genomes.
-## "Usage: ./build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
-## "Example: ./build_reference_index.sh hg38 ./data/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"
-## "Example: ./build_reference_index.sh hg19 ./data/BAC_F19K16_F24B22.fa hg19_BAC_F19K16_F24B22 /cluster/projects/tcge/DB/cfmedip-seq-pepeline/hg19"
+## "Usage: ./assets/Reference/build_reference_index.sh [GENOME] [SPIKEIN_FA] [INDEX_PREFIX] [DEST_DIR]"
+## "Example: ./assets/Reference/build_reference_index.sh hg38 ./assets/Spike-in_genomes/BAC_F19K16_F24B22.fa hg38_BAC_F19K16_F24B22 /your/genome/data/path/hg38"
#################
## initialization
@@ -33,7 +32,7 @@ cat ${hg_fa} ${SPIKEIN_FA} > ${DEST_DIR}/${INDEX_PREFIX}.fa
cd ${DEST_DIR}
echo "=== Building bwa index for mereged genomes ..."
-conda activate tcge-cfmedip-seq-pipeline
+conda activate MEDIPIPE
bwa index -a bwtsw ${INDEX_PREFIX}.fa
diff --git a/assets/Reference/download_reference.sh b/assets/Reference/download_reference.sh
index 65ae2b4..8c3e483 100755
--- a/assets/Reference/download_reference.sh
+++ b/assets/Reference/download_reference.sh
@@ -7,8 +7,8 @@
## "A TSV file [DEST_DIR]/[GENOME].tsv will be generated. Use it for pipeline."
## "Supported genomes: hg19 and hg38"; Arabidopsis TAIR10 genome will be downloaded,
## as well as building bwa index for merged genomes.
-## "Usage: ./download_build_reference.sh [GENOME] [DEST_DIR]"
-## "Example: ./download_build_reference.sh hg38 /your/genome/data/path/hg38"
+## "Usage: ./assets/Reference/download_build_reference.sh [GENOME] [DEST_DIR]"
+## "Example: ./assets/Reference/download_build_reference.sh hg38 /your/genome/data/path/hg38"
#################
@@ -47,7 +47,7 @@ if [[ "${GENOME}" == "hg38" ]]; then
PROM="https://www.encodeproject.org/files/ENCFF140XLU/@@download/ENCFF140XLU.bed.gz"
ENH="https://www.encodeproject.org/files/ENCFF212UAV/@@download/ENCFF212UAV.bed.gz"
- REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
+ #REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
fi
@@ -68,7 +68,7 @@ if [[ "${GENOME}" == "hg19" ]]; then
ENH="https://storage.googleapis.com/encode-pipeline-genome-data/hg19/ataqc/reg2map_honeybadger2_dnase_enh_p2.bed.gz"
## Arabidopsis
- REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
+ # REF_FA_TAIR10="https://www.arabidopsis.org/download_files/Genes/TAIR10_genome_release/TAIR10_chromosome_files/TAIR10_chr_all.fas"
fi
@@ -84,12 +84,12 @@ wget -c -O $(basename ${REF_MITO_FA}) ${REF_MITO_FA}
wget -c -O $(basename ${CHRSZ}) ${CHRSZ}
## TAIR10
-wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
-sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
-gzip TAIR10_chr_all.fas
+#wget -c -O $(basename ${REF_FA_TAIR10}) ${REF_FA_TAIR10}
+#sed -i -e 's/^>/>tair10_chr/' TAIR10_chr_all.fas
+#gzip TAIR10_chr_all.fas
## combine genomes
-cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz
+# cat $(basename ${REF_FA}) TAIR10_chr_all.fas.gz > ${GENOME}_tair10.fa.gz
## annotated regions
wget -N -c ${BLACKLIST}
diff --git a/conda_env.yaml b/conda_env.yaml
index 135ca2d..d5f48c0 100644
--- a/conda_env.yaml
+++ b/conda_env.yaml
@@ -7,6 +7,6 @@ dependencies:
- snakemake=6.12.3
- bwa=0.7.17
- trim-galore=0.6.7
- - samtools=1.9
+ - samtools=1.17
- graphviz=2.40.1
- picard=2.26.6
diff --git a/figures/MEDIPIPE_Flowchart.png b/figures/MEDIPIPE_Flowchart.png
index 2db1d76..03315ad 100644
Binary files a/figures/MEDIPIPE_Flowchart.png and b/figures/MEDIPIPE_Flowchart.png differ
diff --git a/test/Fastq/toy_sample1_R1.fastq.gz b/test/Fastq/toy_sample1_R1.fastq.gz
new file mode 100644
index 0000000..9f48d19
Binary files /dev/null and b/test/Fastq/toy_sample1_R1.fastq.gz differ
diff --git a/test/Fastq/toy_sample1_R2.fastq.gz b/test/Fastq/toy_sample1_R2.fastq.gz
new file mode 100644
index 0000000..ff22ec6
Binary files /dev/null and b/test/Fastq/toy_sample1_R2.fastq.gz differ
diff --git a/test/Fastq/toy_sample2_R1.fastq.gz b/test/Fastq/toy_sample2_R1.fastq.gz
new file mode 100644
index 0000000..0d8d6ce
Binary files /dev/null and b/test/Fastq/toy_sample2_R1.fastq.gz differ
diff --git a/test/Fastq/toy_sample2_R2.fastq.gz b/test/Fastq/toy_sample2_R2.fastq.gz
new file mode 100644
index 0000000..d6b5ba7
Binary files /dev/null and b/test/Fastq/toy_sample2_R2.fastq.gz differ
diff --git a/test/README.md b/test/README.md
index 0bbcf98..c4c9bf3 100644
--- a/test/README.md
+++ b/test/README.md
@@ -31,9 +31,12 @@
A config YAML file specifies all PATHs of input files and parameters that are necessary for successfully running this pipeline. This includes a specification of the path to the genome reference files. Please make sure to specify absolute paths rather than relative paths in your config files. More detailed instructions can be found in the [config_template](./test/config_template.yaml).
-## Test dataset
+## Test datasets
-The current toy cfMeDIP-seq data is randomly derived from the two BACs ([F19K16](https://www.arabidopsis.org/servlets/TairObject?type=assembly_unit&id=362) and [F24B22](https://www.arabidopsis.org/servlet/TairObject?type=AssemblyUnit&name=F24B22)) Arabidopsis by [Min Han](https://github.com/mhanbioinfo/make_toy_fastqs)
+There are two randomly generated cfMeDIP-seq test datasets; more details can be found [here](https://github.com/mhanbioinfo/make_toy_fastqs).
-`Reference/`: Genome reference and BWA index (Athaliana.BAC.F19K16.F24B22 )
-`Fastq/` : Randomly generated paired-end FastQ reads
+`Reference/`: Genome reference and BWA index (Athaliana.BAC.F19K16.F24B22) used for the quick test run that installs the extra environments.
+
+`Fastq/` : Randomly generated paired-end FASTQ reads. Samples A and B were derived from two Athaliana BACs; toy samples 1 and 2 incorporate both human and BAC sequences, with UMI barcodes.
+
+`Res/` : The outputs of the test run (samples A and B) will be deposited here. In addition, the aggregated QC reports for the 2 toy samples and for 163 brain cancer samples from this [study](https://www.nature.com/articles/s41591-020-0932-2) are attached.
diff --git a/test/aggr_qc_report_demo.html b/test/Res/aggr_qc_report_real.html
similarity index 100%
rename from test/aggr_qc_report_demo.html
rename to test/Res/aggr_qc_report_real.html
diff --git a/test/Res/aggr_qc_report_toy.html b/test/Res/aggr_qc_report_toy.html
new file mode 100644
index 0000000..60fc6ca
--- /dev/null
+++ b/test/Res/aggr_qc_report_toy.html
@@ -0,0 +1,3791 @@
+<!--
+Rendered content of the aggregated QC report for the toy samples; the HTML markup and
+flattened summary tables are omitted here. Recoverable structure:
+
+Aggregated_QC_Report (generated by the tcge-cfmedip-seq-pipeline)
+* Selected QC metrics: heatmap of transformed Z scores (in general, the darker, the worse).
+  Darker means lower for raw_reads_depth, usable_reads_depth, saturation_maxEstCor and
+  enrichment_GoGe; higher for coverage_pctReadsWoCpG and coverage_pctCpGwoRead; and further
+  from the mean mode for fragment_size_mode (paired-end reads only).
+* Reads QC metrics: summary statistics for raw, prefilter and aligned reads
+  (raw_reads_depth, prefilter_reads, mapped_reads_pct, usable_reads, usable_reads_pct),
+  plus usable read depth and its percentage against raw read depth per sample.
+* Fragment QC metrics: fragment size mode, and fragment size ranges with >=80% properly
+  paired reads (PICARD MEDIAN_INSERT_SIZE +/- WIDTH_OF_80_PERCENT/2).
+* MEDIPS QC metrics: saturation and coverage summaries, percentage of usable reads without
+  CpG and of CpGs without reads, and enrichment score per sample.
+-->
diff --git a/workflow/rules/extra_env/umi_tools.yaml b/workflow/rules/extra_env/umi_tools.yaml
index a8a2acf..7d8fab5 100644
--- a/workflow/rules/extra_env/umi_tools.yaml
+++ b/workflow/rules/extra_env/umi_tools.yaml
@@ -3,4 +3,4 @@ channels:
- bioconda
dependencies:
- umi_tools=1.0.1
- - samtools=1.9
+ - samtools=1.17