Image-to-Image Translation Using Conditional Adversarial Networks
Sample results on the night2day, edges2shoes, facades, cityscapes, and maps datasets.
- pix2pix-cGAN-on-maps-dataset.ipynb (training)
- inference.py
You can download pretrained models from the table below:
Dataset Name | Model |
---|---|
maps | download |
cityscapes | download |
edges2shoes | download |
facades | download |
night2day | download |
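Once downloaded, a checkpoint can presumably be loaded as a regular Keras model; the file name and .h5 format below are placeholders, since the exact format depends on the download link:

```python
import tensorflow as tf

# Placeholder file name; use whatever the downloaded checkpoint is called
generator = tf.keras.models.load_model("maps_generator.h5", compile=False)
generator.summary()  # pix2pix generators map a 256x256x3 image to a 256x256x3 image
```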
- Trained in Google Colab with 12GB RAM
- 5k images in the training set, trained for 40k steps
Training:
- For training, you can run the pix2pix-cGAN-on-maps-dataset.ipynb notebook (a sketch of the core training step follows below).
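For reference, the block below is a minimal sketch of a pix2pix cGAN training step of the kind such a notebook implements, using the adversarial-plus-L1 objective from the pix2pix paper (LAMBDA = 100 as in the paper); the generator, discriminator, and optimizers are assumed to exist, and the code is illustrative rather than the notebook's exact implementation.

```python
import tensorflow as tf

LAMBDA = 100  # weight of the L1 term, as in the pix2pix paper
bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def train_step(generator, discriminator, gen_opt, disc_opt, input_image, target):
    with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:
        gen_output = generator(input_image, training=True)

        # PatchGAN discriminator sees (input, target) and (input, generated) pairs
        disc_real = discriminator([input_image, target], training=True)
        disc_fake = discriminator([input_image, gen_output], training=True)

        # Generator: fool the discriminator and stay close to the target (L1)
        gan_loss = bce(tf.ones_like(disc_fake), disc_fake)
        l1_loss = tf.reduce_mean(tf.abs(target - gen_output))
        gen_loss = gan_loss + LAMBDA * l1_loss

        # Discriminator: real pairs -> 1, generated pairs -> 0
        disc_loss = bce(tf.ones_like(disc_real), disc_real) + \
                    bce(tf.zeros_like(disc_fake), disc_fake)

    gen_grads = gen_tape.gradient(gen_loss, generator.trainable_variables)
    disc_grads = disc_tape.gradient(disc_loss, discriminator.trainable_variables)
    gen_opt.apply_gradients(zip(gen_grads, generator.trainable_variables))
    disc_opt.apply_gradients(zip(disc_grads, discriminator.trainable_variables))
    return gen_loss, disc_loss
```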
Inference:
1- For inference, first clone this repository using the following command:
git clone https://github.com/NahidEbrahimian/paint-pix2pix-tensorflow
2- In the ./paint-pix2pix-tensorflow directory, run the following command to install the requirements:
pip install -r requirements.txt
3- Then, select the dataset you want to run inference on (for example: maps, cityscapes or ...), keeping in mind the model input and output for each dataset shown in the images above (a loading sketch follows this list):
- for cityscapes, facades, maps --> right image = model input and left image = model output
- for edges2shoes, night2day --> right image = model input and left image = model output
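As an illustration of the orientations listed above, here is a minimal sketch of splitting a paired dataset image into model input and target halves; the 256x256 resolution and [-1, 1] scaling are typical pix2pix choices, and the function is illustrative, not code from this repository.

```python
import tensorflow as tf

def load_pair(path, input_on_right=True):
    # Paired pix2pix images store the two domains side by side in one file
    image = tf.io.decode_jpeg(tf.io.read_file(path))
    w = tf.shape(image)[1] // 2
    left, right = image[:, :w, :], image[:, w:, :]
    inp, target = (right, left) if input_on_right else (left, right)

    # Resize to the network resolution and scale to [-1, 1]
    inp = tf.image.resize(tf.cast(inp, tf.float32), [256, 256]) / 127.5 - 1.0
    target = tf.image.resize(tf.cast(target, tf.float32), [256, 256]) / 127.5 - 1.0
    return inp, target
```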
4- Put your input images in the ./input/dataset_name directory (dataset_name refers to the dataset name that you selected in the previous step).
5- Run the following command, where:
- dataset_name: the dataset name you selected in step 3
- input: the path to your input image
python3 inference_img.py --input input/maps/01.jpg --dataset_name maps
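For orientation, the sketch below shows roughly what such an inference run amounts to, assuming the downloaded generator is a Keras model; the actual logic lives in inference_img.py, and the file names and preprocessing details here are illustrative.

```python
import numpy as np
import tensorflow as tf
from PIL import Image

def run_inference(model_path, image_path, output_path="output.jpg"):
    generator = tf.keras.models.load_model(model_path, compile=False)

    # Preprocess: 256x256 RGB scaled to [-1, 1], matching training
    img = Image.open(image_path).convert("RGB").resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0

    # Predict and map the result back to [0, 255]
    y = generator(x[None, ...], training=True)[0].numpy()
    out = ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    Image.fromarray(out).save(output_path)

# Example: run_inference("maps_generator.h5", "input/maps/01.jpg")
```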
Demo video: 19.11.2021_02.01.48_REC.mp4
Inference using Paint and Qt:
1- To run inference using Paint and Qt on the edges2shoes dataset, first clone this repository using the following command:
git clone https://github.com/NahidEbrahimian/paint-pix2pix-tensorflow
2- In the ./paint-pix2pix-tensorflow directory, run the following command to install the requirements:
pip install -r requirements.txt
3- Then, run the following command:
python3 qt.py
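qt.py connects a paint canvas to the edges2shoes generator. The sketch below is not the repository's implementation, only an illustration of the general idea (grab the painted widget, preprocess it like a pix2pix input, run the generator); all names, paths, and sizes are assumptions.

```python
import numpy as np
import tensorflow as tf
from PIL import Image
from PyQt5.QtWidgets import QWidget

def predict_from_canvas(canvas: QWidget, model_path="edges2shoes_generator.h5"):
    # Grab the painted widget contents and save them to a temporary file
    canvas.grab().save("sketch.png")

    # Preprocess like a pix2pix input: 256x256 RGB scaled to [-1, 1]
    img = Image.open("sketch.png").convert("RGB").resize((256, 256))
    x = np.asarray(img, dtype=np.float32) / 127.5 - 1.0

    # Generate the corresponding shoe image and save it
    generator = tf.keras.models.load_model(model_path, compile=False)
    y = generator(x[None, ...], training=True)[0].numpy()
    Image.fromarray(((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)).save("generated.png")
```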