Experiment Question #6

Open · eds89 opened this issue Feb 2, 2018 · 2 comments

eds89 commented Feb 2, 2018

Hello.
I've read the paper and I've got a few questions about the Imitation Learning experiment.

I'm not a deep learning expert, so some questions may sound silly.

  1. When executing the navigation tasks, after training, does the trained system receive new images from the camera installed on the car? Do you still have the images recorded during the execution of the experiment?

  2. What camera mode was used by the forward-facing camera? Also, was there any kind of post-processing applied to the camera images (e.g., ground-truth depth, semantic segmentation, or just scene final)?

@felipecode
Contributor

Hey @eds89, no question is really silly :).

  1. Yes, during evaluation you need to have CARLA running. You will then receive new images from the simulator and send back controls for the car; the recorded images are only used for training the neural network. (A rough sketch of such an evaluation loop is included after this list.)

  2. The forward-facing camera was just RGB post-processed (scene-final); no depth or semantic segmentation was used.
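For concreteness, here is a minimal sketch of what that evaluation loop could look like with the 0.8-era CARLA Python client. Exact call names may differ between CARLA versions, and `run_model` is a hypothetical placeholder for the trained imitation-learning network, not part of the CARLA API:

```python
# Minimal evaluation-loop sketch against a running CARLA server (0.8-era API).
from carla.client import make_carla_client
from carla.sensor import Camera
from carla.settings import CarlaSettings


def run_model(image, measurements):
    # Hypothetical placeholder: a real implementation would run the trained
    # network on `image` and return (steer, throttle, brake).
    return 0.0, 0.3, 0.0


def evaluate(host='localhost', port=2000, frames=1000):
    with make_carla_client(host, port) as client:
        # Configure an episode with a single forward-facing RGB camera.
        settings = CarlaSettings()
        settings.set(SynchronousMode=True)

        # 'SceneFinal' is the plain RGB output mentioned above; no depth or
        # semantic-segmentation post-processing.
        camera = Camera('CameraRGB', PostProcessing='SceneFinal')
        camera.set_image_size(800, 600)
        camera.set_position(2.0, 0.0, 1.4)  # mounted on the hood, facing forward
        settings.add_sensor(camera)

        client.load_settings(settings)
        client.start_episode(0)

        for _ in range(frames):
            # Each tick the simulator sends fresh measurements and images ...
            measurements, sensor_data = client.read_data()
            image = sensor_data['CameraRGB']

            # ... the network maps the image to controls ...
            steer, throttle, brake = run_model(image, measurements)

            # ... and the client drives the car in the running simulator.
            client.send_control(steer=steer, throttle=throttle, brake=brake)
```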

eds89 commented Feb 5, 2018 via email
