Experiment Question #6
Comments
Hey @eds89 , no question is really silly :).
A follow-up question:
1) How can I change the experiment to use a different camera setting when
moving to the new town (after the net is trained in Town01)?
Since the new camera will inevitably produce data the net
was not specifically trained on, I can already predict there will be
changes in the obtained results, i.e., more infractions or even an inability to
drive the car at all. I want to know how much the new results differ from
the original ones.
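In case it helps frame the question: the camera is part of the settings the client sends to the server before each episode, so swapping it does not require retraining code changes. A minimal sketch, assuming the CARLA 0.8.x Python client API (the size, position, and rotation values below are illustrative, not the ones used in the paper):

```python
# Sketch only: assumes the CARLA 0.8.x Python client (carla.settings / carla.sensor).
from carla.settings import CarlaSettings
from carla.sensor import Camera

settings = CarlaSettings()
settings.set(SynchronousMode=True)

# An RGB camera with scene-final post-processing, matching the training setup;
# changing PostProcessing, image size, or mounting here changes the input the
# trained net sees at evaluation time.
camera = Camera('CameraRGB', PostProcessing='SceneFinal')
camera.set_image_size(800, 600)
camera.set_position(2.0, 0.0, 1.4)   # illustrative mounting point (x, y, z)
camera.set_rotation(0.0, 0.0, 0.0)   # pitch, yaw, roll
settings.add_sensor(camera)
```

Any mismatch between this camera and the training camera (field of view, mounting height, post-processing) is exactly the distribution shift the follow-up question is about.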
…On 5 February 2018 at 08:40, felipecode ***@***.***> wrote:
Hey @eds89 <https://github.com/eds89> , no question is really silly :).
1.
Yes, during evaluation you need to have CARLA running. You will then
receive new images and control the car. The recorded images are only for
training the neural network.
2.
The forward-facing camera was just RGB post-processed (scene-final);
no depth or semantic segmentation was used.
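To make point 1 concrete, the closed-loop evaluation looks roughly like this. A sketch assuming the CARLA 0.8.x client API, where `settings` is a `CarlaSettings` object with the camera attached and `model.predict` is a hypothetical stand-in for the trained network:

```python
# Sketch only: requires a running CARLA server; `settings` and `model` are
# placeholders for a configured CarlaSettings object and the trained network.
from carla.client import make_carla_client

with make_carla_client('localhost', 2000) as client:
    client.load_settings(settings)
    client.start_episode(0)          # player start index
    while True:
        # Fresh images arrive from the server every tick...
        measurements, sensor_data = client.read_data()
        rgb = sensor_data['CameraRGB'].data
        # ...and the network's output is sent back as vehicle control.
        steer, throttle = model.predict(rgb)     # hypothetical interface
        client.send_control(steer=steer, throttle=throttle, brake=0.0)
```

The recorded dataset is only used offline to fit the network; at evaluation time the loop above closes through the simulator.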
Hello.
I've read the paper and I have a few questions about the Imitation Learning experiment.
I'm not a deep learning expert, so some questions may sound silly.
When executing the navigation tasks after training, does the trained system receive new images from the camera installed on the car? Do you still have the images recorded during the execution of the experiment?
What camera mode was used by the forward-facing camera? Also, was there any kind of post-processing applied to the camera (i.e. ground truth, semantic segmentation, or just scene final)?