Maybe we can add a bash script for training and creating experiments, so we no longer have to do these steps by hand (a sketch follows this list):
- submit jobs in batches (currently 3 separate submissions), because SLURM can only handle 1000 array tasks per submission;
- mkdir for each new experiment and copy the .py and .slurm files into the new directory.
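A minimal sketch of such a script, assuming a `train.py`/`train.slurm` pair; the experiment root, script names, and task count are placeholders, not names from this repo:

```bash
#!/bin/bash
# Hypothetical sketch: create a fresh experiment directory, copy the code in,
# and submit the job array in chunks of at most 1000 tasks (SLURM's limit
# mentioned above). EXP_ROOT, train.py, train.slurm, N_TASKS are placeholders.

set -euo pipefail

EXP_ROOT="experiments"
EXP_NAME="exp_$(date +%Y%m%d_%H%M%S)"
EXP_DIR="${EXP_ROOT}/${EXP_NAME}"

# mkdir for the new experiment and copy the .py and .slurm files into it
mkdir -p "${EXP_DIR}"
cp train.py train.slurm "${EXP_DIR}/"
cd "${EXP_DIR}"

N_TASKS=2500   # total number of GPIs/tasks to train (placeholder)
BATCH=1000     # max array tasks per submission

# submit ceil(N_TASKS / BATCH) chunks instead of 3 hand-made submissions
for (( start=0; start<N_TASKS; start+=BATCH )); do
    end=$(( start + BATCH - 1 ))
    if (( end >= N_TASKS )); then end=$(( N_TASKS - 1 )); fi
    sbatch --array="${start}-${end}" train.slurm
done
```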
maybe we can also generate (a sketch follows this list):
- a list of the GPI numbers whose best model or SHAP values were not saved, because there is a recurring bug where the shape of the SHAP values does not match the shape of lat/lon/gpi/trained_model;
- a list of RMSE and MAE (or similar metrics) instead of relying on the .log files. If we submit 3 times, the .log files of the same job_id get overwritten.
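A minimal post-processing sketch along those lines; the output layout (`models/`, `shap/`, `metrics/`) and file naming are assumptions and would need to match whatever the training script actually writes:

```bash
#!/bin/bash
# Hypothetical sketch: after all array tasks finish, record which GPIs are
# missing their outputs, and merge per-task metric files into one CSV that a
# re-submission cannot clobber. All paths/filenames below are placeholders.

set -euo pipefail

EXP_DIR="${1:-.}"
cd "${EXP_DIR}"

# list the GPIs whose best model or SHAP values were never written
: > missing_gpis.txt
for gpi in $(seq 0 2499); do   # 2500 GPIs is a placeholder count
    if [[ ! -f "models/best_model_${gpi}.pkl" || ! -f "shap/shap_${gpi}.npy" ]]; then
        echo "${gpi}" >> missing_gpis.txt
    fi
done

# merge per-GPI metric files (each written by its task as "gpi,rmse,mae")
# into a single summary CSV
echo "gpi,rmse,mae" > metrics_summary.csv
cat metrics/metrics_*.csv >> metrics_summary.csv 2>/dev/null || true
```

Separately, the log-overwriting problem can also be avoided at the SLURM level: setting `#SBATCH --output=slurm-%A_%a.out` in the .slurm file gives each array task its own log (`%A` is the master job ID, `%a` the array task index), so repeated submissions no longer overwrite each other.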