Adapt performance degrades when used several consecutive times #133
Hi @mmorsy1981,
Thanks for your prompt response. Validating the Adapt model training on the target dataset showed remarkably high variability in the validation results, and because of this the test accuracy degrades significantly. Any suggestions for dealing with this issue?
Hi @mmorsy1981,
A minor suggestion is to remove
I defined the following DANN models. The DANN model starts off performing well, then the accuracy drops (below 0.1 in some cases) for both the unadapted and adapted models. Can you explain this behavior and how to handle it? @antoinedemathelin @GRichard513 @AlejandrodelaConcha @BastienZim @atiqm
```python
# Assumed imports (not shown in the original snippet)
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import (Activation, BatchNormalization, Conv1D,
                                     Dense, Dropout, GlobalMaxPooling1D)
from tensorflow.keras.optimizers import Adam
from adapt.feature_based import DANN

# XA_env, H, num_classes, X and y are defined elsewhere in my setup.

def get_encoder():
    inp = Input(shape=np.expand_dims(XA_env, -1).shape[1:], name="Signal_Stack")
    x = BatchNormalization()(inp)
    x = Dropout(0.2)(x)
    x = Conv1D(H.shape[1], H.shape[-1], use_bias=False, padding='same', name='Conv1D_L0')(x)
    x = Activation('tanh')(x)
    x = GlobalMaxPooling1D()(x)
    x = Dense(x.shape[-1], activation='relu')(x)
    model = Model(inputs=[inp], outputs=[x])
    return model

enc_out_shape = get_encoder().output_shape

def get_task():
    inp = Input(shape=(enc_out_shape[-1],), name="Signal_Stack")
    x = Dense(inp.shape[-1], activation='relu')(inp)
    x = Dropout(0.2)(x)
    x = Dense(num_classes, activation='softmax', name='OutputLayer')(x)
    model = Model(inputs=[inp], outputs=[x])
    return model

def get_discriminator():
    inp = Input(shape=(enc_out_shape[-1],), name="Signal_Stack")
    x = Dense(inp.shape[-1], activation='relu')(inp)
    x = Dropout(0.2)(x)
    x = Dense(1, activation='sigmoid')(x)
    model = Model(inputs=[inp], outputs=[x])
    return model

# X and y denote a partitioned dataset with domain shift between the partitions
for i in range(4):
    for j in range(4):
        DANN_model = DANN(encoder=get_encoder(), discriminator=get_discriminator(),
                          task=get_task(), lambda_=0.5)
        DANN_model.compile(loss='categorical_crossentropy', optimizer=Adam(0.001),
                           metrics=["acc"])
        DANN_model.fit(X=X[i], y=y[i], Xt=X[j], batch_size=32, epochs=100, shuffle=True)
        print(i, j, DANN_model.score(X[i], y[i]), DANN_model.score(X[j], y[j]))
```
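Since the loop builds a fresh encoder, task, and discriminator on every iteration, stale backend state can accumulate across consecutive runs. One thing worth trying (this is a suggestion of mine, not a confirmed fix for this issue) is clearing the Keras backend session at the top of each iteration, so each DANN instance starts from a clean graph:

```python
import tensorflow as tf

for i in range(4):
    for j in range(4):
        # Release layer/optimizer state left over from previous iterations;
        # without this, graph objects keep accumulating across runs.
        tf.keras.backend.clear_session()
        # ... rebuild the DANN model, compile, fit and score as above ...
```

If the degradation persists with a cleared session, that would point at the training dynamics (e.g. the adversarial `lambda_` trade-off) rather than leaked state.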