
Code for the NeurIPS 2023 NewInML extended abstract on multi-teacher knowledge distillation for leukemia classification


sajontahsen/leukemia-multi-teacher-kd


Scalable Multi-Teacher Gated Knowledge Distillation for Accurate & Efficient Leukemia Classification

Recent advancements in deep learning have driven significant progress in medical imaging, notably in automated leukemia classification. However, deploying state-of-the-art models in real-world applications can be prohibitive due to their computational and memory overhead. In this context, this study introduces a multi-teacher knowledge distillation framework complemented by a learned gating mechanism. The gate dynamically allocates weights to the teacher models during distillation, promoting accurate predictions in the student. Swin Transformer and ConvNeXt serve as the teachers, chosen to harness both the strong representational capacity of transformers and the locality-preserving inductive bias of convolutions. Using this strategy to guide a lightweight MobileNetV2 student yields a clear improvement in performance over conventional training.
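For concreteness, the gating mechanism can be pictured as a small learned head that produces a per-sample softmax weight for each teacher. The snippet below is only an illustrative sketch, not this repository's implementation: the class name `TeacherGate`, the single linear layer, and conditioning the gate on a feature vector are all assumptions.

```python
import torch
import torch.nn as nn

class TeacherGate(nn.Module):
    """Illustrative gate: maps a per-sample feature vector to softmax
    weights over the teachers (here 2: Swin Transformer and ConvNeXt).
    A minimal sketch, not the repository's actual module."""

    def __init__(self, in_dim: int, num_teachers: int = 2):
        super().__init__()
        self.fc = nn.Linear(in_dim, num_teachers)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Shape (batch, num_teachers); each row sums to 1.
        return torch.softmax(self.fc(features), dim=-1)
```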

Update: Extended abstract accepted at the NeurIPS 2023 NewInML Workshop.

Knowledge Distillation Flow


Figure: Training step for the student and gate model
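At a high level, the training step in the figure can be read as follows: the frozen teachers produce soft targets, the gate mixes them with per-sample weights, and the student is optimized on a weighted sum of a distillation loss and cross-entropy on the hard labels. The sketch below illustrates that loop under assumed interfaces (a student that also returns features, a shared optimizer over student and gate parameters, `temperature` and `alpha` hyperparameters); it is not the exact code from the notebooks.

```python
import torch
import torch.nn.functional as F

def train_step(images, labels, student, teachers, gate, optimizer,
               temperature=4.0, alpha=0.5):
    """One illustrative train step. Assumes `student(images)` returns
    (logits, features), `teachers` is a list of frozen models returning
    logits, and `optimizer` covers both student and gate parameters."""
    optimizer.zero_grad()

    student_logits, student_feats = student(images)

    # Frozen teachers: stack their logits as (batch, num_teachers, classes).
    with torch.no_grad():
        teacher_logits = torch.stack([t(images) for t in teachers], dim=1)

    # The gate assigns a per-sample weight to each teacher, then mixes them.
    weights = gate(student_feats)                                 # (B, T)
    mixed = (weights.unsqueeze(-1) * teacher_logits).sum(dim=1)   # (B, C)

    # Soft-target distillation loss plus standard cross-entropy.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(mixed / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)

    loss = alpha * kd + (1 - alpha) * ce
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the mixed soft target depends on the gate's weights, gradients from the distillation loss reach both the student and the gate, which is one way to train them jointly as the figure suggests.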

Code

Teacher Models Pre-Training Notebooks

KD Framework

Student Standalone Training Notebook
