
## SIGIR 2019 and before

| Year | Title | Author | Publication | Code | Tasks | Notes | Datasets | Notions |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2019 | Efficient and Effective Text-Annotation through Active Learning | Zlabinger | SIGIR | - | Cold start, labeling mistakes by human annotators, text classification | The most commonly used active learning criterion is uncertainty sampling, where a supervised model predicts which uncertain samples should be labeled next by a human annotator. Active learning faces two problems: first, the supervised model has a cold-start problem at the beginning of the active learning process, and second, the human annotators might make labeling mistakes. In my Ph.D. research, I address both problems by developing an unsupervised method for computing informative samples. The informative samples are first manually labeled and then used both to train the initial active learning model and to train the human annotators in a learning-by-doing session. The planned unsupervised method will be based on word embeddings and limited to the area of text classification. | | |
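
The notes above describe uncertainty sampling, the most common active learning query criterion. The following is a minimal sketch of the least-confidence variant of that criterion, not code from the paper; the model, synthetic data, and batch size are illustrative assumptions.

```python
# Minimal sketch of uncertainty sampling (least-confidence variant).
# The classifier, data, and batch size below are illustrative assumptions,
# not taken from the Zlabinger paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def least_confidence_query(model, X_pool, batch_size=10):
    """Return indices of pool samples the model is least confident about."""
    probs = model.predict_proba(X_pool)          # shape: (n_samples, n_classes)
    confidence = probs.max(axis=1)               # probability of the top class
    return np.argsort(confidence)[:batch_size]   # lowest confidence first

# Usage: fit on a small labeled seed set, then query the unlabeled pool.
rng = np.random.default_rng(0)
X_labeled, y_labeled = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_pool = rng.normal(size=(200, 5))

model = LogisticRegression().fit(X_labeled, y_labeled)
query_indices = least_confidence_query(model, X_pool)
print(query_indices)  # samples to send to the human annotator next
```

Note that the quality of these queries depends entirely on the initial model, which is exactly the cold-start problem the paper targets: with a tiny or empty seed set, the confidence scores are uninformative, motivating an unsupervised selection of the first samples instead.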