Benjamin Straus: Notes on ProgLearn Paper
Benjamin Straus edited this page Oct 21, 2020
Paper source here
Problem: existing techniques' performance on past tasks degrades while learning new ones. Moreover, recent approaches set the bar low by merely trying to “avoid forgetting”.
Representation ensembling as opposed to learning ensembling (e.g. bagging)
- Representation ensembling algorithms sequentially learn a representation for each task, and ensemble both old and new representations for all future decisions.
- This paper implements two complementary representation ensembling algorithms, one based on decision forests (Lifelong Forests) and another based on deep networks (Lifelong Networks).
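A minimal sketch of the representation-ensembling idea, in the spirit of Lifelong Forests. All names here are my own, not the paper's API, and a 1-D equal-width binning stands in for a tree's partition of the input space: each task contributes one "transformer" (a partition), and a decision for any task averages posteriors computed under all transformers, old and new.

```python
from collections import defaultdict

class RepresentationEnsemble:
    """Toy representation ensembling: one partition per task, decisions
    ensemble posteriors from every task's partition."""

    def __init__(self, n_bins=4):
        self.n_bins = n_bins
        self.data = {}          # task -> (xs, ys), kept for re-voting
        self.transformers = []  # one learned partition per task, never modified

    def add_task(self, task, xs, ys):
        # Learn a new representation (partition) from this task's data only.
        lo, hi = min(xs), max(xs)
        self.transformers.append((lo, (hi - lo) / self.n_bins or 1.0))
        self.data[task] = (list(xs), list(ys))

    def _cell(self, part, x):
        lo, width = part
        return max(0, min(self.n_bins - 1, int((x - lo) / width)))

    def _posterior(self, part, task, x):
        # "Voter": class frequencies of this task's data landing in x's cell
        # under the given (possibly other-task) representation.
        xs, ys = self.data[task]
        cell = self._cell(part, x)
        counts = defaultdict(int)
        for xi, yi in zip(xs, ys):
            if self._cell(part, xi) == cell:
                counts[yi] += 1
        total = sum(counts.values())
        return {y: c / total for y, c in counts.items()} if total else {}

    def predict(self, task, x):
        # Ensemble old AND new representations for the decision.
        votes = defaultdict(float)
        for part in self.transformers:
            for y, p in self._posterior(part, task, x).items():
                votes[y] += p
        return max(votes, key=votes.get)

model = RepresentationEnsemble()
model.add_task("A", xs=[0.0, 0.2, 0.8, 1.0], ys=[0, 0, 1, 1])
model.add_task("B", xs=[0.1, 0.3, 0.7, 0.9], ys=[1, 1, 0, 0])
# Task A's decision now ensembles representations learned on A and on B.
model.predict("A", 0.1)  # -> 0
```

Note the contrast with bagging: bagging ensembles many learners trained on the same task, while here each transformer is frozen once learned and only the per-task voters are recomputed, so old representations keep contributing to new decisions and vice versa.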
Metrics: forward transfer efficiency (TE) and backward TE.
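As I understand the paper's metric, transfer efficiency is the ratio of the generalization error a learner achieves using only a task's own data to the error it achieves when other tasks' data are also available; TE > 1 means transfer helped. Forward TE restricts the lifelong learner to data seen up to the task, backward TE allows data seen afterwards as well. A sketch with made-up error rates:

```python
def transfer_efficiency(single_task_error, lifelong_error):
    """TE = error using only the task's own data / error with transfer.
    TE > 1: positive transfer; TE < 1: other tasks hurt (forgetting)."""
    return single_task_error / lifelong_error

# Hypothetical error rates on one task (illustrative numbers only):
forward_te = transfer_efficiency(0.30, 0.24)   # data up to this task -> 1.25
backward_te = transfer_efficiency(0.30, 0.20)  # all data, incl. later tasks -> 1.5
```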
Key innovation: “building decision rules that ensemble representations learned by transformers across tasks. In particular, a representation learned for task t might be a useful representation for task t′ and vice versa”
Evaluated in both simple and adversarial environments.
The term “task” is used a lot here. What does it mean in a learning context? Like, how does it connect to the idea of random forests? Are additional tasks like having more than one forest?