
Benjamin Straus: Notes on ProgLearn Paper


Paper source here

Abstract

Goal: improve performance on all tasks (including past and future) with any new data

Problem: with existing techniques, performance on past tasks degrades while learning new ones. And recent approaches set the bar low by merely trying to “avoid forgetting”.

Solution: Use progressive learning to actually improve performance on all tasks.

Representation ensembling, as opposed to learner ensembling (e.g. bagging)

  • Representation ensembling algorithms sequentially learn a representation for each task, and ensemble both old and new representations for all future decisions.
  • This paper implements two complementary representation ensembling algorithms, one based on decision forests (Lifelong Forests), and another based on deep networks (Lifelong Networks).

Other topics

Transfer efficiency

  • Forward TE
  • Backward TE
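Roughly, as I read the paper (this is my paraphrase, with `D^{<=t}_n` as my shorthand for the data observed up to and including the last sample of task t; check the paper for the exact conditioning on sample sizes):

```latex
% Transfer efficiency of algorithm f on task t with n total samples:
% single-task risk over multi-task risk. TE > 1 means other tasks' data helped.
\mathrm{TE}^t_n(f) = \frac{\mathbb{E}\,[R^t(f(\mathbf{D}^t_n))]}{\mathbb{E}\,[R^t(f(\mathbf{D}_n))]}

% Forward TE: denominator only sees data up to the last sample of task t,
% so it isolates transfer from PAST tasks to task t.
\mathrm{FTE}^t_n(f) = \frac{\mathbb{E}\,[R^t(f(\mathbf{D}^t_n))]}{\mathbb{E}\,[R^t(f(\mathbf{D}^{\le t}_n))]}

% Backward TE: compares "data up to task t" against ALL the data,
% so it isolates transfer from FUTURE tasks back to task t.
\mathrm{BTE}^t_n(f) = \frac{\mathbb{E}\,[R^t(f(\mathbf{D}^{\le t}_n))]}{\mathbb{E}\,[R^t(f(\mathbf{D}_n))]}
```

Note that the two factors multiply back to the overall ratio: TE = FTE × BTE.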

Representation Ensembling

Key innovation: “building decision rules that ensemble representations learned by transformers across tasks. In particular, a representation learned for task t might be a useful representation for task t′ and vice versa”
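To make this concrete, here is a toy sketch of the forest variant (Lifelong Forests): one forest per task acts as a transformer, every task's data votes inside every forest's leaves, and a decider averages the posteriors. This is my own illustration built on scikit-learn, NOT the ProgLearn API; the real algorithm uses “honest” leaf posteriors and more careful data splitting.

```python
# Toy sketch of Lifelong-Forests-style representation ensembling.
import numpy as np
from collections import Counter, defaultdict
from sklearn.ensemble import RandomForestClassifier

class ToyLifelongForest:
    def __init__(self, n_estimators=10):
        self.n_estimators = n_estimators
        self.transformers = {}  # task_id -> forest; its leaves are the learned representation
        self.voters = {}        # (repr_task, voter_task) -> per-leaf class counts
        self.classes = {}       # task_id -> sorted label array
        self.data = {}          # task_id -> (X, y), kept so old tasks can vote in new forests

    def add_task(self, task_id, X, y):
        # Transformer: learn a new representation (forest) from this task only.
        forest = RandomForestClassifier(n_estimators=self.n_estimators).fit(X, y)
        self.transformers[task_id] = forest
        self.classes[task_id] = np.unique(y)
        self.data[task_id] = (X, y)
        # Voters: the new task votes in every representation (forward transfer),
        # and every old task votes in the new representation (backward transfer).
        for repr_task in self.transformers:
            self._fit_voter(repr_task, task_id, X, y)
        for voter_task, (Xo, yo) in self.data.items():
            if voter_task != task_id:
                self._fit_voter(task_id, voter_task, Xo, yo)

    def _fit_voter(self, repr_task, voter_task, X, y):
        # Count voter-task labels inside each leaf of the repr-task forest.
        leaf_ids = self.transformers[repr_task].apply(X)  # shape (n, n_trees)
        counts = defaultdict(Counter)
        for row, label in zip(leaf_ids, y):
            for tree, leaf in enumerate(row):
                counts[(tree, leaf)][label] += 1
        self.voters[(repr_task, voter_task)] = counts

    def predict(self, task_id, X):
        # Decider: average per-leaf posteriors across ALL representations,
        # old and new, when answering a query for one task.
        classes = self.classes[task_id]
        scores = np.zeros((len(X), len(classes)))
        for repr_task, forest in self.transformers.items():
            counts = self.voters[(repr_task, task_id)]
            for i, row in enumerate(forest.apply(X)):
                for tree, leaf in enumerate(row):
                    c = counts[(tree, leaf)]
                    total = sum(c.values())
                    if total:
                        scores[i] += np.array([c[k] for k in classes]) / total
        return classes[scores.argmax(axis=1)]
```

Storing each task's data is what lets old tasks vote in new representations (backward transfer); ProgLearn handles this bookkeeping internally.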

Illustrations

  • Simple environment
  • Adversarial environments

Questions

The term “task” is used a lot here. What does it mean in a learning context? Like, how does it connect to the idea of random forests? Are additional tasks like having more than one forest?