Merge pull request #531 from neurodata/shakespeare
ENH & DOC update algorithm descriptions
jdey4 authored Jan 17, 2022
2 parents 8290ec4 + d246c60 commit f2afc14
Showing 22 changed files with 180 additions and 199 deletions.
26 changes: 7 additions & 19 deletions docs/experiments/fte_bte_aircraft_bird.ipynb
@@ -2,7 +2,6 @@
"cells": [
{
"cell_type": "markdown",
"id": "three-dakota",
"metadata": {},
"source": [
"# FTE/BTE Experiment for Aircraft & Birdsnap\n",
@@ -24,7 +23,6 @@
{
"cell_type": "code",
"execution_count": 1,
"id": "funny-harvest",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +32,6 @@
},
{
"cell_type": "markdown",
"id": "tropical-approval",
"metadata": {},
"source": [
"### Load tasks\n",
@@ -51,7 +48,6 @@
{
"cell_type": "code",
"execution_count": 2,
"id": "engaged-darwin",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +69,6 @@
{
"cell_type": "code",
"execution_count": 3,
"id": "bizarre-defendant",
"metadata": {},
"outputs": [],
"source": [
@@ -86,7 +81,6 @@
},
{
"cell_type": "markdown",
"id": "anticipated-diving",
"metadata": {},
"source": [
"### Sample images\n",
@@ -97,7 +91,6 @@
{
"cell_type": "code",
"execution_count": 4,
"id": "south-government",
"metadata": {},
"outputs": [
{
@@ -119,29 +112,27 @@
},
{
"cell_type": "markdown",
"id": "dynamic-terrain",
"metadata": {},
"source": [
"### Run progressive learning\n",
"### Run synergistic learning\n",
"\n",
"Here we provide two options of implementations of progressive learning: \n",
"Here we provide two options of implementations of synergistical learning: \n",
"\n",
"- omnidirectional forest (Odif), which uses uncertainty forests as the base representer\n",
"- omnidirectional networks (Odin), which uses a deep network as the base representer.\n",
"- synergistic forest (SynF), which uses uncertainty forests as the base representer\n",
"- synergistic network (SynN), which uses a deep network as the base representer.\n",
"\n",
"Use `odif` for omnidirectional forest and `odin` for omnidirectional networks."
"Use `synf` for synergistic forest and `synn` for synergistic network."
]
},
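For context, a minimal sketch of how the two base representers are composed into a learner, following the proglearn classes imported in functions/fte_bte_aircraft_bird_functions.py further down this diff; the exact import paths and keyword arguments here are assumptions, not the notebook's own code.

from proglearn.progressive_learner import ProgressiveLearner
from proglearn.transformers import TreeClassificationTransformer
from proglearn.voters import TreeClassificationVoter
from proglearn.deciders import SimpleArgmaxAverage

# SynF: each task gets a forest of decision trees as its representer,
# tree-based voters, and an argmax-average decider shared across tasks.
synf_learner = ProgressiveLearner(
    default_transformer_class=TreeClassificationTransformer,
    default_transformer_kwargs={"kwargs": {"max_depth": 30}},
    default_voter_class=TreeClassificationVoter,
    default_voter_kwargs={},
    default_decider_class=SimpleArgmaxAverage,
)
# SynN swaps in a deep-network transformer (a Keras model) and the
# KNNClassificationVoter that the functions file also imports.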
{
"cell_type": "code",
"execution_count": 5,
"id": "angry-margin",
"metadata": {},
"outputs": [],
"source": [
"from functions.fte_bte_aircraft_bird_functions import single_experiment\n",
"\n",
"model = \"odif\" # Choose 'odif' or 'odin'\n",
"model = \"synf\" # Choose 'synf' or 'synn'\n",
"ntrees = 10 # Number of trees\n",
"num_repetition = 30\n",
"\n",
@@ -157,7 +148,6 @@
},
{
"cell_type": "markdown",
"id": "fabulous-sunset",
"metadata": {},
"source": [
"### Calculate and plot transfer efficiency"
@@ -166,7 +156,6 @@
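As a rough guide to what this section computes (a sketch, not the notebook's exact code): transfer efficiency compares the error a learner makes on a task when trained on that task alone against its error after it has also seen the other tasks.

def transfer_efficiency(acc_single_task, acc_with_transfer):
    # errors are 1 - accuracy; a ratio above 1 means the other tasks helped
    return (1 - acc_single_task) / (1 - acc_with_transfer)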
{
"cell_type": "code",
"execution_count": 6,
"id": "adopted-cuisine",
"metadata": {},
"outputs": [],
"source": [
@@ -178,7 +167,6 @@
{
"cell_type": "code",
"execution_count": 7,
"id": "casual-probe",
"metadata": {},
"outputs": [
{
@@ -215,7 +203,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.8.5"
}
},
"nbformat": 4,
22 changes: 11 additions & 11 deletions docs/experiments/fte_bte_dtd.ipynb
@@ -8,7 +8,7 @@
"\n",
"The progressive learning package utilizes representation ensembling algorithms to sequentially learn a representation for each task and ensemble both old and new representations for all future decisions. \n",
"\n",
"Here, a representation ensembling algorithm based on decision forests (Odif) and an algorithm based on neural networks (Odin) demonstrate forward and backward knowledge transfer of tasks on the Describable Textures Dataset (DTD). The original dataset can be found at https://www.robots.ox.ac.uk/~vgg/data/dtd/.\n",
"Here, a representation ensembling algorithm based on decision forests (SynF) and an algorithm based on neural networks (SynN) demonstrate forward and backward knowledge transfer of tasks on the Describable Textures Dataset (DTD). The original dataset can be found at https://www.robots.ox.ac.uk/~vgg/data/dtd/.\n",
"\n",
"### Import necessary packages and modules"
]
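The sequential protocol described above can be sketched as follows; this is illustrative only (the notebook delegates the real loop to a helper module), and learner, acc, num_tasks, and the data arrays are placeholder names rather than variables defined in this notebook.

import numpy as np

for i in range(num_tasks):
    # learn a new representation from task i and add it to the ensemble
    learner.add_task(
        X=train_x[i], y=train_y[i], task_id=i,
        decider_kwargs={"classes": np.unique(train_y[i])},
    )
    # after adding task i, re-evaluate every task seen so far, so the new
    # representation can also improve decisions on earlier tasks
    for j in range(i + 1):
        acc[i][j] = np.mean(learner.predict(test_x[j], task_id=j) == test_y[j])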
@@ -167,8 +167,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Choose algorithm (odif or odin)\n",
"model = \"odin\""
"# Choose algorithm (synf or synn)\n",
"model = \"synn\""
]
},
{
@@ -205,9 +205,9 @@
" for z in range(which_task):\n",
" for y in range(1):\n",
" for x in range(4):\n",
" if model == \"odin\":\n",
" if model == \"synn\":\n",
" acc_x.append(acc[x][y][\"task_accuracy\"][z])\n",
" elif model == \"odif\":\n",
" elif model == \"synf\":\n",
" acc_x.append(acc[0][x][y][\"task_accuracy\"][z])\n",
" acc_y.append(np.mean(acc_x))\n",
" acc_x = []\n",
@@ -224,9 +224,9 @@
" for z in range((which_task - 1), 10):\n",
" for y in range(1):\n",
" for x in range(4):\n",
" if model == \"odin\":\n",
" if model == \"synn\":\n",
" acc_x.append(acc[x][y][\"task_accuracy\"][z])\n",
" elif model == \"odif\":\n",
" elif model == \"synf\":\n",
" acc_x.append(acc[0][x][y][\"task_accuracy\"][z])\n",
" acc_y.append(np.mean(acc_x))\n",
" acc_x = []\n",
@@ -267,7 +267,7 @@
"metadata": {},
"source": [
"### Plotting FTE, BTE, TE, and Accuracy\n",
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the Odif/Odin algorithms. "
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the SynF/SynN algorithms. "
]
},
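For reference, the three efficiencies plotted here are related error ratios (sketched with placeholder accuracy arrays and the usual proglearn definitions, not this notebook's exact variables):

# FTE(t): does learning tasks 1..t-1 first help task t?
fte_t = (1 - acc_task_alone[t]) / (1 - acc_after_tasks_up_to_t[t])
# BTE(t): do tasks learned after t retroactively improve performance on t?
bte_t = (1 - acc_after_tasks_up_to_t[t]) / (1 - acc_after_all_tasks[t])
# TE(t) combines both directions of transfer
te_t = fte_t * bte_t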
{
@@ -305,9 +305,9 @@
"n_tasks = 10\n",
"# clr = [\"#e41a1c\", \"#a65628\", \"#377eb8\", \"#4daf4a\", \"#984ea3\", \"#ff7f00\", \"#CCCC00\"]\n",
"# c = sns.color_palette(clr, n_colors=len(clr))\n",
"if model == \"odin\":\n",
"if model == \"synn\":\n",
" c = \"blue\"\n",
"elif model == \"odif\":\n",
"elif model == \"synf\":\n",
" c = \"red\"\n",
"\n",
"fontsize = 28\n",
@@ -420,7 +420,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.8.5"
}
},
"nbformat": 4,
14 changes: 7 additions & 7 deletions docs/experiments/fte_bte_flowers.ipynb
@@ -8,7 +8,7 @@
"\n",
"The progressive learning package utilizes representation ensembling algorithms to sequentially learn a representation for each task and ensemble both old and new representations for all future decisions. \n",
"\n",
"Here, a representation ensembling algorithm based on decision forests (Odif) and an algorithm based on neural networks (Odin) demonstrate forward and backward knowledge transfer of tasks on the Flowers dataset. The original dataset can be found at https://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html.\n",
"Here, a representation ensembling algorithm based on decision forests (SynF) and an algorithm based on neural networks (SynN) demonstrate forward and backward knowledge transfer of tasks on the Flowers dataset. The original dataset can be found at https://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html.\n",
"\n",
"### Import necessary packages and modules"
]
@@ -156,8 +156,8 @@
"metadata": {},
"outputs": [],
"source": [
"# Choose algorithm (odif or odin)\n",
"model = \"odin\""
"# Choose algorithm (synf or synn)\n",
"model = \"synn\""
]
},
{
@@ -251,7 +251,7 @@
"metadata": {},
"source": [
"### Plotting FTE, BTE, TE, and Accuracy\n",
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the Odif/Odin algorithms."
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the SynF/SynN algorithms."
]
},
{
@@ -289,9 +289,9 @@
"n_tasks = 10\n",
"# clr = [\"#e41a1c\", \"#a65628\", \"#377eb8\", \"#4daf4a\", \"#984ea3\", \"#ff7f00\", \"#CCCC00\"]\n",
"# c = sns.color_palette(clr, n_colors=len(clr))\n",
"if model == \"odin\":\n",
"if model == \"synn\":\n",
" c = \"blue\"\n",
"elif model == \"odif\":\n",
"elif model == \"synf\":\n",
" c = \"red\"\n",
"\n",
"fontsize = 28\n",
@@ -404,7 +404,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.8.5"
}
},
"nbformat": 4,
6 changes: 3 additions & 3 deletions docs/experiments/fte_bte_food101.ipynb
@@ -8,7 +8,7 @@
"\n",
"The progressive learning package utilizes representation ensembling algorithms to sequentially learn a representation for each task and ensemble both old and new representations for all future decisions. \n",
"\n",
"Here, a representation ensembling algorithm based on decision forests (Lifelong Forest) demonstrates forward and backward knowledge transfer of tasks on the food-101 dataset. The original dataset can be found at https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf and https://www.tensorflow.org/datasets/catalog/food101.\n",
"Here, a representation ensembling algorithm based on decision forests (SynF) demonstrates forward and backward knowledge transfer of tasks on the food-101 dataset. The original dataset can be found at https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/static/bossard_eccv14_food-101.pdf and https://www.tensorflow.org/datasets/catalog/food101.\n",
"\n",
"### Import necessary packages and modules"
]
@@ -256,7 +256,7 @@
"metadata": {},
"source": [
"### Plotting FTE, BTE, TE, and Accuracy\n",
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the Lifelong Classification Forest algorithm. "
"Run cell to generate a figure containing 4 plots of the forward transfer efficiency, backward transfer efficiency, transfer efficiency, and accuracy of the SynF algorithm. "
]
},
{
@@ -358,7 +358,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.8"
"version": "3.8.5"
}
},
"nbformat": 4,
14 changes: 7 additions & 7 deletions docs/experiments/functions/fte_bte_aircraft_bird_functions.py
@@ -15,7 +15,7 @@
)
from proglearn.voters import TreeClassificationVoter, KNNClassificationVoter
from sklearn.model_selection import train_test_split
from tensorflow.keras.backend import clear_session # To avoid OOM error when using odin
from tensorflow.keras.backend import clear_session # To avoid OOM error when using synn


def load_tasks(
@@ -107,13 +107,13 @@ def show_image(train_x_task):


def single_experiment(
train_x_task, test_x_task, train_y_task, test_y_task, ntrees=10, model="odif"
train_x_task, test_x_task, train_y_task, test_y_task, ntrees=10, model="synf"
):
num_tasks = 10
num_points_per_task = 1800
accuracies = np.zeros(65, dtype=float)

if model == "odin":
if model == "synn":

clear_session() # clear GPU memory before each run, to avoid OOM error

@@ -194,7 +194,7 @@ def single_experiment(
default_voter_kwargs = {"k": int(np.log2(num_points_per_task))}
default_decider_class = SimpleArgmaxAverage

elif model == "odif":
elif model == "synf":
for i in range(num_tasks):
train_x_task[i] = train_x_task[i].reshape(1080, -1)
test_x_task[i] = test_x_task[i].reshape(720, -1)
@@ -218,7 +218,7 @@ def single_experiment(
X=train_x_task[i],
y=train_y_task[i],
task_id=i,
num_transformers=1 if model == "odin" else ntrees,
num_transformers=1 if model == "synn" else ntrees,
transformer_voter_decider_split=[0.67, 0.33, 0],
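# [transformer, voter, decider] split of each task's training data:
# roughly 67% fits the representer, 33% fits the voters, none is held
# out for the decider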
decider_kwargs={"classes": np.unique(train_y_task[i])},
)
@@ -231,12 +231,12 @@ def single_experiment(
if j > i:
pass # this is not wrong but misleading, should be continue
else:
odif_predictions = progressive_learner.predict(
synf_predictions = progressive_learner.predict(
test_x_task[j], task_id=j
)
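# (the next assignment fills a flattened lower triangle: the accuracy of the
# learner trained through task i, evaluated on task j <= i, lands at index
# 10 + j + i * (i + 1) // 2; the first 10 slots are filled earlier in the run)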

accuracies[10 + j + (i * (i + 1)) // 2] = np.mean(
odif_predictions == test_y_task[j]
synf_predictions == test_y_task[j]
)
# print('single experiment done!')
