<!DOCTYPE html>
<html lang="en">
<head>
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link href="https://fonts.googleapis.com/css2?family=Roboto:ital,wght@0,100;0,300;0,400;0,500;0,700;0,900;1,100;1,300;1,400;1,500;1,700;1,900&display=swap" rel="stylesheet">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Robot Learning Symposium Karlsruhe</title>
<style>
body {
font-family: 'Roboto', sans-serif;
margin: 0;
padding: 0;
background-color: hsl(0, 0%, 98%);
}
.header {
position: relative;
background-image: url('images/image.png');
background-size: cover;
background-position: center;
color: #fff;
text-align: center;
padding: 100px 20px;
font-family: 'Roboto', sans-serif; /* Apply Roboto font */
}
.header::before {
content: '';
position: absolute;
top: 0;
left: 0;
width: 100%;
height: 100%;
background: rgba(0, 0, 0, 0.4); /* Dark overlay for better text contrast */
z-index: 1;
}
.header h1 {
position: relative;
font-size: 2.8em;
margin: 0;
color: #ffffff;
font-weight: 700;
z-index: 2;
text-shadow: 2px 2px 4px rgba(0, 0, 0, 0.5);
font-family: 'Roboto', sans-serif; /* Apply Roboto font */
}
.header .location {
position: relative;
font-size: 1.2em;
color: #e0e0e0;
margin-top: 10px;
font-weight: 400;
z-index: 2;
font-family: 'Roboto', sans-serif; /* Apply Roboto font */
}
.header p {
position: relative;
max-width: 600px;
margin: 20px auto 0;
font-size: 1.2em;
color: #f0f0f0;
line-height: 1.5;
font-weight: 300;
z-index: 2;
font-family: 'Roboto', sans-serif; /* Apply Roboto font */
}
@media (max-width: 768px) {
.header {
padding: 60px 20px;
}
.header h1 {
font-size: 2em;
}
.header .location, .header p {
font-size: 1em;
}
}
.content {
padding: 20px;
background-color: #f9f9f9;
}
.content h2 {
text-align: center;
color: #004c3f;
}
.schedule {
max-width: 800px;
margin: 0 auto;
}
.schedule-item {
display: flex;
justify-content: space-between;
align-items: center; /* Center vertically */
padding: 10px 0;
border-bottom: 1px solid #ccc;
}
.schedule-item:last-child {
border-bottom: none;
}
.time {
color: #666;
width: 100px;
text-align: right;
margin: 10px;
margin-right: 20px;
}
.event {
width: calc(100% - 100px);
}
.event-title {
font-weight: bold;
margin: 0;
}
.speaker {
color: #004c3f; /* Darker green to match the theme */
font-size: 1em; /* Slightly larger font */
font-weight: bold; /* Bolder font for emphasis */
margin: 5px 0; /* Add spacing around the speaker name */
}
.speakers {
max-width: 1000px;
margin: 0 auto;
padding: 20px 0;
}
.speaker-item {
display: flex;
align-items: flex-start;
margin-bottom: 60px;
flex-wrap: wrap; /* Allows items to wrap on smaller screens */
}
.speaker-image {
width: 25%; /* Default size as a percentage */
max-width: 200px;
height: auto; /* Keeps aspect ratio */
border-radius: 10%;
margin-right: 2%; /* Adds space between image and text */
object-fit: cover;
flex-shrink: 0;
}
.speaker-info {
width: 70%; /* Takes the remaining space next to the image */
max-width: 800px;
}
/* Responsive adjustments for smaller screens */
@media (max-width: 768px) {
.speaker-item {
flex-direction: column; /* Stack items vertically */
align-items: center; /* Center-align items on small screens */
text-align: center;
}
.speaker-image {
width: 50%; /* Adjust image width to half of the container on small screens */
max-width: 150px; /* Limit max width */
margin-right: 0; /* Remove right margin since items are stacked */
margin-bottom: 15px; /* Add spacing below the image */
}
.speaker-info {
width: 90%; /* Make text take up most of the width */
max-width: 100%; /* Allow text to take full width on small screens */
}
}
.speaker-name {
font-size: 1.4em;
font-weight: bold;
color: #004c3f;
margin: 0;
}
.speaker-title {
font-size: 1.1em;
color: #333;
margin-top: 5px;
}
.speaker-bio {
color: #666;
font-size: 0.9em;
margin-top: 10px;
}
.abstract {
color: #333;
font-size: 0.9em;
margin-top: 5px;
}
.talk-title {
font-size: 1.1em;
font-weight: bold;
color: #333;
margin: 3px 0; /* Adds spacing between title and speaker */
}
.white {
color: #ffffff;
font-family: 'Roboto', sans-serif;
}
</style>
</head>
<body>
<div class="header">
<h1 class="white">Robot Learning Symposium</h1>
<p class="location">InformatiKOM Karlsruhe, Atrium
<br> Adenauerring 12, Karlsruhe Institute of Technology
<br>November 12, 2024
</p> <!-- Location line -->
<p class="white">
Current robotic systems have fallen short of public expectations, with most deployed solutions still operating within confined behavioral boundaries.
The transition toward robots functioning seamlessly in everyday environments presents three key challenges that our symposium will explore: enabling flexible task execution through multimodal interactions,
developing computational frameworks that mirror human cognitive flexibility, and creating systems that continuously refine their skills while maintaining operational safety.
<br><br>Our speakers will showcase pioneering approaches that blend controllability, robot learning and exploration, and AI-enabled robot applications to address these fundamental challenges in modern robotics.
</p>
</div>
<div class="content">
<h2>Tentative schedule</h2>
<div class="schedule">
<div class="schedule-item">
<div class="time">10:00</div>
<div class="event">
<p class="event-title">Welcome</p>
</div>
</div>
<div class="schedule-item">
<div class="time">10:10</div>
<div class="event">
<p class="event-title">Talk</p>
<p class="talk-title">AI-enabled Robotics: Towards Real-World Applications</p>
<p class="speaker">Ajinkya Jain (Google Intrinsic)</p>
<p class="abstract">
While foundation models and VLMs show promise for dexterous robot manipulation,
their real-world applications remain limited, with a significant gap between research prototypes and practical demands.
This talk explores how to bridge this divide and achieve Technology Readiness Level (TRL) 7 and above for AI-powered robots.
We argue this is best achieved by combining these methods with the right hardware and infrastructure tools and focusing on building Artificial Specialized Intelligence (ASI) for specific manipulation domains first.
ASI offers key advantages: reduced data dependency, rapid training, and real-time control capabilities suitable for robotics.
With the right tools and a suite of ASIs, we can construct robust and versatile behavior generation models that are not only data-efficient and high-performing,
but also interpretable and reliable in real-world conditions. </p>
</div>
</div>
<div class="schedule-item">
<div class="time">10:50</div>
<div class="event">
<p class="event-title">Talk</p>
<p class="talk-title">Efficient Robot Learning and Exploration</p>
<p class="speaker">Rika Antonova (University of Cambridge)</p>
<p class="abstract">
In this talk, I will outline ingredients for enabling efficient robot learning.
First, I will demonstrate how large vision-language models can enhance scene understanding and generalization,
allowing robots to learn general rules from specific examples for handling everyday objects.
Then, I will describe a policy learning method that leverages equivariance to significantly reduce the amount of training data needed for learning from human demonstrations.
Moving beyond learning from demonstrations, we will explore how simulation can enable robots to learn autonomously.
I will describe the challenges and opportunities of bringing differentiable simulators closer to reality,
and contrast direct controller optimization in such adaptive simulators with reinforcement learning in 'black-box' simulators. To further expand robot capabilities,
we will consider adapting hardware. In particular, I will demonstrate how differentiable simulation can be used for learning tool morphology to automatically adapt tools for robots.
Finally, I will outline a vision of how new affordable and robust sensors can aid in learning and control,
how rapid prototyping can enable effective design iterations, and how scaling up exploration would let us tackle the vast design space of optimizing sensing, morphology, actuation,
and policy learning jointly.
I will conclude with examples of interdisciplinary collaborations where hardware, control, learning, and vision researchers jointly build solutions greater than the sum of their parts.
</p>
</div>
</div>
<div class="schedule-item">
<div class="time">11:30</div>
<div class="event">
<p class="event-title">Lunch Break</p>
</div>
</div>
<!-- <div class="schedule-item">
<div class="time">13:00</div>
<div class="event">
<p class="event-title">Poster Session</p>
<p class="speaker">Robot Learning at KIT</p>
</div>
</div> -->
<div class="schedule-item">
<div class="time">14:00</div>
<div class="event">
<p class="event-title">Talk</p>
<p class="talk-title">Controllability: The Universal Language of Sequential Decision-Making</p>
<p class="speaker">Caleb Chuck (University of Texas at Austin)</p>
<p class="abstract">
Controllability is a fundamental concept in sequential decision-making that transcends specific applications and unifies diverse domains, from robotics to artificial intelligence.
This concept is essential in reinforcement learning and control theory, as it underpins the agent’s ability to learn, adapt, and optimize decisions within complex, dynamic environments.
</p>
</div>
</div>
<div class="schedule-item">
<div class="time">14:50</div>
<div class="event">
<p class="event-title">Final Remarks</p>
</div>
</div>
</div>
<hr style="margin: 3% 0; border: 0; border-top: 4px solid #ccc;">
<h2>Speakers</h2>
<div class="speakers">
<div class="speaker-item">
<img src="images/ajinkya.png" alt="Ajinkya Jain" class="speaker-image">
<div class="speaker-info">
<p class="speaker-name">Ajinkya Jain</p>
<!-- <p class="speaker-title">Title</p> -->
<p class="speaker-bio">
Ajinkya Jain is a senior robotics researcher at Intrinsic, an Alphabet company. His research focuses on developing and applying robot learning methods for dexterous robot manipulation.
Before joining Intrinsic, he received his Ph.D. in Robotics from the University of Texas at Austin,
where he focused on algorithms for learning object interaction models from visual data, along with robust motion planning strategies for manipulating such objects under uncertainty.
</p>
</div>
</div>
<div class="speaker-item">
<img src="images/rika.png" alt="Rika Antonova" class="speaker-image">
<div class="speaker-info">
<p class="speaker-name">Rika Antonova</p>
<!-- <p class="speaker-title">Title</p> -->
<p class="speaker-bio">
Rika Antonova is an Associate Professor at the University of Cambridge. Her research interests include data-efficient reinforcement learning algorithms, active learning and exploration, and robotics.
Previously, Rika was a postdoctoral scholar at Stanford University, supported by the NSF/CRA Computing Innovation Fellowship from the US National Science Foundation.
Rika completed her PhD at KTH Stockholm in the Division of Robotics, Perception and Learning, and earlier obtained a research Master's degree from the Robotics Institute at Carnegie Mellon University.
Before that, Rika was a senior software engineer at Google, first on the Search Personalization team and then on the Character Recognition team, where she helped develop the open-source OCR engine Tesseract.
</p>
</div>
</div>
<div class="speaker-item">
<img src="images/caleb.png" alt="Caleb Chuck" class="speaker-image">
<div class="speaker-info">
<p class="speaker-name">Caleb Chuck</p>
<!-- <p class="speaker-title">Embodied Multimodal Intelligence with Foundation Models</p> -->
<p class="speaker-bio">
Caleb Chuck recently defended his PhD thesis in the Computer Science Department at the University of Texas at Austin. He is a member of the Personal Autonomous Robotics Lab (PeARL), led by Professor Scott Niekum.
His research focuses on better understanding how robots can complement humans, and he develops hierarchical, object-centric methods to improve robotic manipulation.
</p>
</div>
</div>
</div>
</body>
</html>