Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

This post will show up by default. To disable scheduling of future-dated posts, edit _config.yml and set future: false.
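
For concreteness, the setting in question is a single key in the site's Jekyll configuration. A minimal sketch of the relevant _config.yml fragment, with all surrounding keys omitted:

```yaml
# _config.yml
# This site shows future-dated posts by default; setting the key to false
# hides them until their publish date arrives.
future: false
```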

Blog Post number 4

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

This is a sample blog post. Lorem ipsum; I can't remember the rest of lorem ipsum, and I don't have an internet connection right now. Testing, testing, testing this blog post. Blog posts are cool.

Portfolio

Publications

MotionFix: Text-Driven 3D Human Motion Editing

Published in SIGGRAPH Asia 2024

The MotionFix dataset is the first benchmark for text-driven 3D human motion editing. It contains triplets of a source motion, a target motion, and an edit text that describes the desired modification. Our dataset supports both training and evaluation of models for text-based motion editing. TMED, a conditional diffusion model trained on MotionFix, performs the edit using both the source motion and the edit text.
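
As an illustration of the conditioning pattern described above, here is a minimal, hypothetical sketch of a denoiser that takes a noisy target motion, the source motion, and an edit-text embedding, mirroring the triplet format. Every name, layer choice, and dimension is an assumption made for illustration, not the authors' TMED architecture:

```python
# Hypothetical sketch: a denoiser conditioned on the source motion and an
# edit-text embedding, as the blurb describes. Shapes and modules are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class EditDenoiser(nn.Module):
    def __init__(self, pose_dim=135, text_dim=512, hidden=256):
        super().__init__()
        self.src_enc = nn.GRU(pose_dim, hidden, batch_first=True)
        self.txt_proj = nn.Linear(text_dim, hidden)
        self.step_emb = nn.Embedding(1000, hidden)  # diffusion timestep embedding
        self.backbone = nn.GRU(pose_dim + hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, pose_dim)

    def forward(self, noisy_target, source, text_emb, t):
        # Summarize the source motion with the final GRU state, then fuse it
        # with the edit-text embedding and the timestep embedding.
        _, h_src = self.src_enc(source)                                # (1, B, H)
        cond = h_src[-1] + self.txt_proj(text_emb) + self.step_emb(t)  # (B, H)
        cond_seq = cond.unsqueeze(1).expand(-1, noisy_target.size(1), -1)
        out, _ = self.backbone(torch.cat([noisy_target, cond_seq], dim=-1))
        return self.head(out)  # predicted noise, same shape as noisy_target

# One (source, target, edit text) triplet batch, in the spirit of MotionFix.
B, T, D = 2, 60, 135
model = EditDenoiser()
eps_hat = model(torch.randn(B, T, D), torch.randn(B, T, D),
                torch.randn(B, 512), torch.randint(0, 1000, (B,)))
print(eps_hat.shape)  # torch.Size([2, 60, 135])
```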

WANDR: Goal-Reaching Human Motion Generation

Published in CVPR 2024

WANDR is a conditional Variational AutoEncoder (c-VAE) that generates realistic motions of human avatars that navigate toward an arbitrary goal location and reach for it. The inputs to our method are the initial pose of the avatar, the goal location, and the desired motion duration. The output is a sequence of poses that guides the avatar from the initial pose to the goal location and places the wrist on it.
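
A minimal sketch of that interface, assuming a GRU-based conditional VAE whose condition vector bundles the initial pose, the goal location, and the duration. All shapes and layer choices are illustrative assumptions rather than the WANDR implementation:

```python
# Toy conditional VAE matching the described interface: condition on
# (initial pose, goal xyz, duration), decode a pose sequence.
import torch
import torch.nn as nn

class GoalReachingCVAE(nn.Module):
    def __init__(self, pose_dim=63, latent=32, hidden=128):
        super().__init__()
        cond_dim = pose_dim + 3 + 1  # initial pose + goal xyz + duration
        self.enc = nn.GRU(pose_dim + cond_dim, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.dec = nn.GRU(latent + cond_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, pose_dim)

    def decode(self, z, cond, T):
        # Tile the latent and condition over the requested number of frames.
        seq = torch.cat([z, cond], -1).unsqueeze(1).expand(-1, T, -1)
        h, _ = self.dec(seq)
        return self.out(h)  # (B, T, pose_dim)

    def forward(self, motion, init_pose, goal, duration):
        cond = torch.cat([init_pose, goal, duration], -1)      # (B, cond_dim)
        cond_seq = cond.unsqueeze(1).expand(-1, motion.size(1), -1)
        _, h = self.enc(torch.cat([motion, cond_seq], -1))
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
        return self.decode(z, cond, motion.size(1)), mu, logvar

B, T = 2, 90
model = GoalReachingCVAE()
recon, mu, logvar = model(torch.randn(B, T, 63), torch.randn(B, 63),
                          torch.randn(B, 3), torch.full((B, 1), 3.0))
print(recon.shape)  # torch.Size([2, 90, 63])
```

At generation time one would sample z from the prior and call decode with the desired condition, which is why the condition is fed to both encoder and decoder here.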

NIL: No-data Imitation Learning by Leveraging Pre-trained Video Diffusion Models

Published on arXiv, 2025

NIL introduces a data-independent approach to motor skill acquisition that learns 3D motor skills from generated 2D videos and generalizes to unconventional and non-human body forms. We guide the imitation-learning process by leveraging vision transformers for video-based comparisons, and we demonstrate that NIL outperforms baselines trained on 3D motion-capture data on humanoid-robot locomotion tasks.
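
To illustrate the video-comparison idea in a toy form: the snippet below scores a rollout by the mean cosine similarity between per-frame features of the rollout video and a generated reference video. The encoder is a random stand-in so the example stays self-contained; the description above implies a pretrained vision transformer would take its place:

```python
# Toy sketch of video-similarity guidance: reward a policy rollout by how
# closely its rendered frames match a reference video in feature space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrameEncoder(nn.Module):
    # Stand-in for a pretrained ViT image encoder (an assumption, not the paper's model).
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))

    def forward(self, frames):  # (T, 3, H, W) -> (T, dim), L2-normalized
        return F.normalize(self.net(frames), dim=-1)

def video_similarity_reward(encoder, rollout_frames, reference_frames):
    """Mean per-frame cosine similarity between rollout and reference videos."""
    with torch.no_grad():
        a = encoder(rollout_frames)
        b = encoder(reference_frames)
    return (a * b).sum(-1).mean().item()  # cosine similarity of unit vectors

enc = FrameEncoder()
r = video_similarity_reward(enc, torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64))
print(f"reward: {r:.3f}")
```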

Talks

Teaching

Stochastics and Machine Learning

Undergraduate course, ETH Zurich, Department of Computer Science, 2025

Teaching Assistant for Stochastics and Machine Learning (252-0870-00L) at ETH Zurich during the Spring 2025 semester.