08/2025:
🥳
Our paper "Cross-Sensor Touch Generation" is selected as an oral presentation at CoRL 2025!
08/2025:
🎉
One paper accepted to CoRL 2025! See you in Seoul!
02/2025:
🎉
One paper accepted to CVPR 2025! See you in Nashville!
01/2025:
🎉
Two papers accepted to ICRA 2025! See you in Atlanta!
09/2024:
🥳
Selected as an Outstanding Reviewer for ECCV 2024!
02/2024:
🎉
Three papers accepted to CVPR 2024! See you in Seattle!
Research Interests
Humans perceive the world through multiple senses,
from which we form abstract concepts to understand it.
Building on these concepts, we develop the ability to reason logically,
and thus create brilliant achievements.
Inspired by this,
my dream is to design human-like multisensory intelligent systems,
a goal I break down into four specific problems:
Multimodal Perception:
how to perceive and model
the multimodal physical world.
Concept Learning:
how to abstract the perceived information into high-level concepts.
Reasoning:
how to perform causal reasoning on the basis of concepts.
Robot Learning:
how to enable robots to actively interact with real-world environments and humans.
We learn to translate touch signals captured from one touch sensor to another, which allows us to transfer object manipulation policies between sensors.
We introduce a benchmark suite for multisensory object-centric learning with sight, sound, and touch.
We also introduce a dataset of multisensory measurements for real-world objects.
We introduce UniTouch, a unified tactile representation for vision-based touch sensors connected to multiple modalities, including vision, language, and sound.
Cornell Tech
2025.08 ~ Present
New York, U.S.
Ph.D. Student in Computer Science
Advisor: Prof. Andrew Owens
University of Michigan
2023.08 ~ 2025.08
Ann Arbor, U.S.
M.S.E. in Computer Science and Engineering
Advisor: Prof. Andrew Owens
Completed the first two years of my Ph.D. before transferring to Cornell
Stanford University
2022.03 ~ 2023.04
Stanford, U.S.
Visiting Research Intern
Supervisors: Prof. Ruohan Gao,
Prof. Jiajun Wu,
and Prof. Fei-Fei Li
Shanghai Jiao Tong University
2019.09 ~ 2023.06
Shanghai, China
B.Eng. (Honors) in Computer Science and Technology
B.Ec. (Minor) in Economics
Member of Zhiyuan Honors Program
Supervisor: Prof. Cewu Lu and Prof. Yong-Lu Li
As a person working on building multisensory systems,
I also enjoy being a multisensory embodied agent outside of work:
👁 Photography:
I've been learning to take photos since I was 7 years old,
and have been fortunate to capture some memorable moments along the way.
See some of them here!
👂 Classical music:
I love listening to classical music, especially works from
the Viennese Classical period through the Romantic era.
Some favorite pieces (in alphabetical order):
Beethoven: Piano Concerto No. 3 in C Minor
Beethoven: Symphony No. 7 in A Major
Brahms: Symphony No. 1 in C Minor
Mahler: Symphony No. 1 in D Major "Titan"
Mendelssohn: Piano Concerto No. 1 in G Minor
Saint-Saëns: Piano Concerto No. 2 in G Minor
💪 Tennis:
Despite having played for 2+ years,
I still regard myself as a beginner (probably around NTRP 3.0?),
but I really enjoy it and look forward to getting better!