Tactile Generation and Gesture Recognition

This suite of algorithms enables scalable cross-modal tactile generation (vision to touch), intra-modal tactile translation (touch to touch), and tactile gesture recognition. NeRF-based rendering supplies viewpoint-consistent RGB-D data; a cGAN performs visuo-tactile translation (TactileGen); and Touch2Touch modules transform tactile images between sensors, conditions, or domains.
Temporal models classify touch gestures such as taps, press sequences, and sliding motions.
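The vision-to-touch path described above can be sketched as a data-pairing step: a NeRF-rendered RGB-D view is cropped around a contact point and normalized as conditioning input for the cGAN generator. This is a minimal illustrative sketch, not the project's actual API; `VisuoTactilePair`, `crop_condition`, and all field names are hypothetical.

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical sample layout for the vision-to-touch (TactileGen) pipeline:
# a NeRF-rendered RGB-D view paired with the tactile image the cGAN
# should produce at a given contact point. All names are illustrative.

@dataclass
class VisuoTactilePair:
    rgb: np.ndarray      # (H, W, 3) rendered color view
    depth: np.ndarray    # (H, W) rendered depth map
    contact_uv: tuple    # (row, col) pixel of the contact point
    tactile: np.ndarray  # (h, w) target tactile image

def crop_condition(pair: VisuoTactilePair, size: int = 32) -> np.ndarray:
    """Crop an RGB-D patch around the contact point as cGAN conditioning."""
    r, c = pair.contact_uv
    h = size // 2
    rgb = pair.rgb[r - h:r + h, c - h:c + h]
    depth = pair.depth[r - h:r + h, c - h:c + h, None]
    patch = np.concatenate([rgb, depth], axis=-1)  # (size, size, 4)
    # Normalize to [-1, 1], a common input range for GAN generators.
    lo, hi = patch.min(), patch.max()
    return 2.0 * (patch - lo) / (hi - lo + 1e-8) - 1.0

# Toy usage with random arrays standing in for NeRF renders.
pair = VisuoTactilePair(
    rgb=np.random.rand(128, 128, 3),
    depth=np.random.rand(128, 128),
    contact_uv=(64, 64),
    tactile=np.zeros((64, 64)),
)
cond = crop_condition(pair)
print(cond.shape)  # (32, 32, 4)
```

Because the conditioning patch is a plain array, the same cropping step also supports the dataset-scaling idea: any number of viewpoints rendered by the NeRF can be turned into paired training samples without physical touch collection.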

Features
- Zero-shot tactile generation from NeRF-rendered RGB-D
- Tactile-to-tactile generative translation
- High-fidelity tactile images for camera-based sensors
- Robustness to geometric transformations (rotations, reflections)
- Sensor-fault adaptation via background conditioning
- Scalable dataset generation from simulation
- Supports downstream tasks: classification, touch interfaces, surface exploration
- Gesture recognition using TCN/Transformer sequence models
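To make the temporal side concrete, here is a minimal sketch of the dilated causal convolution that underlies TCN-style gesture models, applied to a synthetic 1D contact-force trace. The kernel, dilation, and the tap/press signal are all assumptions for illustration; a real recognizer would stack several such layers with learned weights.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Causal 1D convolution: output at t only sees x[t], x[t-d], x[t-2d], ..."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])  # left-pad so no future leaks in
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Synthetic contact-force trace: two short taps followed by a long press.
trace = np.zeros(100)
trace[10:13] = 1.0  # tap
trace[30:33] = 1.0  # tap
trace[60:90] = 1.0  # press

# A smoothing kernel at dilation 4 spans a wider window, so it responds
# more strongly to the sustained press than to the brief taps.
y = causal_dilated_conv(trace, np.ones(3) / 3, dilation=4)
press_score = y[60:90].mean()
tap_score = y[10:13].mean()
print(press_score > tap_score)  # True
```

The dilation is what lets a stack of such layers cover long press-and-slide gestures with few parameters, which is the usual argument for TCNs over plain convolutions in sequence classification.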

Videos
Gesture recognition with a ToF sensor.
Gesture recognition mini-demo.