# Modern spatial computing uses hand tracking as primary input

**Date:** 2025-12-18  
**Tags:** XR, UI, Gestures  
**URL:** https://kelexine.is-a.dev/til/hand-tracking-spatial-ui

---

TIL: Modern spatial computing platforms use hand tracking as the primary input. The core gestures are pinch (select), pinch-and-drag (move), direct tap (press), and gaze+pinch (look at a target, pinch to confirm). No controllers needed - your hands are the interface. Apple Vision Pro and Meta Quest 3 both support fully controller-free interaction.
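
The pinch itself can be classified from hand-joint positions. A minimal sketch, assuming joint positions in metres (for example the WebXR Hand Input API's `thumb-tip` and `index-finger-tip` joints); the 2 cm threshold is an assumption to tune per device:

```javascript
const PINCH_THRESHOLD_M = 0.02; // ~2 cm tip-to-tip; assumed value, tune per device

// Euclidean distance between two {x, y, z} joint positions
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns 'pinch' when the thumb and index fingertips touch, else 'open'
function detectPinch(thumbTip, indexTip) {
  return distance(thumbTip, indexTip) < PINCH_THRESHOLD_M ? 'pinch' : 'open';
}
```

In practice you would also debounce the threshold (hysteresis) so the gesture does not flicker at the boundary.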
```javascript
// Hand-tracking gesture dispatch (sketch: detectGesture, handleSelect, etc.
// stand in for your app's own gesture classifier and handlers)
const gesture = detectGesture(handPose);            // classify the current hand pose
switch (gesture) {
  case 'pinch':      handleSelect(gazeTarget); break; // gaze picks, pinch confirms
  case 'pinch_drag': handleDrag(delta);        break; // move the held object
  case 'palm_up':    showMenu();               break; // common system-menu gesture
  case 'point':      castRay(fingerDirection); break; // ray from the index finger
}
```
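
The gaze+pinch pattern splits targeting from confirmation: the gaze ray picks the target, the pinch commits. A sketch of the targeting half, assuming sphere-shaped targets and a unit-length gaze direction (all names here are illustrative, not a platform API):

```javascript
// Pick the target whose centre lies closest to the gaze ray.
// rayOrigin/rayDir are {x, y, z}; rayDir is assumed unit length;
// each target is {x, y, z, radius} (assumed shape, not a real API).
function gazeSelect(rayOrigin, rayDir, targets) {
  let best = null, bestDist = Infinity;
  for (const t of targets) {
    // Vector from ray origin to target centre
    const vx = t.x - rayOrigin.x, vy = t.y - rayOrigin.y, vz = t.z - rayOrigin.z;
    // Projection of that vector onto the gaze direction
    const along = vx * rayDir.x + vy * rayDir.y + vz * rayDir.z;
    if (along < 0) continue; // target is behind the viewer
    // Perpendicular distance from the ray to the target centre
    const px = vx - along * rayDir.x, py = vy - along * rayDir.y, pz = vz - along * rayDir.z;
    const d = Math.sqrt(px * px + py * py + pz * pz);
    if (d < t.radius && d < bestDist) { best = t; bestDist = d; }
  }
  return best; // confirm on pinch, e.g. if (gesture === 'pinch') handleSelect(best)
}
```

This is why the pattern feels fast: the eyes already point at the target, so the hand only has to perform a tiny, position-independent confirmation gesture.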
---

*This content is available at [kelexine.is-a.dev/til/hand-tracking-spatial-ui](https://kelexine.is-a.dev/til/hand-tracking-spatial-ui)*
