Beyond the Mouse: A Real-Time Gesture-Controlled AI System (Lì Ào Engine)
Source: DEV Community
At 3 AM in a freezing dorm room in China, staring at a terminal full of Python logs, I asked myself a simple question: what if the air around me was the interface? That question led me to build Lì Ào Engine (利奥), a multimodal Human-Computer Interaction (HCI) system that transforms gestures, voice, and intent into real-time digital actions.

## Philosophy: From Constraint to Freedom

Traditional input devices, the mouse and keyboard, are powerful but limiting. They confine interaction to surfaces. I wanted to break that boundary. Lì Ào Engine is built on a simple idea: human intention should be the interface.

## System Architecture (Clean & Scalable)

This is not a prototype-level project. I designed it with modularity and scalability in mind.

### Core Structure

- `src/system` → Core rendering layers (AI, Board, Layers, Media, Remote)
- `src/features` → Independent modules (ai, draw, galasy, gesture, image, move, prediction, remote, upload)
- `src/hooks` → Custom hooks for performance-critical logic
  - `useImage`
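To make the feature-module idea concrete, here is a minimal sketch of how independent modules under `src/features` could register gesture bindings with a central dispatcher. All names here (`Gesture`, `Action`, `dispatch`, the specific gesture strings) are illustrative assumptions, not taken from the actual Lì Ào Engine codebase.

```typescript
// Hypothetical gesture-to-action dispatch, mirroring the idea of
// independent feature modules (draw, media, ai, ...) each owning
// the commands it responds to.
type Gesture = "pinch" | "swipe_left" | "swipe_right" | "open_palm";

type Action = { feature: string; command: string };

// Each entry stands in for a feature module registering a binding.
const bindings = new Map<Gesture, Action>([
  ["pinch", { feature: "draw", command: "start_stroke" }],
  ["swipe_left", { feature: "media", command: "previous" }],
  ["swipe_right", { feature: "media", command: "next" }],
  ["open_palm", { feature: "ai", command: "open_assistant" }],
]);

// Resolve a recognized gesture to the action its module should run.
function dispatch(gesture: Gesture): Action | undefined {
  return bindings.get(gesture);
}
```

Keeping the recognition layer ignorant of what each feature does, and letting modules register their own bindings, is one way to get the modularity and scalability described above: adding a new feature means adding a folder and a binding, not touching the core.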