Project
This project is a real-time action classification mobile application developed for iOS devices using Swift. It uses machine learning, combining Create ML with the body pose points recognized by Apple's Vision framework, to predict punches such as the jab, cross, and hook. The app uses AVFoundation to capture live video frames from the device's camera and runs them through a custom action classification model trained with Create ML. When a jab, cross, or hook is detected with high confidence, the app plays an alert sound to notify the user. The user interface draws the recognized body pose points as green dots, giving real-time visual feedback during classification. I am currently working on improving the user interface, recognizing action combinations, and logging combination accuracy to a backend server.
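The sketch below outlines how this capture → pose → classify → alert pipeline can fit together, under stated assumptions: the model class name `ActionClassifier`, its `poses` input and `label`/`labelProbabilities` outputs, the 60-frame prediction window, and the 0.9 confidence threshold are placeholders for whatever the Xcode-generated class and Create ML training settings actually use.

```swift
import AVFoundation
import Vision
import CoreML
import AudioToolbox

/// Minimal sketch of the capture → pose → classify pipeline.
/// `ActionClassifier`, its `poses` input name, and the 60-frame
/// window are assumptions; substitute your generated model class.
final class ActionClassifierPipeline: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    private let session = AVCaptureSession()
    private let poseRequest = VNDetectHumanBodyPoseRequest()
    private var poseWindow: [MLMultiArray] = []
    private let windowSize = 60                       // frames per prediction (assumed)
    private let confidenceThreshold: Double = 0.9     // assumed "high confidence" cutoff

    // Assumed Xcode-generated class for the Create ML action classifier model.
    private let classifier = try? ActionClassifier(configuration: MLModelConfiguration())

    func startCapture() {
        session.beginConfiguration()
        guard
            let camera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back),
            let input = try? AVCaptureDeviceInput(device: camera),
            session.canAddInput(input)
        else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video.frames"))
        if session.canAddOutput(output) { session.addOutput(output) }
        session.commitConfiguration()
        session.startRunning()
    }

    // Called by AVFoundation for every captured frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // Run Vision's human body pose detection on the frame.
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        try? handler.perform([poseRequest])
        guard
            let observation = poseRequest.results?.first,
            let keypoints = try? observation.keypointsMultiArray()
        else { return }

        // Accumulate a sliding window of pose keypoints for the classifier.
        poseWindow.append(keypoints)
        guard poseWindow.count == windowSize else { return }
        defer { poseWindow.removeFirst(windowSize / 2) }  // half-overlap between predictions

        // Stack the per-frame arrays into the shape the model expects,
        // then classify the window.
        guard
            let input = try? MLMultiArray(concatenating: poseWindow, axis: 0, dataType: .float32),
            let prediction = try? classifier?.prediction(poses: input)
        else { return }

        // Play an alert sound when a punch is detected with high confidence.
        if let confidence = prediction.labelProbabilities[prediction.label],
           confidence >= confidenceThreshold {
            AudioServicesPlaySystemSound(SystemSoundID(1005))   // system alert tone
        }
    }
}
```

In a real app the pose points would also be forwarded to the preview layer to draw the green-dot overlay, and the capture session would be started off the main thread.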
Technologies
Swift
Create ML
Machine Learning
Apple Vision
AVFoundation