Team Fanatics
Team Leader: Arpit Kumar
Project: Marker-less Motion Capture
Problem Statement:
With the growing influence of the entertainment and cinema industry, there is an exponential demand for digital artists (VFX, 3D modelling, 3D animation, etc.). This brings with it a need for motion capture when creating digital effects.
Traditional motion-capture techniques demand significant capital and resources in the form of ‘MO-CAP’ suits and heavy-grade equipment that beginner enthusiasts cannot afford, and they are also complicated to use.
Proposed Solution:
To create a program that tracks the body movements of a human and maps them onto a skeleton rig, using just a camera, so the result can be used for various purposes.
The methodology is to use Computer Vision to let the computer read video input, either live from the device's camera or from a video file uploaded manually from the device's storage, and to use game development in Unity3D to integrate the 3D animation with the Computer Vision output.
Pipeline:
INPUT VIDEO → MediaPipe algorithm (through Tkinter GUI and OpenCV) → Extraction of coordinates into a .txt file → Unity 3D model (through C# scripts in Unity)
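As a rough sketch of this pipeline, the extraction step might look like the following Python. The function names, the one-landmark-per-line .txt layout (`frame landmark x y z`), and the parsing helper are assumptions for illustration; the poster does not specify the exact file format.

```python
def landmarks_to_lines(frame_idx, landmarks):
    """Serialize one frame's pose landmarks to text lines.

    `landmarks` is a list of (x, y, z) tuples. The 'frame landmark x y z'
    layout is an assumed format, not the project's documented one.
    """
    return [
        f"{frame_idx} {i} {x:.5f} {y:.5f} {z:.5f}"
        for i, (x, y, z) in enumerate(landmarks)
    ]


def parse_lines(lines):
    """Rebuild {frame_idx: [(x, y, z), ...]} from serialized lines,
    mirroring what the Unity-side C# reader would have to do."""
    frames = {}
    for line in lines:
        f, i, x, y, z = line.split()
        frames.setdefault(int(f), []).append((float(x), float(y), float(z)))
    return frames


def extract_to_txt(video_path, out_path):
    """Run MediaPipe Pose over a video file and dump coordinates to a .txt.

    Requires the third-party `mediapipe` and `opencv-python` packages;
    they are imported inside the function so the serialization helpers
    above stay usable without them.
    """
    import cv2
    import mediapipe as mp

    with mp.solutions.pose.Pose() as pose, open(out_path, "w") as out:
        cap = cv2.VideoCapture(video_path)
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                pts = [(lm.x, lm.y, lm.z) for lm in result.pose_landmarks.landmark]
                out.write("\n".join(landmarks_to_lines(frame_idx, pts)) + "\n")
            frame_idx += 1
        cap.release()
```

The plain-text intermediate keeps the two halves of the pipeline decoupled: the Python side only writes lines, and the Unity C# script only reads them.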
USE CASES:
Gaming:
Education:
Commercial Advertising:
Creature Animation:
Cartoon and Animated Feature Films:
Educational Animation:
Anatomy and Physiology: Body movement tracking is utilized in educational animations to accurately represent human anatomy and physiology, providing students with a visual and interactive understanding of how the body moves.
Web Series and Online Content:
Digital Content Creation: Motion capture technologies are increasingly accessible, enabling independent creators to incorporate realistic animations into web series, online content, and short films.
ADVANTAGES
Cross-platform support ensures that the application runs seamlessly across different operating systems, enhancing accessibility and usability.
A user can create an animation by uploading a video from a device into the program; the video is then passed through various OpenCV functions with the help of MediaPipe.
MediaPipe then identifies the 33 pose landmarks that represent a human body in the video and writes their coordinates to a text (.txt) file, which can be used in a pre-made Unity 3D project.
Since OpenCV can also read input from the camera of the user's device, the program can live-track the movements of a person's body, extract the coordinates of the 33 MediaPipe landmarks in real time, and deploy them to Unity3D.
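The live-tracking path can be sketched in the same way. The loop below uses OpenCV's default webcam index 0 and MediaPipe's `solutions.pose` API; the axis-flip helper is an illustrative assumption, since the poster does not document how coordinates are mapped into Unity's space.

```python
def to_unity_coords(x, y, z):
    """Flip the vertical axis: MediaPipe's normalized image coordinates
    grow downward, while Unity's Y axis grows upward. This mapping is an
    illustrative assumption, not the project's documented conversion."""
    return (x, 1.0 - y, z)


def live_track(out_path="live_landmarks.txt", camera_index=0):
    """Stream webcam frames through MediaPipe Pose and append the 33
    landmark coordinates per frame to a text file.

    Requires the third-party `mediapipe` and `opencv-python` packages;
    press 'q' in the preview window to stop.
    """
    import cv2
    import mediapipe as mp

    with mp.solutions.pose.Pose() as pose, open(out_path, "w") as out:
        cap = cv2.VideoCapture(camera_index)
        frame_idx = 0
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            result = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if result.pose_landmarks:
                for i, lm in enumerate(result.pose_landmarks.landmark):
                    x, y, z = to_unity_coords(lm.x, lm.y, lm.z)
                    out.write(f"{frame_idx} {i} {x:.5f} {y:.5f} {z:.5f}\n")
            cv2.imshow("preview", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
            frame_idx += 1
        cap.release()
        cv2.destroyAllWindows()
```

For true live animation, the per-frame lines would be streamed to Unity (e.g. over a socket) rather than read from the file after the fact; the file output shown here matches the pipeline the poster describes.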
Dependencies:
2. It needs Unity 3D for the implementation.
3. It requires the latest versions of OpenCV and Tkinter.
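A minimal setup sketch for the Python-side dependencies (package names assumed from the list above; Tkinter ships bundled with most Python installers):

```shell
# Python-side dependencies for the tracking/extraction step
pip install opencv-python mediapipe
# Unity 3D is installed separately via Unity Hub
```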
Tech Stack: