
Team Fanatics

Team Leader: Arpit Kumar

Project: Marker-less Motion Capture

Problem Statement:

With the growing influence of the entertainment and cinema industry, there is exponential demand for digital artists (VFX, 3D modelling, 3D animation, etc.). This brings with it a need for motion capture to create digital effects.

Motion capture techniques demand a lot of capital and resources in the form of ‘MO-CAP’ suits and heavy-grade equipment that beginner enthusiasts cannot afford; they are also too complicated to use.

Proposed Solution:

To create a program that tracks the body movements of a human and maps them onto a skeleton rig, which can then be used for various purposes, using just a camera.

The methodology is to use Computer Vision to let the computer read video input either live from the device's camera or from a video file uploaded manually from the device's storage, and to use game development in Unity3D to integrate the 3D animation with the Computer Vision output.

 


Pipeline:

Input video → MediaPipe algorithm (through the Tkinter GUI and OpenCV) → extraction of coordinates into a .txt file → Unity 3D model (through C# scripts in Unity)

  • The video of the target is provided by the user, either live or pre-recorded.
  • The video is run through the MediaPipe algorithm to track 33 landmarks on the human body.
  • The landmark coordinates are then saved into a text file.
  • A C# script in Unity reads the file and applies the coordinates to a 3D model, causing the model to move the way the person does in the input video.
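The text file that links the two halves of the pipeline needs an agreed layout. The format sketched below is an illustrative assumption (the document does not specify one): one line per frame, with the 33 landmarks written as whitespace-separated x y z floats.

```python
# Sketch of the intermediate .txt exchange format between the Python
# tracker and the Unity C# reader. The layout is an assumption:
# one line per frame, 33 landmarks as whitespace-separated x y z floats.

NUM_LANDMARKS = 33

def frame_to_line(landmarks):
    """Serialize one frame: a list of 33 (x, y, z) tuples -> one text line."""
    if len(landmarks) != NUM_LANDMARKS:
        raise ValueError(f"expected {NUM_LANDMARKS} landmarks, got {len(landmarks)}")
    return " ".join(f"{x:.5f} {y:.5f} {z:.5f}" for x, y, z in landmarks)

def line_to_frame(line):
    """Parse one text line back into a list of 33 (x, y, z) tuples."""
    values = [float(v) for v in line.split()]
    if len(values) != NUM_LANDMARKS * 3:
        raise ValueError("malformed frame line")
    return [tuple(values[i:i + 3]) for i in range(0, len(values), 3)]
```

The C# script in Unity would mirror `line_to_frame`, reading one line per frame and applying each x y z triple to the matching joint of the rig.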

USE CASES:

Gaming:

  • Gesture-Based Gaming: Body movement tracking enables users to control and interact with games using natural gestures. This immersive experience enhances user engagement and brings a new dimension to gaming.
  • Virtual Reality (VR): In VR applications, body movement tracking enhances the sense of presence by allowing users to navigate virtual environments using their natural body movements.

Education:

  • Interactive Learning: Body tracking facilitates interactive learning environments where students can engage with digital content through physical gestures. It enhances understanding and retention of educational material.
  • Physical Education: In physical education settings, tracking body movements can provide valuable insights into students' performance in various activities.

 

Commercial Advertising:

  • Product Presentations: Companies use motion capture to create animated product presentations where characters interact with and demonstrate products in a lifelike manner.

Creature Animation:

  • Animal and Fantasy Creatures: Body movement tracking is not limited to human characters; it is also used to animate animals and fantastical creatures. Animators can reference real animal movements or create unique, otherworldly animations.

Cartoon and Animated Feature Films:

  • Expressive Character Animation: Body movement tracking provides animators with a reference for creating expressive and dynamic movements on animated characters, ensuring a more engaging narrative.
  • Dance Sequences: Animators use motion capture to create intricate dance sequences in animated films, capturing the fluidity and precision of human movement.

Educational Animation:

  • Anatomy and Physiology: Body movement tracking is utilized in educational animations to accurately represent human anatomy and physiology, providing students with a visual and interactive learning experience.

Web Series and Online Content:

  • Digital Content Creation: Motion capture technologies are increasingly accessible, enabling independent creators to incorporate realistic animations into web series, online content, and short films.


ADVANTAGES

  1. Video Input Processing:
    • Utilizes OpenCV for capturing and processing video frames from a camera or device.
    • Integrates Mediapipe for advanced computer vision tasks such as hand tracking, pose estimation, or keypoint detection.

 

  2. Unity3D Integration:
    • Transforms processed video input into dynamic 3D animations within Unity3D.
    • Establishes a communication channel between the Python application and Unity3D for seamless data transmission.

  

  3. Cross-Platform Compatibility:

Ensures that the application runs seamlessly across different operating systems, enhancing accessibility and usability.

 

  4. Multiple ways of input:
      • By Uploading Video:

A user can create an animation by uploading a video from the device into the program; the video is then passed through the various OpenCV functions with the help of MediaPipe.

MediaPipe then identifies the 33 landmarks in the video that represent a human body and writes their coordinates into a text (.txt) file, which can be used in a pre-made Unity 3D project.

 

      • Live tracking and Animating via Camera:

Since OpenCV can take input from the camera of the user's device, we use this ability to live-track the movements of a person's body, extract the coordinates of the 33 MediaPipe landmarks, and then deploy them to Unity3D.

 

  5. Future Development Considerations:

 

    • Outlines possibilities for future enhancements or features that could be integrated into the application.
    • Identifies areas for potential expansion, such as additional computer vision functionalities or enhanced user customization options.

 

Dependencies:

  1. It still requires a pre-made 3D model rigged specifically for the program.
  2. It needs Unity 3D for the implementation.
  3. It requires recent versions of OpenCV and Tkinter.
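The pip-installable dependencies could be pinned with a requirements file along these lines (Tkinter ships with Python itself, so only the packages below need installing; the version pins are illustrative, not tested minimums):

```
# requirements.txt -- illustrative version pins
opencv-python>=4.8
mediapipe>=0.10
```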

Tech Stack: