HOBGOBLIN General info - TokyoJab Nonsense

I have been trying to brush up on my Blender skills and this project was part of that, so I'll start with the Blender side.

But apart from the digital head shape, everything else artistic was done in Automatic1111 with my usual method and some small changes. You can skip the Blender stuff and wear a box on your head with ping pong balls stuck to it; it will work the same way when you get the AI to draw over it. I also added links to AR stuff that gives a similar effect and can be drawn over with Stable Diffusion.

The Blender part:

I was playing with the Live Link Face app by Unreal. It is meant for use with MetaHumans in Unreal Engine, which I don't have much experience with. However, I noticed that it saves out a CSV file that is quite readable and contains FACS morph information. These are the same standard morphs used by Apple's ARKit, which I have created before. The iPhone app is free and can record up to about 2 metres (5-6 feet) from the camera at 60fps, which was plenty for my test. It saves the video and the CSV data into a zip file.

I spent half a day with ChatGPT coming up with a Python script I could run in Blender that would read this CSV information and, with a few small changes, transfer it to a head (see the end of this doc). The head needs to have the standard FACS morphs as shape keys; you can find models with these online. Found info here. After the sixth iteration it worked and all the face data from the iPhone recording was transferred to the model. Note: there is a free plugin that does something similar, but I like the control of using my own code.

Using Blender on my original video, I then tracked my head position using my ears and hair and nose. NOT the mouth or chin or eyes, as these move around a lot when being expressive. That gave me a general head location and rotation in 3D space, which I used to position my digital head.

I used EEVEE to render out the head pics as quickly as possible.
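
For reference, the CSV that Live Link Face exports is plain enough to poke at with a few lines of Python before committing to a full script. This is just a sketch for inspecting a take; the filename is a made-up placeholder and the exact column list depends on your recording.

import csv

# Peek at a Live Link Face take (filename is a hypothetical placeholder).
with open('MySlate_1_iPhone.csv', newline='') as f:
    reader = csv.DictReader(f)
    # Typically: Timecode, BlendShapeCount, then the ARKit morphs
    # (EyeBlinkLeft, JawOpen, ..., HeadYaw, HeadPitch, HeadRoll).
    print(reader.fieldnames)
    first_frame = next(reader)
    print(first_frame['JawOpen'])  # per-frame morph weights, roughly 0-1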

Automatic1111:

In Automatic1111 I used Segment Anything and Grounding DINO on the video to mask out the hands in every frame, then the head, then the shirt. Grounding DINO lets you use plain English to do this. Very handy. I used my usual method on each of these separate pieces (the temporal consistency grid method). The reason I separated them is that I can use fewer keyframes overall. The AI draws a lot of extra info (see Mr Moustache in the brown leather shirt grid) but I mask out all of that rubbish later.
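
If the grid method is new to you, the core trick is just tiling keyframes into one big image so Stable Diffusion stylises them all in a single, consistent pass. As a rough sketch (the filenames, tile size and grid shape are placeholders, not my exact settings):

from PIL import Image

# Tile 16 keyframes into a 4x4 grid for a single img2img pass.
TILE, COLS, ROWS = 512, 4, 4
grid = Image.new('RGB', (TILE * COLS, TILE * ROWS))
for i in range(COLS * ROWS):
    frame = Image.open(f'frame_{i:03d}.png').resize((TILE, TILE))
    grid.paste(frame, ((i % COLS) * TILE, (i // COLS) * TILE))
grid.save('keyframe_grid.png')  # stylise this, then slice it back into frames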

Quick note: for the first time in one of my projects I used LCMs exclusively. It meant I could do those huge grids in just a few minutes. I usually use the Depth extension to give me depth info for each of my keyframes, but this time I used Marigold. Its depth maps are far crisper and ControlNet can use them more accurately (if you are pasting in a premade depth grid, make sure you turn off the preprocessor).
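
If you want to make the Marigold depth grid outside Automatic1111, the Hugging Face diffusers library ships a Marigold depth pipeline. This is a sketch only; the model ID and API here are assumptions based on the diffusers docs, so check the current documentation before relying on it.

import torch
from diffusers import MarigoldDepthPipeline
from PIL import Image

# Load the LCM variant of Marigold (model ID assumed; see the diffusers docs).
pipe = MarigoldDepthPipeline.from_pretrained(
    'prs-eth/marigold-depth-lcm-v1-0', torch_dtype=torch.float16
).to('cuda')

grid = Image.open('keyframe_grid.png')
depth = pipe(grid)  # depth prediction for the whole grid in one pass
vis = pipe.image_processor.visualize_depth(depth.prediction)[0]
vis.save('depth_grid.png')  # paste into ControlNet with the preprocessor off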

Finally I used After Effects to composite it all back together, using the masks that Auto1111 gave me to make pieces that I then put back together. You could use Blender for compositing too, but I am no good at that part so I stuck to what I know.
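
The compositing step is simple in principle: each stylised piece gets pasted back over the original footage through its mask. Here is a minimal per-frame sketch in Python; the filenames are placeholders, and it stands in for the After Effects work rather than replicating it:

from PIL import Image

base = Image.open('original_frame.png').convert('RGB')
head = Image.open('ai_head_frame.png').convert('RGB')
mask = Image.open('head_mask.png').convert('L')  # white = keep the AI piece
base.paste(head, (0, 0), mask)  # repeat with the hands and shirt layers
base.save('composited_frame.png')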

If you want to try using digital head props without having to use Blender and all the technical stuff, then I made some apps a few years ago that are free with no ads: HORROR heads here and CHRISTMAS heads here. You need an iPhone though.

Here is the code I am using. It transfers the CSV mocap data onto a model that contains the ARKit blendshapes.

import bpy
import csv

# Path to the CSV file
csv_file_path = 'path_to_your_csv_file.csv'

# Prefix to remove from the shape key names
prefix_to_remove = 'Genesis8Male__'

# Shape keys to be doubled
shape_keys_to_double = ['headroll', 'headyaw', 'headpitch']  # Lowercase for comparison

# Ensure the selected object is the one with shape keys
obj = bpy.context.object
if not obj or not obj.data.shape_keys:
    raise Exception("Please select an object with shape keys.")

# Switch to Object Mode
bpy.ops.object.mode_set(mode='OBJECT')

print(f"Found {len(obj.data.shape_keys.key_blocks)} shape keys in the object.")

# Read the CSV file
with open(csv_file_path, newline='') as csvfile:
    reader = csv.DictReader(csvfile)

    # Convert CSV column headers to lower case for comparison
    csv_columns = {col.lower(): col for col in reader.fieldnames}

    # Iterate over each row in the CSV file
    for i, row in enumerate(reader):
        # Set the current frame (starting from 1)
        bpy.context.scene.frame_set(i + 1)

        # Iterate over each blend shape key in Blender
        for key in obj.data.shape_keys.key_blocks:
            # Remove the prefix and convert to lower case for comparison
            key_name_modified = key.name.lower()
            if key_name_modified.startswith(prefix_to_remove.lower()):
                key_name_modified = key_name_modified[len(prefix_to_remove):]

            if key_name_modified in csv_columns:
                # Get the original column name with the correct case
                original_col_name = csv_columns[key_name_modified]

                # Get the value from the CSV
                value = float(row[original_col_name])

                # Double the value for specific shape keys
                if key_name_modified in shape_keys_to_double:
                    value *= 2

                # Set the value of the shape key
                key.value = value

                # Insert keyframe
                key.keyframe_insert(data_path="value", frame=i + 1)
                print(f"Keyframe set for '{key.name}' at frame {i + 1} with value {key.value}")
            else:
                print(f"No matching column found in CSV for shape key '{key.name}'")

print("CSV data import and keyframe creation complete.")