Alessio Grancini


Originally from Italy and based in LA, I am a Prototype Engineer at Magic Leap. 

This website is a selection of media and code samples from my personal creative development journey. 

For private lessons, mentorship, and XR consultation, email alessio.grancini@gmail.com




Los Angeles, CA

 



︎

CV

︎

︎

︎ 

︎ Work

︎ Folios

︎ Press 

︎ Book





From Augmented Reality to Virtual Production



Tags: R&D, Investigation

Role: Developer


︎ 2020 12
 
︎ Project Breakdown  

“From Augmented Reality to Virtual Production” is a personal investigation into social media body filters and their potential as a real-time post-production tool.
The investigation features three different filters:

1. focusing on body keypoints
2. focusing on algorithmic beat mapping
3. focusing on virtual clothing

Establishing an AR environment

. pose detection: squared-distance detection between keypoints
. body tracking and face tracking
. occlusion maps
. speaker detection
. audio detection: algorithmic beat mapping
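The squared-distance idea can be sketched as a small Unity component; the joint names, threshold value, and particle effect here are illustrative placeholders, not the exact setup used in the filters.

```csharp
using UnityEngine;

// Sketch: trigger an effect when two tracked joints come close together.
// Comparing squared magnitudes avoids computing a square root every frame.
public class KeypointDistanceTrigger : MonoBehaviour
{
    public Transform rightShoulder;   // assigned from the body-tracking rig
    public Transform leftHand;
    public float threshold = 0.15f;   // squared distance in world units (illustrative)
    public ParticleSystem effect;

    void Update()
    {
        Vector3 delta = rightShoulder.position - leftHand.position;
        if (delta.sqrMagnitude < threshold && !effect.isPlaying)
        {
            effect.Play();
        }
    }
}
```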





filter 1 [focusing on body keypoints]

The first filter includes the following features:

. identification of keypoint distances to trigger effects
. identification of the “drop” of the song to trigger the occlusion effect

*I can’t really dance

Code Highlights:

1. modifying the native Unity occlusion shader
[at the time I started the project, the body occlusion was offset from the body]

2. detecting the audio from the device microphone

3. switching scenes on audio level peaks
[at the time I started the project, 3D body tracking and body occlusion could not run simultaneously; source: Apple developer forums]

4-5-6. particle integration
v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = v.uv;
    if (_ONWIDE == 1)
    {
        o.uv1 = float2(v.uv.x, (1.0 - _UVMultiplierLandScape) + (v.uv.y / _UVMultiplierLandScape));
        o.uv2 = float2(lerp(1.0 - o.uv1.x, o.uv1.x, _UVFlip), lerp(o.uv1.y, 1.0 - o.uv1.y, _UVFlip));
    }
    else
    {
        o.uv1 = float2((1.0 - _UVMultiplierPortrait) + (v.uv.x / _UVMultiplierPortrait), v.uv.y);
        o.uv2 = float2(lerp(1.0 - o.uv1.y, o.uv1.y, 0), lerp(o.uv1.x, 1.0 - o.uv1.x, 1));
    }
    return o;
}
// find the peak of the sampled waveform window
float levelMax = 0;
for (int i = 0; i < dec; i++)
{
    float wavePeak = waveData[i] * waveData[i];
    if (levelMax < wavePeak)
    {
        levelMax = wavePeak;
    }
}
// the double square root compresses the dynamic range of the level
level = Mathf.Sqrt(Mathf.Sqrt(levelMax));
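The peak detection above reads from a `waveData` buffer; a minimal sketch of filling that buffer from the device microphone in Unity looks like the following (the buffer size and clip length are illustrative choices).

```csharp
using UnityEngine;

// Sketch: capture the device microphone into a looping AudioClip and
// sample the most recent window of audio each frame.
public class MicrophoneLevel : MonoBehaviour
{
    const int SampleWindow = 128;     // illustrative window size
    AudioClip micClip;
    public float level;

    void Start()
    {
        // null device name selects the default microphone
        micClip = Microphone.Start(null, true, 10, 44100);
    }

    void Update()
    {
        float[] waveData = new float[SampleWindow];
        int micPosition = Microphone.GetPosition(null) - SampleWindow + 1;
        if (micPosition < 0) return;
        micClip.GetData(waveData, micPosition);

        // peak of the current window, compressed as in the snippet above
        float levelMax = 0;
        for (int i = 0; i < SampleWindow; i++)
        {
            float wavePeak = waveData[i] * waveData[i];
            if (levelMax < wavePeak) levelMax = wavePeak;
        }
        level = Mathf.Sqrt(Mathf.Sqrt(levelMax));
    }
}
```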
void FixedUpdate()
{
    currentScene = SceneManager.GetActiveScene();
    if (sd.level > 0.6 && SceneManager.GetSceneByName("2_occlusion").isLoaded == false)
    {
        SceneManager.LoadScene(2);
    }
    else if (sd.level <= 0.6 && SceneManager.GetSceneByName("1_body").isLoaded == false)
    {
        timer += Time.deltaTime;
        if (timer > waitingTime)
        {
            timer = 0f;
            SceneManager.LoadScene(1);
        }
    }
    DontDestroyOnLoad(this.gameObject);
}
void Start()
{
    // CREATE PARTICLE SYSTEM
    // create the game object that carries the trail
    trail = new GameObject();
    // width curve: taper from 0 to full width over the trail's life
    AnimationCurve curve = new AnimationCurve();
    curve.AddKey(0.0f, 0.0f);
    curve.AddKey(1.0f, 1.0f);
    // add the trail renderer and set its parameters
    tr = trail.AddComponent<TrailRenderer>();
    tr.time = life;
    tr.material = new Material(Shader.Find("Sprites/Default"));
    tr.widthCurve = curve;
    tr.widthMultiplier = widthMult;
    tr.numCapVertices = 1;
    tr.numCornerVertices = 1;
    // magenta-to-cyan gradient at constant alpha
    float alpha = 0.5f;
    Gradient gradient = new Gradient();
    gradient.SetKeys(
        new GradientColorKey[] { new GradientColorKey(Color.magenta, 0.0f), new GradientColorKey(Color.cyan, 1.0f) },
        new GradientAlphaKey[] { new GradientAlphaKey(alpha, 0.0f), new GradientAlphaKey(alpha, 1.0f) }
    );
    tr.colorGradient = gradient;
}
void Update() {
    Vector3 checkDistance03 = rightShoulder.position - leftHand.position;
    float checkDistance03sqr = checkDistance03.sqrMagnitude;
    if (!isCoroutineStarted && !activateFlatParticles && checkDistance03sqr < threshold)
    {
        StartCoroutine(ActivateParticles(flatParticles));
    }
}
IEnumerator PathTrail(Transform[] PositionList, Transform movingObject)
{
    isCoroutineStarted = true;
    movingAcross = true;
    // step the object along the path, one waypoint every 0.1 s
    for (int i = 0; i < PositionList.Length; i++)
    {
        movingObject.position = PositionList[i].position;
        yield return new WaitForSeconds(0.1f);
    }
    isCoroutineStarted = false;
    movingAcross = false;
    yield return null;
}
filter 1 - variant - TF Lite

Testing TF Lite: PoseNet. I approached the problem slightly differently, and more natively.

I used two pre-trained neural networks, PoseNet and DeepLab. PoseNet handles pose tracking: it takes the camera stream as input and generates a mapping of all major body junctions onto the camera frame itself. This gave me an array of junction connections that I could then associate with a Unity TrailRenderer at its 3D world positions.

While the first solution was mostly built on the elements available in Unity, in this second approach I created the occlusion filter from the segmentation produced by the pre-trained network (DeepLab).
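A minimal sketch of how a segmentation mask can drive occlusion, assuming the DeepLab output has already been copied into a single-channel texture each frame; the property name `_SegmentationMask` and the material setup are illustrative, not the exact implementation.

```csharp
using UnityEngine;

// Sketch: feed a person-segmentation mask into an occlusion material.
// "_SegmentationMask" is an illustrative shader property name; the real
// DeepLab output would be written into maskTexture every frame.
public class SegmentationOcclusion : MonoBehaviour
{
    public Material occlusionMaterial;
    public Texture2D maskTexture;     // 1 = person, 0 = background

    void Update()
    {
        // the shader samples the mask and discards fragments covered by
        // the body, so the camera image shows through the virtual content
        occlusionMaterial.SetTexture("_SegmentationMask", maskTexture);
    }
}
```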









AR Foundation vs TF Lite
Code Highlights:

The neural network returns 16 connections (32 junction endpoints). I associated a renderer's position with the position of a junction on screen. This required modifying the renderer's transform position in real time so that, whenever the network had enough confidence in one of the three selected junction connections, a TrailRenderer would appear behind the user.

Finally, similarly to the AR Foundation implementation, I wanted the trails to activate in sync with the music, in the background. I realized I could set up an audio bitmap: an array of 16 boolean values that turn on or off when specific, predefined audio thresholds are hit.

public static readonly Part[,] Connections = new Part[,]
{
    // HEAD
    { Part.LEFT_EYE, Part.RIGHT_EYE },
    { Part.LEFT_EYE, Part.NOSE },
    { Part.NOSE, Part.RIGHT_EYE },
    { Part.RIGHT_EYE, Part.RIGHT_EAR },
    // BODY
    { Part.LEFT_HIP, Part.LEFT_SHOULDER },
    { Part.LEFT_ELBOW, Part.LEFT_SHOULDER },
    { Part.LEFT_ELBOW, Part.LEFT_WRIST },
    { Part.LEFT_HIP, Part.LEFT_KNEE },
    { Part.LEFT_KNEE, Part.LEFT_ANKLE },
    { Part.RIGHT_HIP, Part.RIGHT_SHOULDER },
    { Part.RIGHT_ELBOW, Part.RIGHT_SHOULDER },
    { Part.RIGHT_ELBOW, Part.RIGHT_WRIST },
    { Part.RIGHT_HIP, Part.RIGHT_KNEE },
    { Part.RIGHT_KNEE, Part.RIGHT_ANKLE },
    { Part.LEFT_SHOULDER, Part.RIGHT_SHOULDER },
    { Part.LEFT_HIP, Part.RIGHT_HIP }
};
void DrawTrails() {
    // get the camera view corners in world space
    var rect = cameraView.GetComponent<RectTransform>();
    rect.GetWorldCorners(corners);
    Vector3 min = corners[0];
    Vector3 max = corners[2];
    // get pose connections (pairs of junctions)
    var connections = PoseNet.Connections;
    int len = connections.GetLength(0);
    // sample the audio once per frame
    var level = GetAudio();
    var audioBitMap = GetAudioBitmap(level);
    // iterate over connections
    for (int i = 0; i < len; i++) {
        var a = results[(int)connections[i, 0]];
        var b = results[(int)connections[i, 1]];
        // if confident enough and the audio bitmap enables this connection, draw
        if (a.confidence >= threshold && b.confidence >= threshold && audioBitMap[i]) {
            Vector3 pos = MathTF.Lerp(min, max, new Vector3(a.x, 1f - a.y, -824));
            tr1.transform.position = pos;
        }
    }
}
bool[] GetAudioBitmap(float level) {
    bool[] arr = new bool[16];
    // low level  -> body  -> connection 15
    // mid level  -> eyes  -> connections 1 and 2
    // high level -> hands -> connections 6 and 11
    if (level < 0.2f && level > 0.1f) {
        arr[15] = true;
    }
    if (level < 0.3f && level > 0.2f) {
        arr[1] = true;
        arr[2] = true;
    }
    if (level > 0.3f) {
        arr[6] = true;
        arr[11] = true;
    }
    return arr;
}
filter 2 - focusing on algorithmic beat mapping

. This filter space visualizes some of the output values of an FFT (Fast Fourier Transform), which converts an audio signal from its original time domain to a representation in the frequency domain.

see this article

. The selected peaks are averaged over segments of time throughout the song, yielding a high-level value that determines when a beat happens; this applies across different frequencies.
In other words, this is a way to be more accurate in “hitting” beats during the song and to capture different ranges within the data we retrieve.

. My diagrammatic effort to visualize what happens in the intervals dictated by the boolean condition shows how each bool flip disables the currently active effect and randomly enables a new one.

. If we also take into consideration the time between one bool flip and the next and warp the timing and velocity of the looping stencil animation, we can add more nuance and juxtaposition between visual effects and audio.
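The peak-averaging idea above can be sketched as follows, with Unity's `GetSpectrumData` supplying the FFT; the band bounds, history length, and the 1.5x multiplier are all illustrative assumptions rather than the values used in the filter.

```csharp
using UnityEngine;

// Sketch: compare the current energy of a frequency band against its recent
// average; a beat is flagged when energy spikes above that average.
public class SimpleBeatDetector : MonoBehaviour
{
    public AudioSource source;
    public bool peakSwitch;           // flips when a beat is detected

    float[] spectrum = new float[512];
    float[] history = new float[43];  // roughly one second of analysis frames
    int historyIndex;

    void Update()
    {
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        // energy of a low-frequency band (illustrative range for kick drums)
        float energy = 0f;
        for (int i = 0; i < 32; i++) energy += spectrum[i];

        // average energy over the stored history
        float avg = 0f;
        foreach (float e in history) avg += e;
        avg /= history.Length;

        // a spike above the running average is treated as a beat
        if (energy > avg * 1.5f) peakSwitch = !peakSwitch;

        history[historyIndex] = energy;
        historyIndex = (historyIndex + 1) % history.Length;
    }
}
```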






filter 2 - focusing on algorithmic beat mapping

. Same application, but randomly changing background and foreground






filter 2 - focusing on algorithmic beat mapping

The right video is slowed down.
Code Highlights:

Analyzing the graph and delaying the trigger

public class ValueAndSwitch : MonoBehaviour
{
    public bool prevBool = false;
    public bool testCheckValue = false;
    public List<GameObject> childrens = new List<GameObject>();
    private float timer;
    private float waitfortimer;
    void Start()
    {
        timer = 0f;
        waitfortimer = 0.8f;
        foreach (Transform child in transform)
        {
            childrens.Add(child.gameObject);
        }
    }
    void Update()
    {
        testCheckValue = GameObject.FindGameObjectWithTag("PlotController").GetComponent<PlotController>().peakSwitch;

        if (testCheckValue != prevBool)
        {
            timer += Time.deltaTime;
            if (timer > waitfortimer)
            {
                timer = 0;
                DisableEffect();
                EnableOneEffect();
                prevBool = testCheckValue;
            }
        }
    }

    void DisableEffect()
    {
        for (int i = 0; i < childrens.Count; i++)
        {
                childrens[i].SetActive(false);
        }
    }

    void EnableOneEffect()
    {
        int index = Random.Range(0, childrens.Count);
        childrens[index].SetActive(true);
    }
}
filter 3 - focusing on virtual clothing, customization, and remote communication

In this last approach I tried to combine the previous investigations for a virtual-clothing use case, assuming a fashion director is staging a fashion show in real time while talking to collaborators.

Using Blender, I rigged a selection of garments and used them with AR Foundation body tracking.
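A minimal sketch of attaching a rigged garment to AR Foundation's body tracking (the bone-to-joint mapping of the garment rig is omitted, and `garmentPrefab` is an illustrative reference to a rigged mesh exported from Blender):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

// Sketch: parent a rigged garment to the tracked human body so it
// follows the body pose reported by ARHumanBodyManager.
public class GarmentAttacher : MonoBehaviour
{
    public ARHumanBodyManager bodyManager;
    public GameObject garmentPrefab;   // illustrative rigged garment mesh

    void OnEnable() => bodyManager.humanBodiesChanged += OnBodiesChanged;
    void OnDisable() => bodyManager.humanBodiesChanged -= OnBodiesChanged;

    void OnBodiesChanged(ARHumanBodiesChangedEventArgs args)
    {
        foreach (var body in args.added)
        {
            // instantiate the garment under the tracked body's transform
            Instantiate(garmentPrefab, body.transform);
        }
    }
}
```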





filter 3 - adding physics




filter 3 - AR fashion test