Puppeteering AI

Summary

Puppeteering AI creates an artificial dancer whose movements are generated through a combination of interactive control and machine learning. The project builds on research conducted in the context of an earlier project entitled Granular Dance, and explores interactive applications of an autoencoder trained on motion capture recordings of a human dancer. In Granular Dance, synthetic motions were generated by navigating the autoencoder’s latent space. This paper proposes an alternative approach that controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both are based on the interactive control of a single joint of the artificial dancer while the remaining joints stay under the control of the autoencoder. The first method combines control of a joint’s orientation with iterative autoencoding. The second method combines control of a joint’s target position with forward kinematics and the application of latent difference vectors.

This project has been realised in collaboration with Ephraim Wegner, teacher and researcher at Offenburg University, Germany. A detailed description of the research has been published here and here.

Interaction Principle based on Joint Orientation Control

This method controls the orientation of a selected joint relative to its parent joint. Since poses are represented as joint orientations, the input pose sequence used for autoencoding can be directly overwritten. By autoencoding a modified pose sequence, the interactively controlled joint orientation affects the orientations of the remaining joints. By iterating the autoencoding process several times while keeping the orientation of the interactively controlled joint fixed, the autoencoder converges towards a pose sequence that is representative of both the original mocap material and the interactively controlled joint orientation. The number of autoencoding iterations and the number of iterations during which the interactively controlled joint is kept fixed do not necessarily have to be the same. If they are the same, the joint orientation specified through interaction remains fully visible in the final pose sequence. If they are not, the interactively controlled joint orientation changes as a result of the final autoencoding passes.
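This iterative scheme can be sketched as follows. The sketch assumes poses are stored as per-joint unit quaternions in an array of shape (frames, joints, 4), and that `autoencode` is any function that reconstructs a full pose sequence; the function and parameter names are illustrative, not taken from the project's code:

```python
import numpy as np

def constrained_autoencode(autoencode, pose_seq, joint_idx, target_quat,
                           n_autoencode=5, n_fixed=5):
    """Iteratively autoencode a pose sequence while overwriting the
    orientation of one joint with an interactively controlled quaternion."""
    seq = pose_seq.copy()
    seq[:, joint_idx, :] = target_quat              # inject the controlled orientation
    for i in range(n_autoencode):
        seq = autoencode(seq)                       # reconstruct the full sequence
        # renormalise: reconstructed quaternions are not guaranteed to be unit length
        seq = seq / np.clip(np.linalg.norm(seq, axis=-1, keepdims=True), 1e-8, None)
        if i < n_fixed:
            seq[:, joint_idx, :] = target_quat      # keep the controlled joint fixed
    return seq
```

With `n_fixed` equal to `n_autoencode`, the controlled orientation survives verbatim in the result; with fewer fixed iterations, the last autoencoding passes are free to adapt it.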

Interaction Principle based on Joint Position Control

This method controls the position of a selected joint relative to the position of a root joint (the hip). Since poses are represented by orientations, joint positions need to be obtained through forward kinematics. The method collects the encodings of all pose sequences in which the selected joint is either close to its current position or close to the intended target position. To maintain real-time performance, the encodings of all pose sequences in a given motion capture recording are precomputed and stored alongside the positions of the selected joint. These positions are organized in a spatial partitioning structure (a KD-tree in this case) so that encodings can be quickly retrieved based on the Euclidean distance between joint positions. Following this principle, a fixed number of encodings is obtained for the current position and for the target position of the selected joint. The two sets of encodings are averaged and subtracted from each other to obtain a difference vector, which is added to the encoding of the current pose sequence. The modified encoding is then decoded to obtain a pose sequence in which the selected joint is close to the intended target position.
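The lookup and steering steps can be sketched as below. The sketch assumes two precomputed arrays, `positions` (the selected joint's forward-kinematics positions per pose sequence window) and `encodings` (the corresponding latent codes); a brute-force nearest-neighbour search stands in for the KD-tree, and all names are illustrative:

```python
import numpy as np

class JointPositionController:
    """Latent difference-vector control of a selected joint's position."""

    def __init__(self, positions, encodings, k=20):
        self.positions = np.asarray(positions)   # (n, 3) joint positions from forward kinematics
        self.encodings = np.asarray(encodings)   # (n, d) precomputed pose sequence encodings
        self.k = k                               # number of neighbours to average

    def _mean_encoding(self, query):
        # brute-force k-nearest-neighbour search; the project uses a KD-tree
        # over the same data to keep this lookup real-time capable
        dists = np.linalg.norm(self.positions - query, axis=1)
        nearest = np.argsort(dists)[: self.k]
        return self.encodings[nearest].mean(axis=0)

    def steer(self, current_encoding, current_pos, target_pos, scale=1.0):
        # difference between the averaged encodings around target and current position
        diff = self._mean_encoding(target_pos) - self._mean_encoding(current_pos)
        return current_encoding + scale * diff   # decode this to obtain the steered sequence
```

The `scale` parameter corresponds to the scaling of the latent difference vector varied in the evaluation below.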

Evaluation of Joint Orientation Control

To evaluate the effect of the parameters for joint orientation control, the number of iterations for autoencoding and the number of iterations for overwriting the orientation of the selected joint were each varied within a range from 1 to 10. The right shoulder was chosen as the selected joint, and its relative orientation was set to the quaternion equivalent of the Euler angles 0.0°, 0.0°, -120.0°.
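For reference, the conversion from Euler angles to the quaternion used here can be sketched as follows. The intrinsic x-y-z rotation order is an assumption, since the text does not specify the convention:

```python
import numpy as np

def euler_to_quat(x_deg, y_deg, z_deg):
    """Intrinsic x-y-z Euler angles (degrees) to a unit quaternion (w, x, y, z).
    The rotation order is an assumption, not taken from the project."""
    hx, hy, hz = (np.radians(a) / 2.0 for a in (x_deg, y_deg, z_deg))
    cx, sx = np.cos(hx), np.sin(hx)
    cy, sy = np.cos(hy), np.sin(hy)
    cz, sz = np.cos(hz), np.sin(hz)
    # q = qx * qy * qz (Hamilton product of the three axis rotations)
    w = cx * cy * cz - sx * sy * sz
    x = sx * cy * cz + cx * sy * sz
    y = cx * sy * cz - sx * cy * sz
    z = cx * cy * sz + sx * sy * cz
    return np.array([w, x, y, z])
```

For the angles 0.0°, 0.0°, -120.0° this yields a pure rotation of -120° about the z axis, i.e. (0.5, 0, 0, -√3/2).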

Variation of Joint Orientation Control Parameters. The left and right animations differ with respect to the chosen motion capture excerpt.

To evaluate the effect of the joint orientation, the third Euler angle was varied between 180.0° and -120.0° while the first two Euler angles were kept constant at 0.0°. The control parameters were set to 5 iterations both for autoencoding and for overwriting the orientation of the selected joint. Again, the right shoulder was chosen as the selected joint.

Variation of Joint Orientation. The left and right animations differ with respect to the chosen motion capture excerpt.

Evaluation of Joint Position Control

To evaluate the effect of the parameters for joint position control, two parameters were varied: the number of neighboring joint positions used for calculating the latent difference vector between the corresponding pose sequence encodings (ranging from 2 to 20), and the scaling of the latent difference vector (ranging from -1.0 to +1.0; see figure 2). The right hand was chosen as the selected joint, and its target position relative to the hip joint was set to x -60.0, y 0.0, z 0.0.

Variation of Joint Target Position Control Parameters. The left and right animations differ with respect to the chosen motion capture excerpt.

To evaluate the effect of the relative target joint position, the x and y coordinates were varied between -60.0 and +60.0 and between -120.0 and +120.0, respectively, while the z coordinate was kept constant at 0.0. The control parameters were set to 20 for the number of neighboring joint positions and 1.0 for the scaling of the latent difference vector. Again, the right hand was chosen as the selected joint.

Variation of Joint Target Position. The left and right animations differ with respect to the chosen motion capture excerpt.

Digital Instrument

To experiment with the application of an artificial dancer in a performance situation that combines dance and music, a digital instrument has been developed. The instrument employs physical modeling synthesis, simulating a vibrating surface via a bank of resonating filters following a modal synthesis approach. For Puppeteering AI, the filters are arranged in a cylindrical formation consisting of ten vertically stacked rings with ten filters each. This formation surrounds the artificial dancer at a distance that can be reached by its extremities. The instrument is played by the artificial dancer approaching the filters with some of its joints.
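A resonating filter bank of this kind can be sketched with second-order two-pole resonators acting as modes. This is a minimal, generic modal-synthesis sketch, not the instrument's actual implementation; the frequencies, decay times, and equal mixing are illustrative choices:

```python
import numpy as np

def resonator_bank(excitation, freqs, t60s, sr=44100):
    """Sum a bank of two-pole resonators (modal synthesis sketch).

    Each mode is a damped sinusoid realised by the recurrence
    y[n] = x[n] + 2 r cos(w) y[n-1] - r^2 y[n-2].
    """
    out = np.zeros(len(excitation))
    for f, t60 in zip(freqs, t60s):
        w = 2.0 * np.pi * f / sr
        r = 10.0 ** (-3.0 / (t60 * sr))     # pole radius from the T60 decay time
        a1, a2 = 2.0 * r * np.cos(w), -r * r
        y1 = y2 = 0.0
        for n, x in enumerate(excitation):
            y0 = x + a1 * y1 + a2 * y2
            out[n] += y0 / len(freqs)       # mix the modes equally
            y2, y1 = y1, y0
    return out
```

In the instrument, a mode would be excited when one of the dancer's joints comes close to the corresponding filter's position on the cylindrical formation.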

Artificial Dancer Playing the Digital Instrument
Artificial Dancer Playing the Digital Instrument V2
