{"id":81,"date":"2021-12-05T18:20:03","date_gmt":"2021-12-05T18:20:03","guid":{"rendered":"https:\/\/wp.coventry.domains\/e2create\/?page_id=81"},"modified":"2022-07-03T18:36:11","modified_gmt":"2022-07-03T17:36:11","slug":"granular-dance","status":"publish","type":"page","link":"https:\/\/wp.coventry.domains\/e2create\/granular-dance\/","title":{"rendered":"Granular Dance"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\">Summary<\/h2>\n\n\n\n<p>Granular Dance is a tool that can be trained with motion capture data and then used to generate new dance movement sequences.  Granular Dance  combines two different components: a deep learning model based on a recurrent adversarial autoencoder architecture, and a sequence blending mechanism that is inspired by granular and concatenative sound synthesis techniques. <\/p>\n\n\n\n<p>A detailed description of the project has been <a href=\"https:\/\/wp.coventry.domains\/e2create\/publications\/\" data-type=\"page\" data-id=\"22\">published<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Machine Learning Model<\/h2>\n\n\n\n<p>The model consists of an encoder, decoder, and discriminator. The autoencoder part operates on sequence of poses in which each pose is represented by joint orientations in the form of unit quaternions. The discriminator takes as input a latent encoding of a pose sequence and generates as output an estimate whether the encoding follows a Gaussian prior distribution. <\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-637x1024.png\" alt=\"\" class=\"wp-image-126\" width=\"243\" height=\"390\" srcset=\"https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-637x1024.png 637w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-187x300.png 187w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-768x1235.png 768w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-955x1536.png 955w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-1273x2048.png 1273w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1-1568x2522.png 1568w, https:\/\/wp.coventry.domains\/e2create\/wp-content\/uploads\/sites\/1833\/2021\/12\/aee_lstm_v1.png 1696w\" sizes=\"auto, (max-width: 243px) 100vw, 243px\" \/><figcaption>Adversarial Autoencoder Architecture<\/figcaption><\/figure><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Sequence Blending<\/h2>\n\n\n\n<p>The sequence blending mechanism is inspired by two methods from computer music that combine short sound fragments to generate longer sounds: Granular Synthesis and Concatenative Synthesis. For this project, the sequence blending mechanism is used to combine short pose sequences generated by the decoder into longer pose sequences. 
<h2 class="wp-block-heading">Sequence Blending</h2>

<p>The sequence blending mechanism is inspired by two methods from computer music that combine short sound fragments into longer sounds: granular synthesis and concatenative synthesis. For this project, the mechanism combines short pose sequences generated by the decoder into longer pose sequences. As in granular synthesis, a window function is superimposed on each pose sequence; here, the window controls the blending of joint orientations in overlapping pose sequences by spherical linear interpolation.</p>

<figure class="wp-block-image"><img src="https://wp.coventry.domains/e2create/wp-content/uploads/sites/1833/2021/12/Motion_Window_1.png" alt="Diagram of windowed pose sequence blending" /><figcaption>Pose Sequence Blending that Interpolates between Windowed Pose Sequences and a Base Pose</figcaption></figure>
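<p>The following is a sketch of that blending step under stated assumptions: poses are arrays of unit quaternions with shape (frames, joints, 4), the window is a Hann window, and, following the figure caption, each frame of a grain is interpolated between a base pose and the grain's pose with the window value as the slerp weight.</p>

<pre class="wp-block-code"><code># Hypothetical blending sketch; array shapes and the Hann window are assumptions.
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (shape (..., 4))."""
    dot = np.sum(q0 * q1, axis=-1, keepdims=True)
    q1 = np.where(dot >= 0.0, q1, -q1)          # take the shorter arc
    theta = np.arccos(np.clip(np.abs(dot), 0.0, 1.0))
    sin_theta = np.maximum(np.sin(theta), 1e-6) # guard against division by zero
    # fall back to linear weights when the quaternions are nearly parallel
    w0 = np.where(sin_theta > 1e-6, np.sin((1.0 - t) * theta) / sin_theta, 1.0 - t)
    w1 = np.where(sin_theta > 1e-6, np.sin(t * theta) / sin_theta, t)
    q = w0 * q0 + w1 * q1
    return q / np.linalg.norm(q, axis=-1, keepdims=True)

def window_grain(grain, base_pose):
    """Fade a grain in and out by slerping each frame between a base pose and the grain."""
    frames = grain.shape[0]
    t = np.hanning(frames)[:, None, None]       # 0.0 at the grain edges, 1.0 in the middle
    base = np.broadcast_to(base_pose, grain.shape)
    return slerp(base, grain, t)
</code></pre>

<p>Overlapping windowed grains can then be joined in the same way, slerping between the outgoing and the incoming grain across the overlap region.</p>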
<h2 class="wp-block-heading">Dataset</h2>

<p>Training data for machine learning was acquired using a marker-less motion capture system. The recording was conducted at MotionBank, Mainz University of Applied Sciences. The recorded subjects were professional dancers specializing in contemporary dance. The recording used for training was taken from a single male dancer who was freely improvising to excerpts of music including experimental electronic music, free jazz, and contemporary classical music.</p>

<h2 class="wp-block-heading">Latent Space Navigation</h2>

<figure class="wp-block-image"><img src="https://wp.coventry.domains/e2create/wp-content/uploads/sites/1833/2022/07/latent_space_plot_seq128_dim64.png" alt="Scatter plot of latent encodings" /><figcaption>Latent space representation of pose sequences</figcaption></figure>

<p>A popular approach to using autoencoders for movement generation is to navigate through latent space and collect latent vectors along the way, which are then decoded and concatenated into a sequence. Several latent space navigation experiments have been conducted: random walk, trajectory offset following, trajectory interpolation, and trajectory extrapolation. For these experiments, two machine learning models were employed: model128, which works with sequences of 128 poses and an encoding dimension of 64, and model8, which works with sequences of 8 poses and an encoding dimension of 16.</p>
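<p>The videos below document these experiments. As a hypothetical sketch of two of the navigation strategies, reusing the Decoder from the earlier sketch (step sizes and counts are arbitrary):</p>

<pre class="wp-block-code"><code># Hypothetical latent-space navigation sketch; builds on the Decoder defined above.
import torch

@torch.no_grad()
def random_walk(decoder, z_start, steps=32, step_size=0.1):
    """Take small Gaussian steps in latent space and decode a grain at each stop."""
    grains, z = [], z_start.clone()              # z_start: (1, LATENT_DIM)
    for _ in range(steps):
        z = z + step_size * torch.randn_like(z)
        grains.append(decoder(z))                # each grain: (1, SEQ_LEN, JOINTS, 4)
    return grains

@torch.no_grad()
def trajectory_interpolation(decoder, z_a, z_b, steps=16):
    """Decode grains along the straight line between two latent encodings."""
    return [decoder((1.0 - t) * z_a + t * z_b) for t in torch.linspace(0.0, 1.0, steps)]
</code></pre>

<p>The decoded grains would then be joined into a continuous movement sequence with the blending mechanism described above.</p>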
<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model128_seq14000_random_walk" src="https://player.vimeo.com/video/508401476?h=6580f7f9a2" width="750" height="375" allowfullscreen></iframe>
</div><figcaption>Random Walk (Model128)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model8_seq14000_random_walk" src="https://player.vimeo.com/video/508445339?h=239b850b54" width="750" height="375" allowfullscreen></iframe>
</div><figcaption>Random Walk (Model8)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model128_seq14000_offset_sequence_following" src="https://player.vimeo.com/video/508402996?h=0d34509f99" width="750" height="375" allowfullscreen></iframe>
</div><figcaption>Trajectory Offset Following (Model128)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model8_seq14000_offset_sequence_following" src="https://player.vimeo.com/video/508446279?h=11dfed4311" width="750" height="375" allowfullscreen></iframe>
</div><figcaption>Trajectory Offset Following (Model8)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model128_seq14000_seq4000_sequence_interpolation" src="https://player.vimeo.com/video/508403507?h=ab8fcf5e1f" width="750" height="250" allowfullscreen></iframe>
</div><figcaption>Trajectory Interpolation (Model128)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model8_seq14000_seq4000_sequence_interpolation" src="https://player.vimeo.com/video/508449003?h=e176d93cb4" width="750" height="250" allowfullscreen></iframe>
</div><figcaption>Trajectory Interpolation (Model8)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model128_seq14000_seq4000_sequence_extrapolation" src="https://player.vimeo.com/video/508404186?h=9cdf6e2236" width="750" height="250" allowfullscreen></iframe>
</div><figcaption>Trajectory Extrapolation (Model128)</figcaption></figure>

<figure class="wp-block-embed is-provider-vimeo wp-block-embed-vimeo"><div class="wp-block-embed__wrapper">
<iframe title="model8_seq14000_seq4000_sequence_extrapolation" src="https://player.vimeo.com/video/508450467?h=a6caa788dc" width="750" height="250" allowfullscreen></iframe>
</div><figcaption>Trajectory Extrapolation (Model8)</figcaption></figure>