GMU:Procedural Cut/Keno

== News Anchors of the Future ==
 
{{#ev:vimeo|404982095}}
 
=== Idea ===
In the 1980s, the British broadcaster [https://www.channel4.com Channel 4] tried to imagine the TV of the future. They created Max Headroom as the host of a music show. He was presented as an AI character, although he was played by an actual human actor, [[wikipedia:Matt Frewer|Matt Frewer]]. After shooting, the footage was cut in a way that would be called “[https://www.pbs.org/video/off-book-art-glitch glitchy]” nowadays. This imperfection is an essential part of the character, serving as a signifier of an artificial humanoid.
 
{{#ev:youtube|vS17G1MXzLk}}
 
When so-called neural networks came up, a discussion started about which kinds of labour can be done by so-called artificial intelligence. Some people are convinced that even artists and musicians will be replaced by computer systems.
[[wikipedia:Hatsune Miku|Hatsune Miku]] was originally the name of a voicebank that can be used with Yamaha’s Vocaloid software. From 2010 on, albums for Hatsune Miku have been produced. In 2012 “she” gave her first concert as a hologram, gaining huge popularity.
 
{{#ev:youtube|YSyWtESoeOc}}
 
Max Headroom and Hatsune Miku are popular examples of avatars: fictional characters with a virtual representation created through CGI and video editing techniques.
With this history in mind, what would avatars for TV news look like? After training a machine learning model on images of news hosts, the software can generate new ones. Most approaches aim at generating a photorealistic representation of a human. In the spirit of Max Headroom, it is more interesting to create avatars that look human-like but have some flaws, and thus to include technical characteristics in the design process.
 
=== Production ===
 
First tests were carried out with the [https://github.com/NVlabs/few-shot-vid2vid few-shot-vid2vid] framework by NVIDIA. At that time, the idea was different: the news hosts were supposed to have dog faces. Due to technical problems, this approach was abandoned for now.
 
[[File:dog_01.gif]]
 
[https://runwayml.com/ RunwayML] is a SaaS company providing shared machine learning models. For this experiment, NVIDIA’s StyleGAN framework (more precisely, [https://github.com/NVlabs/stylegan2 StyleGAN2]) was used, which gained popularity for its ability to generate almost photorealistic faces. The model was trained on the [http://niessnerlab.org/projects/roessler2018faceforensics.html FaceForensics dataset] by Technische Universität München, which consists of videos of news hosts. Using [http://ffmpeg.org/ ffmpeg], a part of these video files was converted to image sequences and fed into RunwayML’s training system. After the training was completed, a video was generated by walking through different parameters of the image generation, creating a fluid transition from one host to another.
The training was stopped after 2000 steps; the model can be expanded and the quality improved by continuing the training.
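
The frame extraction is essentially one ffmpeg call per video. The following is a minimal sketch of how this step could be scripted in Python; the folder names, frame rate, and output resolution are assumptions chosen for illustration, not the exact values used for this training.

<pre>
# Minimal sketch (Python): convert news videos into square image sequences
# suitable for StyleGAN2 training. Folder names, frame rate and resolution
# are assumptions for illustration.
import subprocess
from pathlib import Path

VIDEO_DIR = Path("videos")   # hypothetical folder containing the .mp4 source clips
FRAME_DIR = Path("frames")   # hypothetical output folder for the training images
FPS = 2                      # sample two frames per second of video
SIZE = 512                   # StyleGAN2 is usually trained on square images

FRAME_DIR.mkdir(exist_ok=True)

for video in sorted(VIDEO_DIR.glob("*.mp4")):
    out_pattern = FRAME_DIR / (video.stem + "_%05d.png")
    # fps: subsample frames; crop=ih:ih: centered square crop (assumes
    # landscape footage); scale: resize to the training resolution
    subprocess.run([
        "ffmpeg", "-i", str(video),
        "-vf", "fps={0},crop=ih:ih,scale={1}:{1}".format(FPS, SIZE),
        str(out_pattern),
    ], check=True)
</pre>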
 
[[File:runway.png]]
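
The “walk” through the generation parameters corresponds to interpolating between points in the latent space of the trained StyleGAN2 model and rendering one frame per step. The sketch below illustrates the principle with NumPy; the <code>generate()</code> function is a hypothetical stand-in for the trained generator (RunwayML’s actual interface is not reproduced here), and the latent size, number of keyframes and step count are assumptions.

<pre>
# Sketch of a latent-space walk: interpolate between random latent vectors
# and render one frame per step, producing the fluid transition from one
# generated news host to the next. generate() is a placeholder for the
# trained StyleGAN2 generator; all sizes below are assumptions.
import numpy as np

LATENT_DIM = 512   # StyleGAN2 commonly uses 512-dimensional latent vectors
KEYFRAMES = 5      # number of "hosts" the walk passes through
STEPS = 30         # interpolated frames between two keyframes

def generate(z):
    """Placeholder: map a latent vector to an image (model-specific call)."""
    return np.zeros((512, 512, 3), dtype=np.uint8)  # dummy frame

rng = np.random.default_rng(seed=0)
keys = rng.standard_normal((KEYFRAMES, LATENT_DIM))

frames = []
for a, b in zip(keys[:-1], keys[1:]):
    for t in np.linspace(0.0, 1.0, STEPS, endpoint=False):
        z = (1.0 - t) * a + t * b   # linear interpolation between latents
        frames.append(generate(z))

# 'frames' can then be written out as an image sequence and encoded into
# the final video, e.g. again with ffmpeg.
</pre>

For Gaussian latent vectors, spherical interpolation (slerp) is often preferred over the linear blend shown here, but the principle is the same.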
