
VFXRIO Live happening online from 21 March

VFXRIO Live is launching its second digital edition on 21 March with a keynote presentation by Pinscreen founder and CEO Hao Li, regarded as the best deepfake artist in the world. His talk will focus on digital humans, artificial intelligence and the future of technology. Hao Li is known for his seminal work in real-time facial performance capture, hair digitization and dynamic full-body capture.

His work in facial animation was the basis of Animoji on Apple’s iPhone X. Hao Li worked at Weta Digital, collaborated on the digital recreation of Paul Walker in the film Fast and Furious 7, and was a research lead at Industrial Light & Magic / Lucasfilm. A frequent presence in the media worldwide, from Davos to Dubai and even Hollywood, Hao Li is an associate professor of computer science at the University of Southern California, as well as director of the Vision and Graphics Lab at the USC Institute for Creative Technologies.

“We are living under lockdown: we can’t have physical conferences or meetings, and flights are cancelled. I believe meetings, conferences and presentations will remain virtual even after the pandemic. Devices such as Microsoft’s HoloLens will change the way we collaborate, interact and communicate. You have this idea of people being able to teleport themselves into a common space. The idea of digital humans goes beyond videogames or VFX: we are looking at a future where virtual humans can become the centre of our everyday lives,” says Li.

VFXRIO director Matteo Moriconi says: “We believe Hao Li’s research is seminal and needs to be communicated. Deepfake technology is here to stay; it will change the way we see the world and push the boundaries of creativity, communication and ethics. We are proud to host his keynote at VFXRIO Live.”

Deepfakes use a form of artificial intelligence called deep learning to fabricate images of events that never happened. The main machine-learning methods used to create them involve training generative neural network architectures, such as autoencoders and generative adversarial networks. Digital humans, created with similar technology, are designed to see and listen to users and understand the meaning behind their words; they can then use their own tone of voice and body language to hold lifelike conversations.
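For readers curious about the mechanics, the sketch below illustrates one representative face-swap setup often described in connection with deepfakes: a single shared encoder trained alongside one decoder per identity, so that swapping decoders at inference time re-renders one person’s expression with another person’s face. This is a minimal, assumed example written in PyTorch; the layer sizes, image resolution and loss are illustrative only and do not describe any specific production tool or Hao Li’s own systems.

```python
# Minimal sketch (illustrative assumption, not a production deepfake pipeline):
# a shared encoder with one decoder per identity, trained by reconstruction.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity
optimizer = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()),
    lr=1e-4,
)
loss_fn = nn.L1Loss()

def train_step(faces_a, faces_b):
    """One reconstruction step on batches of 3x64x64 face crops for identities A and B."""
    optimizer.zero_grad()
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) \
         + loss_fn(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    optimizer.step()
    return loss.item()

# After training, a "swap" is decoder_b(encoder(face_a)):
# face A's expression rendered with identity B.
```

Real pipelines add face detection and alignment, masking, adversarial (GAN-style) losses and careful compositing of the swapped face back into the source footage, which is why convincing results still require significant craft.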

 
