Most movies have used computer-generated imagery (CGI) software to create effects that were unachievable with puppetry, models, makeup, prosthetics or miniatures. Other times it was simply faster to render these effects than to build practical props.
There have been many advancements in computer graphics special effects. That is thanks to software becoming more refined, hardware gaining the capacity to hold more memory, and people specializing in computer animation for film, video games and other developing technologies.
The modern classic science fiction film Back to the Future Part II had a scene that gave a fictional glimpse of what the future would look like in 2015. When Marty McFly was walking down Main Street in Hill Valley 2015, he saw a 20-foot shark lunge toward him and thought it was going to eat him. Well, it didn't; it was fake. The shark was a holographic advertisement for Jaws 19 in 3D. The whole movie would be 3D, and the characters and settings of that non-existent film would also have to be rendered in graphics software.
There was a movie released in 2017 called Valerian and the City of a Thousand Planets. This fantasy action movie is based on a French comic book series of the same name. It uses computer graphics so heavily, for nearly everything on screen, that it became the most expensive European and independent film ever made. The majority of the characters were digitally produced on a blue screen stage. The movie really does create its own reality with its technology, aliens and alternate dimensions. For instance, in the marketplace scene, Valerian is running away from some angry merchants through a multidimensional marketplace where people go shopping, run through walls and interact with objects and beings that don't physically exist. You can see the extent of the visual effects throughout.
What’s the difference between CGI and VFX?
Visual Effects (VFX): These are effects achieved during the editing and post-production stages of filmmaking. They are the manipulation and/or creation of effects on top of live-action footage that was filmed earlier. They are made to replace practical effects that would have been costly, dangerous, physically impossible and/or tiresome. Visual effects can be created with computer-generated imagery using readily available software like Adobe After Effects or Cinema 4D. This field of work requires an understanding of animation, video production and a high level of computer literacy. There are also types of visual effects that don't require a computer, like rotoscoping, which is tracing over film frames to make footage look animated.
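Blue and green screen compositing, one of the most common visual effects, comes down to replacing pixels of a known key colour with a rendered background. Here is a minimal sketch in NumPy; the tiny 2x2 "frame" and the hard colour threshold are made up for illustration, where real compositing software does much softer matting:

```python
import numpy as np

# Frame: a 2x2 RGB image; two pixels are pure blue "screen", two are the subject
frame = np.array([[[0, 0, 255], [200, 150, 100]],
                  [[0, 0, 255], [180, 120,  90]]], dtype=np.uint8)

# Replacement background (solid grey here, but could be any rendered scene)
background = np.full_like(frame, 128)

# Key: treat a pixel as "screen" when blue dominates red and green
r = frame[..., 0].astype(int)
g = frame[..., 1].astype(int)
b = frame[..., 2].astype(int)
is_screen = (b > 200) & (r < 100) & (g < 100)

# Composite: keep the subject, swap screen pixels for the background
composite = np.where(is_screen[..., None], background, frame)
```

The same idea scales to full-resolution footage: the key just produces a per-pixel matte that decides, for every frame, which pixels come from the actor and which from the digital set.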
Computer Generated Imagery (CGI): Computer software is used to create imagery for media. The medium can be print, film, gaming, mobile, advertisements or animation; most commonly, the term refers to 3D graphics for film, gaming and animation. If you are in the field of VFX today, you will most likely touch CGI somewhere in your day. Modern computer-generated imagery has taken over the work of stop-motion models, real matte-painted backgrounds and 2D frame-by-frame animation. There is a large focus on the creation of realistic 3D-generated graphics. Common programs used to create these effects include Blender, GIMP and Autodesk Maya.
Getting a digital face lift
In the Movies
Some movies with large budgets have scenes in which an actor's face is de-aged or, in some cases, made significantly older.
Most modern movies that have older actors in need of a younger face can alter the facial geometry to look younger. Many of the Marvel movies (from Fox and Marvel Studios) have scenes that utilize the digital facelift. Movies like X-Men: The Last Stand, Ant-Man, Guardians of the Galaxy Vol. 2, Iron Man 3, Pirates of the Caribbean 5, the Fast and the Furious series and the recently released Gemini Man use this type of technology to rejuvenate their actors' faces.
They create this effect by gathering multiple images of the main actor at the age they want to de-age to. The animators and coders need to see the face at various angles and in various expressions to build the digital face. They then find an actor who looks similar to the main actor so his face can be swapped out with the digital one. They draw mapping dots on the younger actor's face: around the mouth, the apples of the cheeks, the tip of the nose, the jawline, the eyebrow area and the forehead. These dots anchor where to place the digital mask on the actor's face, or serve as tracking markers used later to smooth, blur, stretch and remodel the face within the frames to make it look younger. Large teams and many hours are devoted to rendering a single minute of footage to the standard required for the scene.
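The anchoring step above can be sketched numerically. Given a few tracked dot positions on the filmed actor's face and the matching landmark points on the digital face, fitting a simple affine transform tells the software where to place the mask in each frame. This is a minimal NumPy sketch with made-up coordinates, not the studios' actual pipeline, which tracks many more points in 3D:

```python
import numpy as np

# Hypothetical tracked dot positions on the filmed actor's face (x, y in pixels)
actor_pts = np.array([[120.0, 80.0], [180.0, 82.0], [150.0, 140.0]])

# Matching landmark positions on the pre-built digital face
mask_pts = np.array([[20.0, 10.0], [80.0, 12.0], [50.0, 70.0]])

# Solve for the affine transform (scale, rotation, translation) that
# maps the digital mask's landmarks onto the actor's tracking dots.
A = np.hstack([mask_pts, np.ones((3, 1))])        # add a constant column
transform, *_ = np.linalg.lstsq(A, actor_pts, rcond=None)

# Applying the fit to any mask point lands it on the actor's face
placed = np.hstack([mask_pts, np.ones((3, 1))]) @ transform
```

Re-solving this fit on every frame, from the updated tracking-dot positions, is what keeps a digital face glued to a moving head.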
Both methods take a long time to complete. The majority of these special effects are done by digital studios like Lola VFX, Digital Domain and Weta Digital. All of these studios have won awards for their spectacular special effects and are groundbreaking in their adaptation of photo-editing techniques to film. They would most likely use a combination of Autodesk Maya, Houdini, Cinema 4D and Nuke to create their photorealistic images, while using Shotgun to track shots and assets across the production. This arrangement of software is common among many special effects studios.
On Your Smart Phone
Most people now have access to similar technology through application software like FaceApp. It's a fun app that can give some spooky results, using AI-generated templates to give your face makeup (really good Hollywood makeup), different hair colours and styles, or a gender swap. The output can be completely out there, but it is fairly accurate within the realm of the photo. It builds upon what is in the photo the majority of the time. For instance, if the selected style adds bangs, the app appears to first place generic hair on your forehead, then add small hairs to match any flyaways on top of your head, then colour-match your hair. It also uses old video tricks like the Vaseline-on-the-lens effect, which creates a halation or glow in a photograph. The results are generated by cloud-based machine learning software that calculates and predicts what the face would look like using neural networks, drawing on typical solutions and common outcomes to build new faces.
But this app is not without controversy. In 2017, a FaceApp update allowed people to render what their faces would look like if they were a different race from their own. Doing this type of guesstimate within an application was weird and distasteful, and the feature was removed after the uproar. More recently, an internet challenge has had many users uploading their faces rendered to look old, with many people questioning how the application creates and stores those faces.
There are different types of artificial intelligence software that can have a similar effect, for example:
- Google AI – Insert yourself into great works of art
- Pikazo – Integrate yourself in or as fine art in real time
- Lollicam – Make your face into a cinematic motion graphic in seconds
- Snapchat – Edit your face with augmented reality animations in real time
- Masquerade – Digitally wear a mask in photos and videos
- FaceSwap – Swap out a face in a photo with your own mug
- Reflect – Another face swapping software
Most of these applications use artificial neural networks to render their faces. Neural networks are sets of algorithms that detect and recognize patterns, loosely modelled on the structure of the human brain. The data the networks process is encoded as numerical vectors, whether it represents time series, text, sound or images. They use deep machine learning to generate new computerized information. Neural networks are organized in layers that feed forward in one direction. These networks work best on tasks with a high tolerance for error; a slightly off photo is okay, but a slightly off bank account is problematic. Python is most likely the coding language most of these apps used and customized. Each company organizes its own networks in its own way to benefit its end results. Further reading on neural networks is available from the Massachusetts Institute of Technology and the University of Wisconsin-Madison.
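The feed-forward idea can be shown in a few lines of NumPy. The layer sizes and random weights below are made up purely for illustration; a real face-editing network is vastly larger and has been trained on millions of photos rather than initialized at random:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Input: an image (or any data) flattened into a numerical vector
x = rng.random(4)                             # 4 input features

# Each layer is just a weight matrix plus a bias vector
W1, b1 = rng.random((3, 4)), rng.random(3)    # hidden layer: 4 -> 3
W2, b2 = rng.random((1, 3)), rng.random(1)    # output layer: 3 -> 1

# Information feeds forward through the layers in one direction
hidden = sigmoid(W1 @ x + b1)
output = sigmoid(W2 @ hidden + b2)
```

Training consists of comparing `output` against a known answer and nudging the weight matrices to reduce the error, repeated over a huge dataset; that adjustment process is the "learning" in deep machine learning.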
The Dark Side To This Technology
This ranges from the unsettling to the uncanny valley
With so much work aimed at a human-like appearance, many animations can look unsettling or not even passable. This has often been spoken about in recent times since the Vsauce video Why Are Things Creepy? was posted. The video brings up the topics of ambiguity and the uncanny valley. Video games and 3D animations that aim for a life-like version of humans on screen can fall victim to this eruption of weirdness. The most recent films in which most people can find the uncanny valley are Sonic the Hedgehog and Cats. The attempt to combine photorealistic fur, human movements and human features in one character turns out to be creepy: Sonic having real human teeth instead of the more cartoony look of the original video game character design, or the cats in Cats having human bodies with cat fur and cat-infused elements grafted onto a human frame. It could be that these characters are too lifelike for us to know what to do with the information, too alien for us to comprehend, or so repulsive that our primal human instincts take over and tell us to run. The reaction may be exaggerated, but if these films were 2D animated there wouldn't be much to discuss.
This is happening more often in animated films for various reasons. One of those reasons might be how long it takes an animator to render a motion properly, with all the constraints of physics and the multiple angles that make a scene realistic. We may be attempting too much in a medium that is still not ready to animate someone to the point that we can't tell it's not human. Furthermore, some productions might never have enough rendering time to reach the feeling that the work is finished and ready to release.
The theory is based on how robots can look eerie as they start to look more human, right up until the point where they become indistinguishable from one. The essay was originally written by Masahiro Mori in the 1970 Japanese journal Energy. It argues that robots are mostly designed for functionality, and that redesigning a robot to look more human makes it unsettling unless it's done perfectly. It is nearly impossible to design something that looks and moves perfectly like a human being, whether in film, video games or robotics. The essay also examines how movement alone can throw everything off, from almost human to I-don't-know-what. For instance, it notes that a smile is, mechanically, a deformation of the face; a happy smile moves quickly, but the same smile in slow motion looks creepy. A better example from me would be the basis of body horror, when a person is transforming into the monster: that one moment of not knowing what the creature is going to be can be creepy, though this is an extreme example.
Eerie animations that took a chance at photorealism aren't a new problem in 3D animated film. Four big examples are Final Fantasy: The Spirits Within, The Polar Express, Beowulf and TRON: Legacy. All four films have photorealistic characters that ended up being distracting. The animations can look phony and weird, provoking the reaction that humans just don't move like that.
What is real?
There is a dark side to all of this advanced technology: you can reach the point where you can't tell what's real. The online renderings from face-swapping software are concerning to the point that people's identities and personal safety are in jeopardy. It could create future problems in deciding whether important information can be taken seriously, even when it wasn't produced out of malice.
Jordan Peele did a PSA impression of the 44th President, Barack Obama, using face-swapping software, showcasing the danger of not knowing whether the person on screen is real. In the video, you can tell the software is struggling to keep up with the movement of the mouth by the blurring and the slightly shifted alignment with the face. If the software becomes more exact, it might become seamless and perfectly human.
With new technology there's going to be a lot of good and a lot of bad. That's part of the learning curve of using something just invented. Hopefully we see more character development in the future of CGI than the search for realistic animated characters in modern movies. Special effects don't make a good movie; good storytelling does.