Banner image: CGI being as realistic as real life. The banner showcases the much-criticized graphics of Green Lantern, Sonic the Hedgehog and The Scorpion King.

A look inside CGI being just as realistic as real life

The hunt for more realistic CGI animation: a brief run-through of modern-day CGI.

Most movies have used computer-generated imagery (CGI) software to create an effect that would have been unachievable with puppetry, modelling, makeup, prosthetics or miniatures. Other times it is simply faster to render these effects than to build practical props.

There have been a lot of advancements in computer graphics special effects. That is a result of software becoming more refined, hardware becoming capable of holding more memory, and more people specializing in computer animation for film, video games and other developing technologies.

The modern classic science fiction film Back to the Future Part II has a scene that gave a fictional glimpse of what the future would look like in 2015. When Marty McFly walks down the main street of Hill Valley in 2015, he sees a 20-foot shark lunge towards him and thinks it is going to eat him. Well, it doesn't; it's fake. The shark is a holographic advertisement for Jaws 19 in 3D. The entire fictional sequel would be in 3D, and its characters and settings would also have to be rendered in graphics software for a movie that doesn't exist.

A screenshot of Universal's Back to the Future Part II (1989) with Marty McFly in front of an interactive advertisement for Jaws 19.

There was a movie released two years ago called Valerian and the City of a Thousand Planets. This 2017 fantasy action movie is based on a French comic book series of the same name and leans on computer graphics for nearly everything, to the point that it became the most expensive European and independent film ever made. The majority of the characters were digitally produced from blue-screen studio footage. The movie really does create its own reality of technology, aliens and alternate dimensions. For instance, in the marketplace scene, Valerian is running away from some hostile merchants through a market that exists in another dimension, where shoppers run through walls and interact with objects and beings that aren't physically there. You can see the extent of the visual effects.

Two screenshots from Valerian and the City of a Thousand Planets (2017).

What’s the difference between CGI and VFX?

Visual Effects (VFX): These are effects achieved during the editing and post-production stage of filmmaking. They are the manipulation and/or creation of imagery on top of the live-action footage filmed earlier, made to replace practical effects that would have been costly, dangerous, physically impossible and/or tiresome. Visual effects can be done with computer-generated imagery in easily available software like Adobe After Effects or Cinema 4D. This field of work requires an understanding of animation, video production and a high level of computer literacy. There are also types of visual effects that don't require a computer, like rotoscoping, which is tracing over film frame by frame to make it look animated.

A clip from Sh! The Octopus (1937) using colour-shifting channels to reveal a twist.

Computer-Generated Imagery (CGI): Computer software is used to create imagery for media. The medium can be print, film, gaming, mobile, advertising or animation, but the term mostly refers to 3D graphics for film, gaming and animation. If you are in the field of VFX today, you would most likely be doing CGI work at some point in your day. Modern computer-generated imagery takes over the work of stop-motion models, hand-painted matte backgrounds and 2D frame-by-frame animation. There is a large focus on the creation of realistic 3D-generated graphics. Common programs that can create these effects range from Blender and GIMP to Autodesk Maya.

A .gif comparing the motion-capture footage of the actor with the completed animation from the film Alita: Battle Angel (2019).

Getting a digital facelift

In the Movies

Some big-budget movies have scenes in which an actor's face is digitally de-aged or, in some cases, made significantly older.

Most modern movies with older actors in need of a younger face can alter the facial geometry to look younger. Many of the Marvel movies (from both Fox and Marvel Studios) have scenes that use the digital facelift. Films like X-Men: The Last Stand, Ant-Man, Guardians of the Galaxy Vol. 2, Iron Man 3, Pirates of the Caribbean 5, the Fast and the Furious series and the soon-to-be-released Gemini Man use this type of technology to rejuvenate faces. Or the reverse, in films like The Curious Case of Benjamin Button.

A screenshot from Gemini Man (2019). An elite assassin becomes the target of another operative who can predict his every move, only to figure out that it's his clone. Will Smith plays both roles: the elite assassin on the right and, with CGI help, the younger counter-operative on the left.

They create this effect by finding multiple images of the main actor at the age they want to de-age to. The animators and coders need to see the face at various angles and in various expressions to build the digital face. They then find an actor who looks similar to the main actor so his face can be swapped out with the digital one. They draw mapping dots on the younger actor's face: around the mouth, the apples of the cheeks, the tip of the nose, the jawline, the whole eyebrow area and the forehead. These dots anchor where to place the digital mask on the actor's face, or serve as tracking markers used later to smooth, blur, stretch and remodel the face within the frames so it looks younger. Large teams and many hours are devoted to rendering a single minute of footage to the standard required for the scene.
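To give a rough sense of what those tracking dots are for, here is a minimal sketch, not any studio's actual pipeline, of following a handful of marker points from frame to frame with OpenCV's optical flow. The footage path and marker positions are made up for illustration; a digital face layer would then be warped to wherever the markers land in each frame.

# A minimal sketch (not a studio pipeline) of the tracking-dot idea: follow a
# set of marker points from frame to frame so a digital face layer can stay
# anchored to the actor. The video path and marker positions are hypothetical.
import cv2
import numpy as np

cap = cv2.VideoCapture("plate_footage.mp4")   # hypothetical filmed plate
ok, prev_frame = cap.read()
if not ok:
    raise SystemExit("could not read footage")
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# Pretend these are the painted-on dots: mouth corners, cheeks, nose tip, jawline.
markers = np.array([[420, 310], [480, 312], [450, 360], [400, 400], [500, 402]],
                   dtype=np.float32).reshape(-1, 1, 2)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track where each marker moved between the previous frame and this one.
    new_markers, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, markers, None)
    for (x, y), good in zip(new_markers.reshape(-1, 2), status.ravel()):
        if good:
            cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
    cv2.imshow("tracked markers", frame)   # the digital mask would be warped to these points
    if cv2.waitKey(30) == 27:              # press Esc to quit
        break
    prev_gray, markers = gray, new_markers

cap.release()
cv2.destroyAllWindows()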

Either way, work like this takes a long time to produce. The majority of these types of special effects are done by digital studios like Lola VFX, Digital Domain and Weta Digital. All of these studios are award-winning for their spectacular special effects and groundbreaking in their adaptation of photo editing to film. They would (most likely) use a combination of Autodesk Maya, Houdini, Cinema 4D and Nuke to create their photorealistic images, while using Shotgun to track the shots and assets across the production. This arrangement of software is common among many special effects studios.

On Your Smartphone

Most people now have access to similar technology through application software like FaceApp. It's an app that can produce some spooky results, using AI templates to give your face makeup (really good Hollywood makeup), different hair colours and styles, or a different gender. The output is completely out there but pretty accurate within the realm of the photo. Most of the time it builds on what is already in the photo; for instance, if the selected style adds bangs, it looks like the app first places generic hair on your forehead, then adds small hairs to match any flyaways on top of your head, then colour-matches your hair. It also uses old video tricks like the Vaseline-on-the-lens effect, which suggests a halation or glow in a photograph but is really a blur that covers the area and blends in the edit. The results are generated by cloud-based machine learning software whose neural networks calculate and predict what the altered face should look like, relying on typical solutions and common outcomes to build new faces.

But this app is not without controversy. In 2017, a FaceApp update let people render what their faces would look like if they were a different race from their own. It was a weird and distasteful thing for an application to guess at, and the filters were removed after the backlash. Currently, there is an internet challenge in which users upload their faces rendered to look old, with many people questioning how the application creates those faces and what data is being collected. It's always best to read the terms of agreement before using new technology.

There are different types of artificial intelligence software that can have a similar effect, for example:

  • Google AI – Insert yourself into great works of art
  • Pikazo – Integrate yourself in or as fine art in real time
  • Lollicam – Make your face into a cinematic motion graphic in seconds
  • Snapchat – Edit your face with augmented reality animations in real time
  • Masquerade – Digitally wear a mask in photos and videos
  • FaceSwap – Swap out a face in a photo with your own mug
  • Reflect – Another face swapping software

Most of these applications use artificial neural networks to render their faces. Neural networks are a set of algorithms, loosely modelled on the [human] brain, that detect and recognize patterns. The patterns the networks work with are numerical vectors, into which real-world data such as time series, text, sound and images must be translated. They use deep machine learning to create new computerized information. Neural networks are organized in layers that feed forward in one direction. These types of networks work best where a high level of error can be tolerated, hence a photo is okay but a bank account is problematic. Python is most likely the coding language these apps used and customized. Each company organizes its networks in its own way to suit its end results. For further reading on neural networks, see the resources from the Massachusetts Institute of Technology and the University of Wisconsin-Madison.
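As a minimal sketch of that feed-forward idea, nothing like the scale of the models these apps actually run, here is a tiny network in plain NumPy: data flattened into a numerical vector flows through the layers in one direction, each layer being a matrix multiply followed by a non-linearity. Training the weights against example faces is omitted entirely, and the layer sizes are arbitrary.

# A tiny feed-forward network in NumPy: data flows one way through the layers.
# Weights are random here; a real app would train them on many example faces.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Toy "image" flattened into a numerical vector, as real-world data must be.
x = rng.random(64)

# Two hidden layers and an output layer, with randomly initialised weights.
W1, b1 = rng.normal(size=(32, 64)), np.zeros(32)
W2, b2 = rng.normal(size=(16, 32)), np.zeros(16)
W3, b3 = rng.normal(size=(4, 16)), np.zeros(4)

h1 = relu(W1 @ x + b1)   # layer 1: pattern detectors over the raw input
h2 = relu(W2 @ h1 + b2)  # layer 2: patterns of patterns
out = W3 @ h2 + b3       # output: e.g. parameters describing a new face

print(out)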

The “laws of physics” are sometimes ignored

There is a difference between throwing a ball and imagining how a ball should be thrown. CGI at its most basic is a garnish to a story that helps the plot work visually, almost like frosting on a cake. But some CGI ignores how characters move and interact within a scene.

When most actors are in front of a green-screen setup, they don't have anyone to interact with. Classically trained actors with years of experience playing off another actor on the same stage now find themselves alone, delivering lines meant as conversation like a monologue. The early days of strictly green-screen movies, where most of the film is animated in post-production, were difficult for some actors depending on the type of production.

If an actor has to mime throwing something that isn't in their hand, with the object added later in CGI from the left to the right of the screen, the rendered object tends to glide smoothly through the air and land perfectly, maybe rotating a full 360 degrees with some motion blur added to convey speed. That ball would not look realistic, because if the actor really threw a ball from one point to another, it would blur a little, depending on how hard it was thrown, and it might rotate a bit, but not a lot, again depending on how it was thrown.
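As a toy sketch, with purely made-up numbers, of the kind of simple physics such a throw has to respect: a ball leaving the actor's hand follows a parabolic arc under gravity rather than gliding in a straight line, its horizontal speed staying roughly constant while gravity bends the path downwards.

# Toy projectile sketch: a thrown ball sampled one film frame at a time.
# All numbers (throw speed, angle, release height) are invented for illustration.
import math

g = 9.81                  # gravity, m/s^2
speed = 8.0               # how hard the (hypothetical) actor throws, m/s
angle = math.radians(30)  # release angle above horizontal
vx, vy = speed * math.cos(angle), speed * math.sin(angle)

dt, t = 1 / 24, 0.0       # step one film frame (24 fps) at a time
x, y = 0.0, 1.5           # released at roughly shoulder height, in metres
while y > 0.0:            # until the ball hits the ground
    print(f"t={t:.2f}s  x={x:.2f}m  y={y:.2f}m")
    x += vx * dt
    vy -= g * dt          # gravity slows the rise, then pulls the ball down
    y += vy * dt
    t += dt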

Physics is hard to render well on screen: gravity, weight, velocity, the conditions around an object and whether it bounces on landing all affect how a single object should be rendered, whether it is still or in motion. Most animators and computer artists are not physicists and know only the basics of physics. It takes a lot of hard work and long hours to create a realistic scene that respects reality and only slightly bends it.

Some tools use animated simulations to create complex scenes like fire burning, glass shattering or a brick wall breaking apart, but these are just starting points onto which the artist adds what is realistic for the scene. Several people might animate ten seconds of film, then review the clip to see if that sense of reality is there. In film, especially animated film, there are both physics and cartoon physics that can affect the outcome of a scene. Physics is the study of matter, its motion and its behaviour through space and time, while cartoon physics suspends those rules for comedic effect. For example, when a building explodes in an action scene, the actors go flying across the parking lot away from the fiery mess. With real physics at play, the building would more likely come down in smoke and debris; the characters might not be thrown back at all, but they would show signs of injury and hearing loss. With movies, sometimes it's okay to dispel your notions of reality for a few minutes.
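To illustrate the "simulation as a starting point" idea, here is a deliberately tiny sketch that treats a breaking wall as a handful of debris particles given random initial velocities and then stepped forward under gravity with a crude bounce. Real production simulations are vastly more detailed; every number here is invented, and the artist's job begins where this kind of raw output ends.

# A handful of debris "particles" stepped forward under gravity with a crude bounce.
# This is only a starting point; an artist would layer realism on top of it.
import random

g, dt = 9.81, 1 / 24  # gravity and one film frame at 24 fps
random.seed(1)

# Each particle: position (x, y) and velocity (vx, vy) from the initial break.
particles = [{"x": 0.0, "y": random.uniform(0.5, 3.0),
              "vx": random.uniform(1.0, 4.0),
              "vy": random.uniform(-1.0, 3.0)} for _ in range(5)]

for frame in range(12):          # half a second of footage
    for p in particles:
        p["vy"] -= g * dt        # gravity pulls every fragment down
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        if p["y"] < 0.0:         # hit the ground: clamp, damp and bounce
            p["y"] = 0.0
            p["vy"] *= -0.3
            p["vx"] *= 0.8
    positions = "  ".join(f"({p['x']:.2f},{p['y']:.2f})" for p in particles)
    print(f"frame {frame:02d}: {positions}")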

From unsettling to the uncanny valley

With so much work aimed at a human-like appearance, many animations can land anywhere from unsettling to not even passable. This has been spoken about often in recent times, after the Vsauce video Why Are Things Creepy? was posted. The video brings up the topics of ambiguity and the uncanny valley. Video games and 3D animations that aim for a lifelike version of humans on screen can fall victim to this intrusion of weirdness. The most recent films in which most people can spot the uncanny valley are Sonic the Hedgehog and Cats. The attempt to do photorealistic fur, human movement and human features in one character can come across as creepy because of its humanistic qualities: Sonic has realistic human teeth instead of the more cartoony look of the original video game character design, and the cats in Cats have human bodies covered in cat fur, with feline elements grafted onto a human form. It could be that something so lifelike leaves us not knowing what to do with the information, as if it is too alien for us to comprehend, or so repulsive that our primal human instincts take over and tell us to run. The reaction could be exaggerated, but if these films were 2D animated there wouldn't be much to discuss.

The character in both images is Bombalurina singing a ballad.
Top: A screenshot of the movie Cats (2019) from Universal Pictures.
Below: A screenshot of the stage production of Cats (1998).

This is happening more often in animated films for various reasons. One of those reasons might be how much time an animator has to render a motion properly, with all the small imperfections that rely on physics and the multiple angles that make a scene more realistic. We may be asking too much of a medium that is still not ready to animate someone to the point that we can't tell it isn't human. Furthermore, some teams might never have enough rendering time to feel the work is finished and ready to release.

A graph of the uncanny valley based on the examples from Masahiro Mori's 1970 essay.

The theory is based on how robots can start to look eerie as they become more human-like but fall short of actually being human. The essay was originally written by Masahiro Mori in 1970 for the Japanese magazine Energy. It describes how robots are mostly designed for functionality, and how redesigning a robot to look more human makes it look unsettling unless it's done perfectly. It may always be impossible to design something in film, video games or robotics that looks and moves perfectly like a human being. The essay also examines how movement alone can throw everything off, from almost human to "I don't know what that is". For instance, it notes that a smile is, at first glance, a deformation of the face: a happy smile moves quickly, but if it moved slowly it would look creepy. A better example from me would be the basis of body horror, when a person is transforming into the monster: that moment of not knowing what the creature is going to become is creepy, though this is an extreme example.

Eerie animation that takes a chance at photorealism isn't a new problem in 3D animated film. Four big examples are Final Fantasy: The Spirits Within, The Polar Express, Beowulf and TRON: Legacy. All four films have photorealistic characters that ended up being distracting. The animation can look phony, weird and "humans don't move like that".

What is real?

There is a dark side to this type of advanced technology: when you can't tell whether something is real. The renderings from face-swapping software are concerning to the point that people's identities and personal safety could be in jeopardy. It could create future problems in deciding which information to take seriously and whether it was produced with malice.

Jordan Peele made a PSA doing an impression of the 44th president, Barack Obama, with face-swapping software, showcasing the dangers of not knowing whether the person on screen is real. In the video, you can tell the software is struggling to keep up with the movement of the mouth by the blurring and the slightly shifted alignment with the face. If the software becomes more exact, it could become seamless and look perfectly human.

Video uploaded by BuzzFeed showcasing the neural networks at play.

This type of technology has been debated in advertising since the Dirt Devil commercials with Fred Astaire in the 1990s. In fact, multiple stars have had their likeness used to advertise products, like Marilyn Monroe in a Chanel No. 5 commercial decades after her death and Audrey Hepburn in a Gap commercial. There were multiple weirdly edited Paula Abdul Diet Coke commercials, with Abdul singing about cola and dancing alongside clips, inserts and facial expressions of Cary Grant, Groucho Marx and Gene Kelly pulled from past films. Elton John did a similar commercial, playing the piano in front of party guests Humphrey Bogart, James Cagney and Louis Armstrong, digitally re-edited rather than simply cut and pasted in. The trend left a lot of people more upset than ready to applaud the technological achievement.

Nowadays this sometimes still occurs, but in full-length feature films. In recent Star Wars films, Peter Cushing and Carrie Fisher appeared in their roles as Grand Moff Tarkin and Princess Leia with the help of CGI to include them in the story. Sometimes this technology is used to honour the dead or to finish that person's work.

With new technology, there's going to be a lot of good and a lot of bad. That's part of navigating the curves of something just invented. Hopefully we will see more character development in the future of CGI than the search for realistic animated characters in modern movies. Special effects don't make a good movie; good storytelling does.


Resources:

EDUCBA – VFX vs CGI

Tech Crunch – FaceApp uses neural networks for photorealistic selfie tweaks

Skymind – A Beginner’s Guide to Neural Networks and Deep Learning

IEEE Spectrum – The Uncanny Valley: The Original Essay by Masahiro Mori
