
Investigative Study

Week 01: Introduction to the module / Deepfake

The main goal of this module is to develop an essay in which we examine a specific topic in VFX. The essay, driven by a topic of our own choosing, is meant to push us, as VFX artists, to conceptualise and theorise our processes and outputs more deeply. In the first session of the module, we discussed deepfakes.

Deepfake

Deepfake is a technology that uses machine learning and artificial intelligence (AI) to produce synthetic media, mainly video and images. A deepfake substitutes one person's likeness with another person's face, matching face and body to create an illusion of reality. VFX artists have long been putting actors' faces on stunt doubles' bodies; what is relatively new about deepfakes is that the replacement is done by AI-powered software rather than meticulously by hand. Deepfakes have a wide range of applications among Internet users, mainly for entertainment and fun, but sometimes for blackmail and pornography.

In VFX and cinema, however, quality problems and shot-specific needs have so far made deepfakes unsuitable for final shots. Nonetheless, the technology is advancing enormously as machine learning algorithms evolve. Compared with fully CG actors, AI-based face replacement lets filmmakers transform their actors into any character they want more easily, quickly and, of course, inexpensively. Filmmakers can use deepfakes as "digital makeup", for instance to age or de-age actors. As the technology advances, AI-generated synthetic media may even bridge the uncanny valley better than conventional computer-generated footage. VFX companies and artists have not yet fully embraced deepfakes, but they may adopt similar AI technologies to reach a high level of realism as the technology behind deepfakes improves.
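Most face-swap deepfake tools are commonly described as a pair of autoencoders that share one encoder: each identity gets its own decoder, and the swap consists of decoding one person's frames with the other person's decoder. Below is a minimal, purely illustrative PyTorch sketch of that shared-encoder/twin-decoder idea; all layer sizes, names, and the training loop are assumptions for the example, not any specific tool's code.

```python
import torch
import torch.nn as nn

# Toy shared-encoder / twin-decoder setup behind face-swap deepfakes.
# Sizes are illustrative (64x64 RGB crops), not a production model.
LATENT = 256
encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, LATENT), nn.ReLU())
decoder_a = nn.Sequential(nn.Linear(LATENT, 64 * 64 * 3), nn.Sigmoid())  # person A
decoder_b = nn.Sequential(nn.Linear(LATENT, 64 * 64 * 3), nn.Sigmoid())  # person B

params = [*encoder.parameters(), *decoder_a.parameters(), *decoder_b.parameters()]
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    # Each identity is reconstructed through its own decoder, but both
    # go through the shared encoder, which is thereby pushed to learn
    # identity-agnostic features such as pose, expression and lighting.
    rec_a = decoder_a(encoder(faces_a))
    rec_b = decoder_b(encoder(faces_b))
    loss = loss_fn(rec_a, faces_a.flatten(1)) + loss_fn(rec_b, faces_b.flatten(1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap_a_to_b(faces_a):
    # The actual "deepfake": encode frames of A, decode with B's decoder.
    with torch.no_grad():
        return decoder_b(encoder(faces_a)).reshape(-1, 3, 64, 64)
```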

 

References

Aldredge, J. (2020) Is Deepfake Technology the Future of the Film Industry? Available at: https://www.premiumbeat.com/blog/deepfake-technology-future-of-film-industry (Accessed: 19/11/2021).

 

Bridging the uncanny valley: what it really takes to make a deepfake (2019) Available at: https://www.foundry.com/insights/film-tv/digital-humans (Accessed: 19/11/2021).


Failes, I. (2020) Deep Fakes: Part 1 – A Creative Perspective. Available at: https://www.vfxvoice.com/deep-fakes-part-1-a-creative-perspective (Accessed: 19/11/2021).

 

Jaiman, A. (2020) Positive Use Cases of Synthetic Media (aka Deepfakes). Available at: https://towardsdatascience.com/positive-use-cases-of-deepfakes-49f510056387 (Accessed: 19/11/2021).

Deepfake examples

Boris Johnson and Donald Trump deepfake video by Framestore for the Cannes LIONS Live virtual creativity festival. 

Deepfake video of the Ukrainian president telling his people to lay down their weapons.

Deepfake video of former US president Barack Obama. 

Unreal: The VFX Revolution - A Long, Long Time Ago...

Front projection effect: the image is projected onto a one-way mirror and then is reflected onto a highly reflective screen.

In the first episode of the podcast "Unreal: The VFX Revolution", Paul Franklin, an award-winning visual effects supervisor, explains how visual effects evolved over the years and changed cinema. Giving a brief history of visual effects before the digital era of the 1990s, he draws upon the experience and first-hand accounts of influential pioneers such as Robert Blalack, John Dykstra, Richard Edlund, Dennis Muren and Doug Trumbull. Franklin suggests that Star Wars, along with Close Encounters of the Third Kind, "kicked off a renaissance in visual effects" (Franklin, 2021). He describes visual effects as a tool used by creative filmmakers such as Stanley Kubrick, George Lucas, and Steven Spielberg to tell their stories more effectively than ever, and he traces this tradition back to Georges Méliès' "A Trip to the Moon", which he presents as the first example of visual effects being used in cinema as a powerful storytelling tool. Comparing the early days of visual effects with today, he states that "what was once an exclusive domain of pioneers like George Lucas and Steven Spielberg has become a standard part of filmmakers' toolbox" thanks to technological development and advances in computing (Franklin, 2021).

Franklin describes "2001: A Space Odyssey" as revolutionary and cutting-edge in its visual effects innovations and techniques. In the film, Stanley Kubrick pioneered the use of front projection to produce the backdrops of the Dawn of Man sequence and the scene in which astronauts walk on the Moon. Front projection is an in-camera visual effects technique for compositing foreground performance with pre-filmed background footage. "The entire process of making 2001 was a very technical and photographic quality control project", states Douglas Trumbull, who worked on 2001 and Spielberg's Close Encounters (Franklin, 2021).

Franklin also explains what a significant impact George Lucas's Star Wars had at the time, and how large a part visual effects played in the film's achievements. Its phenomenal box office success pushed filmmakers to invest in visual effects and helped spark a technical and creative revolution across the industry. Still, Franklin notes, "Visual effects work is always slow going. Even today, with all our digital tricks, every image is painstakingly crafted. It's a labour-intensive, time-consuming process that often doesn't produce a result until the eleventh hour – it can lead to tension." (Franklin, 2021) Today, when digital processes and computer technologies have largely replaced photochemical effects, anything seems possible in cinema thanks to visual effects, from outlandish creatures to distant worlds.


Reference
Franklin, P. (2021) Unreal: The VFX Revolution - A Long, Long Time Ago... [Podcast]. 06 July. Available at: https://www.bbc.co.uk/sounds/play/m000xltg (Accessed: 08/10/2022).
 

Front Screen Projection (YouTube video)
The Problem with De-Aging and the Irishman - Video Essay (YouTube)
Week 02: Developing ideas, case studies and project examples

This week, we looked at case studies, tried to come up with ideas, and analysed several previous assignments in preparation for the next investigative research project. We then read an essay from the previous year and answered the following questions.

What is the research question? Outline the aims and objectives.

  • Title: Digital Compositing and the Creation of Seamless Visual Effects

  • Focused on seamless effects, not spectacular ones

  • Definition of compositing

  • Defining "digital" compositing

  • Comparing traditional compositing with digital compositing

  • Identifying the place of compositing in the broader VFX pipeline

  • Determining what a seamless effect is

  • Comparing seamless effects with other types of visual effects

  • Using findings from practical research

 

Can you describe the method used for the practical research?

  • The student used a background image to create two different composite images, one in the style of an invisible effect and the other as a spectacular effect. Foundry NukeX was used to create the composites, and some additional software was used to texture and render a 3D asset for the spectacular-effect example: texturing was done in Adobe Substance 3D Painter, and rendering in Autodesk Maya with the Arnold render engine.


What were the findings of the practical and/or theoretical research?

  • The main finding of the research was that, after the design and composition of the image, the most important thing was the seamless combination of the additional elements with the background image.


What was the student's argument? 

  • The student claims that spectacular effects could also be considered seamless effects, as both the invisible effect example and spectacular effect example have to appear realistic, and any added elements have to blend seamlessly, otherwise they would fail as composite images.

Picasso Draws With Light

Pablo Picasso was introduced to Gjon Mili, a photographer working for LIFE magazine at the time, in 1949. Mili made long-exposure photographs using two separate cameras, for side and frontal perspectives, while Picasso drew a series of light drawings in the air, full of dramatic motion in space and time.

[Images: Gjon Mili's photographs of Picasso drawing with light]
Eadweard Muybridge

English photographer Eadweard Muybridge is well known today for his innovative motion photography experiments, which ultimately sparked the invention of cinema. Watch the short YouTube video below.

Brainstorming my research topics - Miro.com: mind maps 
Week 03: Designing a question and writing a proposal

We looked at how to design a good essay question and write a proposal draft this week. 

Our proposal should contain the following:

  • A clear, focused title or research question

  • A list of searchable Keywords

  • An Introduction to the Investigative Study

  • The aims and objectives

  • A Methodology or list of five key sources (References) – these should be annotated

  • Any important images as figures


Word count: 500 - 600

My investigative study research title (initial thoughts)
  • Photorealism in compositing with Nuke

  • Techniques for photorealism in compositing with Nuke

  • What is photorealism, and how can it be achieved in compositing with Nuke?

References and resources

I have looked for different resources to help me research the topic I have picked. Below are a few resources I have found. 

[Screenshots of the resources]
Week 04: Methodologies and Literature Reviews
Proposal

What are camera lens artefacts, and how can they help us achieve photorealism in compositing with Nuke?

In my investigative study, I will examine camera lens artefacts, the reasons they are helpful for photorealism, and the techniques and workflows for recreating them in Foundry Nuke when compositing visual effects. I have picked this topic because I have noticed that camera lens artefacts play a significant role in achieving photorealism in visual effects, especially when computer-generated imagery is involved. With or without CGI, a composite should be believable so that viewers believe in the narrative and the world created by the film. As a VFX student and a photography enthusiast, I have always thought about the profound relationship between VFX, photography and lenses, and the way they interact with each other.

In photography, reality is recorded through a camera lens; therefore, factors such as light, depth of field, shutter speed, point of view and perspective, framing, and colour all have an impact on the captured image. To achieve photorealism in visual effects, however, artists use 3D software to simulate the operation and impact of lights and cameras, aiming for a highly lifelike, natural, and accurate representation of reality. As Barbara Flueckiger (2015, pp. 78-98) claims, visual effects need to artificially simulate some analogue artefacts of film to accomplish and perfect a photorealistic look. VFX artists can apply a variety of effects to mimic a film appearance and heighten the realistic impression, including noise and grain, depth of field, lens distortion and chromatic aberration, lens flare, vignetting, bokeh, and motion blur. In my essay, I will investigate these camera lens artefacts and show how they contribute to photorealism in compositing visual effects.

As part of my practical research, I will recreate these camera lens artefacts with Foundry Nuke to enhance the photoreal quality of a shot at the compositing stage of the VFX pipeline. I will recreate a shot from a film, applying the camera lens artefacts and showing how they can improve the image, and I will investigate different workflows for replicating lens artefacts in my composite. The original image, which has no camera lens artefacts, and my improved composite will then be placed side by side to demonstrate the difference those effects create.


Keywords:

Camera Lens Artefacts, Foundry Nuke, Digital Compositing, Photorealism, Exposure Triangle, Tilt Shift, Motion Blur, Depth of Field


References:

  • Bolter, J.D. and Grusin, R. (2000) Remediation: understanding new media. Cambridge, Massachusetts: The MIT Press.

  • Brinkmann, R. (2008) The art and science of digital compositing. 2nd ed. USA: Morgan Kaufmann Publishers.

  • Flueckiger, B. (2015) Special effects: new histories/theories/contexts. Edited by D. North et al. London: Bloomsbury, pp. 78-98.

  • Mitchell, W.J. (1994) The Reconfigured Eye: Visual Truth in the Post-Photographic Era. Cambridge, Massachusetts: The MIT Press.

  • Prince, S. (2010) 'Through the Looking Glass: Philosophical Toys and Digital Visual Effects', Projections, 4(2), pp. 19-40.

  • Wright, S. (2010) Digital compositing for film and video. USA: Focal Press.

Research for the practical part of my investigative study

This tutorial discusses some of Nuke's most important workflows and techniques in compositing. It also talks about creating some of the lens artefacts, such as grain, depth-of-field, lens flares, chromatic aberration, etc. 

This YouTube video gives an overview of common artefacts and characteristics of camera lenses.

This one focuses on these camera lens artefacts: chromatic aberration, focus/defocus and the bokeh effect. 

Intro to research methods and methodologies
Writing The Methodology For Your Thesis Or Paper
My Investigative Study Essay

What are camera lens artefacts, and how can they help us achieve photorealism in compositing with Nuke?


Introduction
While studying the Visual Effects programme at the University of West London, I found myself intrigued by compositing. This fascination drove me to expand my knowledge by investigating a question in this field. In addition, as a photography enthusiast, I have always considered the fundamental connection between VFX, photography, and lenses and how they interconnect. This essay will focus on camera lens artefacts: why they can be important for photorealism, how to achieve them in Foundry Nuke, and how to use them to produce photorealistic visual effects at the compositing stage. I chose this topic because camera lens artefacts play a crucial role in attaining photorealism in visual effects, whether in creating 3D assets or through the compositing process.


What is Photorealism?
In the 1960s, Pop art gave rise to the photorealist movement. Using photography as their primary source, artists meticulously recreated photographs in their paintings. Photorealism can also be seen as a reaction to Abstract Expressionism and Minimalism: a return to painting true-to-life scenes, offering as realistic a representation of reality as possible at a time when the art world was inundated with abstraction. Members of the movement included Ralph Goings, Richard Estes, Audrey Flack, and Charles Bell. (Bolter and Grusin, 2000) Their paintings were incredibly illusionistic and referenced the replicated photographs rather than nature. (‘Photo-realism’, 2019) The movement also included American sculptors such as Duane Hanson and John De Andrea. Hanson was renowned for creating life-size, lifelike human sculptures, casting them from human models in a variety of materials, including polyester resin, fibreglass, Bondo, and bronze.

 


Figure 1: ‘The Plaza’ (1991), by Richard Estes.

Often, photorealists projected a photograph onto a canvas and then used an airbrush to mimic the look of a print on glossy paper. Estes asserted that the photograph was principally responsible for the artwork’s concept and that the painting was only a means of completing it. (‘Photo-realism’, 2019) He recreated the appearance of photography in his paintings so convincingly that a typical viewer may not be able to tell them from photographs.

 

Photorealism in CGI and Digital Compositing

The development of computer-generated imagery and the process of digital compositing are two distinct but related areas in which photorealism in visual effects may be studied. In both, the primary purpose of incorporating analogue artefacts is to adapt the imagery to the viewer’s expectations by simulating a film look and thereby achieving photorealism.

The term ‘photorealism’ is also frequently used in the digital animation industry. In photorealist animation, images must be visually indistinguishable from how a photograph of the depicted objects would appear if those objects really had the properties the images suggest. (Gaut, 2010)

 

It is worth looking at the depiction of Kong across film history to examine the evolution of photorealism in CGI. King Kong has pushed the capabilities of visual effects ever since he made his first appearance on the big screen; indeed, the development of Hollywood visual effects can be traced through the character of Kong.


Figure 2: a shot from King Kong (2005) directed by Peter Jackson.

In Peter Jackson’s 2005 remake of King Kong, motion capture (Mo-cap) was extensively used to add realism to Kong’s 3D model performance. Andy Serkis acted out Kong’s movements while wearing a suit that fed the information into a computer. Innovative fur technology developed by Weta Digital was used in the creation of Kong. Jackson’s Kong brought the character into the era of digital filmmaking by combining natural movements with digital innovation.


In their book Remediation: Understanding New Media, Jay David Bolter and Richard Grusin (2000) discuss how computer graphics depend on our cultural faith in ‘the immediacy of photography.’ Photorealists first ask us to accept the photograph as a record of the real, and then attempt to see how close they can bring us to that reality. Bolter and Grusin assert that professionals in computer graphics and painters both work on the remediation of photography.


Remediation is the central concept that Bolter and Grusin (2000) introduced in their book. When discussing remediation, the authors refer to the propensity for one medium to be reproduced in another. For instance, the first-person camera in cinema is repurposed or refashioned in computer games as an example of new digital media. Therefore, our experience of the Counter-Strike game, for instance, hinges upon our familiarity with the conventions of the first-person camera in cinema. 


Bolter and Grusin (2000) develop the notion of “the double logic of remediation” to explain how new media refashion older media. This double logic pertains to two opposing tendencies: immediacy and hypermediacy. Immediacy is a tendency to deny the presence of a medium, an attempt to make the medium transparent to the observer. Hypermediacy, on the other hand, presents itself as a fascination with the medium, a situation in which the medium is noticeable. For example, hypermediacy relates to our experience with computer screens filled with files, icons, and windows, as well as our experience with browsing web pages, where numerous kinds of media co-exist at the same time, competing to grab our attention. 


Bolter and Grusin (2000) contend that there is an oscillation in which new digital media forms shift from one tendency to the other, from immediacy to hypermediacy or vice versa. This oscillation seems critical to understanding how remediation works, for instance, in photorealistic CGI. Bolter and Grusin continue to claim that photorealism seeks to appear free of all references to itself as a medium and aims to be a representation of reality. Nonetheless, it is obliged to rely on linear perspective, for example, and camera lens visual features and standards. As a result, it shifts from immediacy to hypermediacy.


Computer-generated photorealism has to avoid discrepancies and illusion-breaking flaws, yet CGI is frequently thought to be too ‘perfect.’ A lack of randomness is one of the common problems of 3D models in CGI; faked illumination, specular reflection, and refraction are other frequent obstacles. Often, the colours in CGI are excessively bright, the lighting is too sharp, and there is no feeling of mood or atmosphere. Textures, too, may seem fabricated, especially skin textures. (Bolter and Grusin, 2000)


Early computer-graphics renders, such as Turner Whitted’s 1980 ray-traced image, give off a somewhat different impression, yet they were thought to be photoreal at the time.


Figure 3: Turner Whitted produced this image using an algorithm he devised in 1980.

Although it undoubtedly has certain characteristics of a photographic image, we would scarcely agree today that this image is photoreal, because it lacks features that are present in our physical world. For example, the glass sphere’s refraction appears too clean, and the shadow is too crisp to be realistic. In short, the lighting seems artificial, since there are no caustics, indirect reflections, or light bounces. We also see a lack of depth-of-field, which we may attribute to the comparatively limited calculation of the physical parameters of light propagation in early computer graphics. The outcome is an excessively clean and unnaturally sharp image. (Flueckiger, 2015)


In addition to the characteristics mentioned above that are associated with creating photorealistic computer-generated images, artists use software to simulate the operation and effect of lighting and cameras further in the compositing stage of the VFX pipeline to create photorealism. CGI and digital composites can be enhanced with a variety of analogue artefacts, including noise and grain, depth-of-field, motion blur, lens flares, vignetting, and more. In addition, they can help develop artistic coherence in compositing through the seamless and convincing integration of visual elements from various sources.


Camera Lens Artefacts
Since the virtual camera in CGI lacks a physical lens configuration and analogue film features that would give the renders a photoreal appearance, various optical effects can be added individually during the digital compositing process. (Mitchell, 1994) These include depth-of-field as well as grain, lens distortion, chromatic aberration, lens flare, and vignetting.


In the 1990s, computer scientists created algorithms to enhance computer-generated images that appeared sterile and unnatural. (Flueckiger, 2015) They started to develop physical characteristics that could have come from the camera or the photochemical film, such as motion blur, depth of field, film grain, scratches, and dirt. These analogue artefact simulations go beyond simple enhancements of the original images by connecting them to the long-standing traditions of photography and film.


The analogue photograph, not the digital one, frequently sets the standard for photorealism. (Gaut, 2010) For instance, because of the silver halides used, classic film has a characteristic called film grain, while digital photos do not. Other features are common to both traditional and digital photography: motion blur, for example, arises when an object moves noticeably during the exposure time of the photo, and lens flare occurs when some light from a light source bounces around inside the lens rather than passing straight through it.


Grain, Scratches and Dust
Film grain is the random visual texture that can be seen in both photography and film. The tiny ‘grains’ in a photograph result from analogue still and moving image capture methods: they were little particles of silver halide, the photosensitive material of chemical film, which appeared randomly across photographs. As the medium developed and advanced, however, grain was gradually reduced.


Grain can be considered the core of the cinematic experience; as cinematographer Janusz Kaminski asserts, “If you can’t see or sense grain in the image, you’re not experiencing the magic of movies.” (Flueckiger, 2015, p. 81)


As Barbara Flueckiger (2015) suggests, heavy grain is now a sign of historicity in movies, used either to indicate an older time period in the narrative or a character’s memory in a flashback. This type of grain effect is best used in conjunction with black-and-white or sepia-toned cinematography.


Scratches and dust can be linked to film stock’s materiality or perceived as signs of a film’s extensive usage and old age. For instance, The Curious Case of Benjamin Button (2008) opens with a scene about a clockmaker who designs a clock that ticks backwards after losing his son in the war. (Flueckiger, 2015) The scene was enhanced with scratches, dirt, diffusion, and a colour palette that appeared historical.

Depth-of-field (DOF)
Depth-of-field is inextricably linked to aperture, focus distance, focal length and circle of confusion. A wide-open aperture (a large aperture diameter) and a long, telephoto focal length typically result in a shallow depth of field.
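These relationships can be made concrete with the thin-lens approximation, which gives the diameter of the circle of confusion for a point away from the focus plane. A small Python sketch; the formula is the standard thin-lens one, and the values are purely illustrative:

```python
def circle_of_confusion(f_mm, f_number, focus_m, subject_m):
    """Approximate blur-circle diameter (in mm) for a point at
    subject_m when the lens is focused at focus_m (thin lens)."""
    f = f_mm / 1000.0                    # focal length in metres
    s, d = focus_m, subject_m
    # c = (|d - s| / d) * f^2 / (N * (s - f))
    c_m = abs(d - s) / d * (f * f) / (f_number * (s - f))
    return c_m * 1000.0

# A wide aperture (small f-number) blurs the background far more,
# i.e. produces a much shallower depth of field:
print(circle_of_confusion(f_mm=85, f_number=1.8, focus_m=2.0, subject_m=10.0))  # ~1.7 mm
print(circle_of_confusion(f_mm=85, f_number=8.0, focus_m=2.0, subject_m=10.0))  # ~0.4 mm
```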


However, depth-of-field (DOF) rendering in CGI was not even achievable until the late 1990s, since it requires a systematic link between focus and depth in addition to the blurring of certain parts of the image. Furthermore, because the various pieces are first filmed or rendered in focus, DOF is a crucial issue not just in CGI but also in compositing, where compositors must carefully establish a consistent treatment of focus across the various image planes. (Flueckiger, 2015)


Ron Brinkmann (2008) demonstrates a significant distinction between a basic Gaussian blur—i.e., the computational averaging of neighbouring pixels using a Gaussian distribution—and the photographic quality of out-of-focus regions, known as bokeh, in his classic book The Art and Science of Digital Compositing. 
Bokeh refers to the fuzzy, out-of-focus background effect that results from using a fast lens at its widest aperture while photographing a subject. Bokeh is thus the appealing or aesthetically pleasing quality of out-of-focus blur in a shot.
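To make Brinkmann's distinction concrete, compare the two filter shapes: convolving an image with a hard-edged disc keeps out-of-focus highlights as bright, well-defined discs (the bokeh look of a real lens aperture), while a Gaussian kernel simply smears them into the surroundings. A small numpy sketch of the kernels themselves (the convolution step is omitted for brevity):

```python
import numpy as np

def disc_kernel(radius):
    """Hard-edged circular kernel, a crude stand-in for a lens aperture."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    return k / k.sum()

def gaussian_kernel(radius, sigma):
    """Gaussian kernel: a plain computational average of neighbours."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()
```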


Motion Blur
A motion blur effect is created when a fast-moving object is photographed with a slow shutter speed, making the object’s motion and surrounding area look streaked or smudged.
In CGI, motion blur has to be applied to the images, much like depth-of-field, ideally in conjunction with assessing both the camera’s and the object’s motions. Motion blur and grain are arguably the most common artefacts of film recording.
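A crude way to see the effect in code is to average shifted copies of a frame along the direction of motion, which is roughly what movement during the exposure does to the recorded image. A toy numpy sketch, assuming a float H×W×C array:

```python
import numpy as np

def motion_blur(img, length=15, horizontal=True):
    """Streak an image by averaging `length` copies shifted along one
    axis - a crude simulation of movement during the exposure.
    (np.roll wraps at the border, which a real exposure would not;
    acceptable for a toy example.)"""
    axis = 1 if horizontal else 0
    acc = np.zeros(img.shape, dtype=np.float64)
    for offset in range(length):
        acc += np.roll(img, offset, axis=axis)
    return acc / length
```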

 


Figure 4: ‘Dynamism of a Dog on a Leash’ (1912) by Giacomo Balla.

Italian futurism of the 1910s brought a type of motion blur to art history, as seen, for example, in the above painting by Giacomo Balla. Italian futurists were particularly interested in the dynamics of urban life and were concerned with portraying it. By imitating the sequential unfolding of still pictures, they chose a style of representation associated with cinema’s advent. (Martinique, 2016)


Lens Flares
One of the most common analogue artefacts added to CGI is undoubtedly the lens flare. In photography, lens flare occurs when bright light enters the lens and scatters or reflects between its elements before reaching the sensor. In other words, lens flare is a reaction to a bright light source such as the sun, a full moon, or artificial lighting, and it may appear as a haze or a starburst in a photograph. Lens flare can also happen when a person or object partially obstructs a strong light source.


Lens flares may be used carefully to enhance the aesthetics of CGI, and since they result from internal reflections in the lens, they indicate the existence of a camera. In addition, they can function on the image’s surface as a suturing tool that joins various image components during compositing. (Flueckiger, 2015)


Vignetting
The dimming of the image as it approaches the frame’s edges is known as vignetting. Optical systems can exhibit three forms of vignetting: mechanical, optical, and natural.


Natural vignetting is produced mainly by light reaching the camera sensor at various locations and angles, and it manifests as a gradual darkening; wide-angle lenses exhibit the most noticeable vignetting of this kind. Optical vignetting also develops gradually and happens when the lens barrel itself blocks light; it is strongly affected by the lens design and is more noticeable at wider apertures. Mechanical vignetting, by contrast, is caused by matte boxes, filter rings, or other physical obstructions in front of the lens.


Modern lenses render light quite uniformly, so filmmakers frequently add vignetting deliberately to direct the viewer’s eye toward the centre of the frame and strengthen the overall impression of a picture.

Practical Research
I explored this topic further through extensive practical research, using a range of programmes including Autodesk Maya, Adobe Substance 3D Painter, Quixel Mixer, Quixel Bridge, and Foundry Nuke. My methodology was to examine how utilising camera lens artefacts in Nuke could help attain photorealism. I first built a CG shot from scratch and rendered it with Arnold Renderer. I then composited my shot in Nuke, utilising several methods to enhance the photoreal quality of my composite by replicating lens effects. Finally, to illustrate the impact those effects made, the original CG render, which was free of camera lens artefacts, was placed next to my improved composite for comparison.


I began by using Maya and Substance 3D Painter to model and texture a basic robot. More assets were then imported from Quixel Bridge into Maya so I could build out a complete environment, and I made my own material mix for the ground plane in my scene using Quixel Mixer. I then rendered my CGI with Arnold Renderer and finished by compositing the renders of my scene in Nuke. At the compositing stage of the VFX pipeline, I aimed to investigate camera lens artefacts and demonstrate how they contribute to photorealism, while also paying close attention to realistic lighting, proper colour grading, and the other crucial components of producing a good composite.


Creating my 3D Scene


Figure 5: Development of my robot in Maya

I started modelling my robot using reference images I had imported into Maya, establishing the modelling procedure from a simple polygonal cube. Having reference images in my scene during the 3D modelling process was beneficial for visualising my 3D model and ensuring its proportions were accurate. Next, I had to lay out UVs for my robot, a crucial step that allows textures to be made for the asset and moves it from the modelling stage of the pipeline to texturing. Finally, I moved to Substance 3D Painter and began creating textures for my model there.


Figure 6: Model I textured in Substance 3D Painter.

Substance Painter is industry-standard texturing software used by renowned studios and texturing artists all over the world. Its sophisticated masking and procedural texturing features let me create textures that would be far more difficult to develop in purely 2D applications such as Photoshop.


Figure 7: My 3D scene in Maya

Using Quixel Bridge, I had access to the Megascans library, which allowed me to find and import 3D elements into my project and finalise my Maya scene, as seen in Figure 7. Quixel Bridge is a desktop application for searching, browsing, downloading, importing, and exporting Megascans assets. I also used Quixel Mixer, a free programme that creatively mixes scan data, PBR painting, and procedural textures, to create a complex and photoreal texture for the ground plane.

 

AOVs and Setting up Render Layers

Understanding and utilising Arbitrary Output Variables (AOVs) is crucial for compositors, as it gives them more flexibility and power. AOVs allow data from a shader or renderer to be output during rendering calculations. In other words, AOVs enable us to render different channels and components of an image, such as diffuse, specular, emission, and shadows. With AOVs, a compositor can therefore combine all the layers and channels created during the rendering phase to composite the final image. Working with AOVs and render passes also enabled me to isolate specific scene elements and change them as I liked, giving me significant control over the look of different parts of my image.
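At its simplest, rebuilding the beauty from AOVs is an additive merge of the light-contributing passes, which is exactly what a chain of Merge (plus) nodes does in Nuke. A minimal numpy sketch; the pass names follow common Arnold conventions, and the random buffers are stand-ins for channels read from a multi-channel EXR:

```python
import numpy as np

h, w = 540, 960
diffuse  = np.random.rand(h, w, 3)   # stand-ins for EXR channels
specular = np.random.rand(h, w, 3)
emission = np.zeros((h, w, 3))

# The beauty is the sum of the light-contributing AOVs...
beauty = diffuse + specular + emission

# ...and because the passes stay separate until that sum, each can be
# graded independently, e.g. cooling only the specular response:
graded = diffuse + specular * np.array([0.9, 1.0, 1.1]) + emission
```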


Figure 8: Render passes I set up in Maya

Arnold Renderer has built-in AOVs, but I could also add my own based on the requirements of my project. As the figure above shows, I made a custom AOV for ambient occlusion. Ambient occlusion (AO) is a technique used in 3D computer graphics to determine how exposed each point in a scene is to ambient light from the surrounding environment; in other words, AO approximates how ambient light would bounce off the scene’s objects. The AO pass produced the deep shadows we see between edges, gaps, and seams in a render, making the result more realistic.
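One common way to use an AO pass in compositing is as a multiply over the beauty, with a mix control to dial the darkening in or out. A small sketch; the `strength` blend toward white is an illustrative addition rather than a fixed convention:

```python
import numpy as np

def apply_ao(beauty, ao, strength=0.8):
    """Darken a beauty pass with an ambient-occlusion pass.
    `ao` is a single-channel buffer in [0, 1] (1 = fully exposed);
    blending it toward white controls how strong the effect is."""
    ao_mix = 1.0 - strength * (1.0 - ao)   # lerp(1.0, ao, strength)
    return beauty * ao_mix[..., None]      # broadcast over RGB
```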


Figure 9: Render layers

Using render layers was an effective approach for rendering out the various parts of a complicated scene like mine, and I found it particularly useful for rendering out a shadow-only pass. The layers and collections I made for rendering my 3D scene are displayed in Figure 9. With render layers, I divided the scene into separate render buffers, such as the geometry, the ground and back wall, and, most importantly, the shadows, and then rendered each layer independently. Render layers, AOVs, and light groups gave me tremendous flexibility and control over many aspects of my CGI render later in compositing.

 

Multi-pass and Light Groups Compositing

I imported the render layers into Nuke and then used several techniques to extract the passes from my EXR files and alter them, as illustrated in Figure 10.


Figure 10: Compositing and manipulating render passes in Nuke.

There are various advantages to working with render passes and blending them in compositing. One of the most important benefits I discovered was that multi-pass compositing allowed me to change and fine-tune the appearance of my CGI without having to re-render the whole 3D scene. While this may appear to be a lot of extra work, it provided me with a great deal of versatility in precisely controlling the aesthetic of my CGI.


Figure 11: Compositing the shadow and shadow AO passes and grading them.

Shadows are always crucial in making realistic and professional composites. Creating realistic and physically accurate shadows in compositing allows diverse components from various sources to sit in the image organically and convincingly. That was why I set up a shadow pass to experiment with the shadow properties later on in compositing.


Figure 12: Compositing and manipulating light groups in Nuke

Creating light groups was also beneficial to my project. Simply put, a light group is a collection of lights rendered separately so that we can manipulate them in compositing. After rendering out different light groups in Maya, I was able to experiment with their intensity and colour in Nuke. To composite and grade the light groups, I used an industry-standard approach shown in Figure 12.
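The principle behind light-group compositing can be sketched in a few lines: each light group is a per-light render, the beauty is their sum, and regrading a light amounts to scaling or tinting its pass before that sum, with no re-render needed. Names and values here are illustrative:

```python
import numpy as np

def relight(light_groups, gains, tints):
    """Rebuild the beauty from per-light renders, scaling and tinting
    each light group independently."""
    out = np.zeros_like(next(iter(light_groups.values())))
    for name, rgb in light_groups.items():
        out += gains[name] * np.asarray(tints[name]) * rgb
    return out

h, w = 540, 960
groups = {'key': np.random.rand(h, w, 3), 'rim': np.random.rand(h, w, 3)}
# e.g. dim the key light and push the rim light cooler and brighter:
beauty = relight(groups,
                 gains={'key': 0.7, 'rim': 1.2},
                 tints={'key': (1.0, 1.0, 1.0), 'rim': (0.8, 0.9, 1.2)})
```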

 

Depth-of-Field with ZDefocus

Nuke contains a depth-of-field simulation node called ZDefocus. The ZDefocus node blurs the image based on a depth-map channel, allowing us to simulate the depth-of-field blurring effect.


Figure 13: Nuke ZDefocus node

ZDefocus divides the input image into layers. Within a layer, all pixels have the same depth value and blur size. After processing all of the layers of the input image, ZDefocus blends the layers together from the back to the front of the image, with each new layer going over the top of the preceding ones. This allows it to keep the order of the elements in the image.
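A toy version of that layered approach can be sketched in numpy: slice the image into depth bins, blur each bin in proportion to its distance from the focal plane, and stack the layers back to front. This is only a simplified illustration of the idea; the real ZDefocus uses proper bokeh filter shapes and far more careful edge handling:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def toy_z_defocus(img, depth, focal_plane, n_layers=8, blur_scale=4.0):
    """Depth-layered defocus: img is an HxWx3 float array, depth is HxW."""
    out = np.zeros_like(img)
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)
    # Iterate far-to-near so nearer layers composite over farther ones.
    for i in reversed(range(n_layers)):
        mask = ((depth >= edges[i]) & (depth <= edges[i + 1])).astype(float)
        mid = 0.5 * (edges[i] + edges[i + 1])
        # Blur grows with the layer's distance from the focal plane.
        sigma = blur_scale * abs(mid - focal_plane)
        layer = gaussian_filter(img * mask[..., None], sigma=(sigma, sigma, 0))
        alpha = gaussian_filter(mask, sigma=sigma)[..., None]
        out = out * (1.0 - alpha) + layer   # premultiplied-style 'over'
    return out
```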

 

Lens Dirt and Dust

In Nuke, I utilised the techniques below to produce the effects of lens dirt and dust.


Figure 14: Lens dirt and dust node graphs.

The lens dirt effect was created by applying a merge (plus) node to merge a lens dirt texture over the image. However, for lens dust, I added a noise node and followed the procedure outlined above.
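The same small graph could also be scripted through Nuke's Python API. A sketch; the node name ('FinalComp') and the texture path are hypothetical:

```python
import nuke

comp = nuke.toNode('FinalComp')                  # the graded composite
dirt = nuke.nodes.Read(file='lens_dirt_4k.exr')  # a dirt/dust texture

# Add the dirt element over the comp, as in Figure 14.
merge = nuke.nodes.Merge2(operation='plus')
merge.setInput(0, comp)   # input 0 = B pipe (background)
merge.setInput(1, dirt)   # input 1 = A pipe (the dirt element)
```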

 

Lens Flare

I created realistic lens flares using the flare node that is included in Nuke. Although several plug-ins and gizmos are available for download and installation into Nuke, I constructed my lens flare effect from scratch using only standard Nuke nodes.


Figure 15: Lens flare effect I created in Nuke.

The Nuke Flare node replicates lens flares caused by reflections between lenses within a film or video camera when directed at a strong light source such as the sun.
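While the Flare node models actual inter-lens reflections, the glow component of a flare can be approximated very roughly by screening a blurred bright-pass back over the frame. A simplified numpy stand-in, not the Flare node's algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cheap_glow(img, threshold=0.9, sigma=25.0, gain=0.6):
    """Isolate the hottest pixels and screen a blurred copy back over
    the frame - only the halo of a flare, none of the reflections."""
    hot = np.clip(img - threshold, 0.0, None)             # bright pass
    glow = gaussian_filter(hot, sigma=(sigma, sigma, 0))  # spread it
    a = np.clip(img, 0.0, 1.0)
    b = np.clip(gain * glow, 0.0, 1.0)
    return 1.0 - (1.0 - a) * (1.0 - b)                    # screen merge
```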

 

Vignetting

The vignetting effect was made in Nuke using the following basic yet efficient approach.


Figure 16: Vignetting effect workflow in Nuke.

I applied a vignette roto to a black constant and blended it over my composite to get the vignette effect.
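The equivalent operation in code is a multiply by a radial falloff that darkens toward the corners; the strength and falloff shape below are arbitrary choices:

```python
import numpy as np

def vignette(img, strength=0.35, softness=2.5):
    """Multiply an HxWx3 image by a radial falloff - the same result as
    a soft black roto shape merged over the comp."""
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    # Normalised distance from the frame centre (1.0 at the corners).
    r = np.hypot((x - w / 2) / (w / 2), (y - h / 2) / (h / 2)) / np.sqrt(2)
    falloff = 1.0 - strength * r ** softness
    return img * falloff[..., None]
```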

 

Grain

My image had no grain as it was created digitally in Maya. Therefore, I used a grain node to make an artificial grain effect and apply it to my image.


Figure 17: Nuke Grain node.

I added synthetic grain to my composite using the Nuke Grain node. Adding grain at the end of the compositing process is also helpful to ensure that all of the pieces in our composite, including those created digitally, seem like they were shot on the same film stock.


The Grain node’s presets offer predetermined grain types, such as Kodak 5248 and Kodak 5218. I used the Kodak 5218 preset for my composite.
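In spirit, a grain node overlays per-channel random noise weighted by the image's tones. A rough numpy approximation; the per-channel intensities and mid-tone response are illustrative and have nothing to do with the actual Kodak preset values:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_grain(img, intensity=(0.05, 0.05, 0.08)):
    """Overlay synthetic grain on a float HxWx3 image in [0, 1]. The
    blue channel gets the largest amplitude, echoing the common
    observation that film's blue record tends to be the grainiest."""
    noise = rng.standard_normal(img.shape) * np.asarray(intensity)
    # Grain reads most strongly in the mid-tones, so weight it by a
    # simple luminance response that peaks at mid-grey.
    luma = img.mean(axis=2, keepdims=True)
    response = 4.0 * luma * (1.0 - luma)
    return np.clip(img + noise * response, 0.0, 1.0)
```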

Discussion and Conclusion


Figure 18: Original CGI render without lens artefacts.


Figure 19: My composite, with lens artefacts.

During the design and compositing of my image, I experimented with a variety of workflows and approaches, most of which are utilised in the industry by experienced and professional compositors. Putting those workflows together allowed me to realise the power of Foundry Nuke in simulating camera lens effects. It also helped me comprehend those artefacts and their influence on my finalised composite.


When I compared the original image with my composite, I concluded that the composited image was far more convincing and believable. As I noted in the first section of this essay, the original CGI was far too clean, crisp, and sharp, with no feeling of mood or atmosphere. Adding dirt, dust, particles, and randomness to the composite therefore significantly improved its photorealistic quality.


Another thing I noticed while compositing the renders was that camera lens artefacts can effectively contribute to visual coherence, integrating different visual elements in the composite. Moreover, some effects, such as vignetting combined with artistic lighting, can draw the viewer’s attention to the image’s main subject, engaging viewers and eliciting an emotional reaction. Working with AOVs and light groups also helped me realise the incredible potential of the compositing process as the last step in the VFX pipeline. A compositor can use multi-pass compositing and light AOVs to alter the lighting, colours, look of textures, and mood of the image even after the often time-consuming process of CGI rendering. Multi-pass compositing gives the compositor enormous flexibility while also increasing the speed and efficiency of VFX production.

 

 

References:

Bolter, J.D. and Grusin, R. (2000) Remediation: understanding new media. Cambridge, Massachusetts: The MIT Press.


Brinkmann, R. (2008) The art and science of digital compositing. 2nd ed. USA: Morgan Kaufmann Publishers.


Flueckiger, B. (2015) Special effects: new histories/theories/contexts. Edited by D. North et al. London: Bloomsbury, pp. 78-98.


Gaut, B. (2010) A Philosophy of Cinematic Art. Cambridge: Cambridge University Press. doi: 10.1017/CBO9780511674716.


Martinique, E. (2016) The Progressive Spirit of Italian Futurism. Available at: https://www.widewalls.ch/magazine/italian-futurism (Accessed: 29/11/2022).


Mitchell, W.J. (1994) The Reconfigured Eye: Visual Truth in the Post-Photographic Era. Cambridge, Massachusetts: The MIT Press.


'Photo-realism' (2019) Britannica. Available at: https://www.britannica.com/art/Photo-realism (Accessed: 01/12/2022).


Prince, S. (2010) ‘Through the Looking Glass: Philosophical Toys and Digital Visual Effects’, Projections, 4(2), pp. 19-40.


Wright, S. (2010) Digital compositing for film and video. USA: Focal Press.

 


List of Figures

Figure 1: The Plaza (1991) Estes, Richard. Available at: https://www.wsj.com/articles/richard-estes-painting-new-york-city-review-1429568629 (Accessed: 01/12/2022)

Figure 2: King Kong (2005) Jackson, Peter. Available at: https://www.imdb.com/title/tt0360717/mediaindex/?ref_=tt_mv_close (Accessed: 03/12/2022)

Figure 3: Raytracing (1980) Whitted, Turner. Available at: https://www.scratchapixel.com/lessons/3d-basic-rendering/ray-tracing-overview/light-transport-ray-tracing-whitted (Accessed: 03/12/2022)

Figure 4: Dynamism of a Dog on a Leash (1912) Balla, Giacomo. Available at: https://en.wikipedia.org/wiki/Dynamism_of_a_Dog_on_a_Leash (Accessed: 04/12/2022)

Figure 5: Development of my robot in Maya (2022) Myself

Figure 6: Model I textured in Substance 3D Painter (2022) Myself

Figure 7: My 3D scene in Maya (2022) Myself

Figure 8: Render passes I set up in Maya (2022) Myself

Figure 9: Render layers (2022) Myself

Figure 10: Compositing and manipulating render passes in Nuke (2023) Myself

Figure 11: Compositing the shadow and shadow AO passes and grading them (2023) Myself

Figure 12: Compositing and manipulating light groups in Nuke (2023) Myself

Figure 13: Nuke ZDefocus node (2023) Myself

Figure 14: Lens dirt and dust node graphs (2023) Myself

Figure 15: Lens flare effect I created in Nuke (2023) Myself

Figure 16: Vignetting effect workflow in Nuke (2023) Myself

Figure 17: Nuke Grain node (2023) Myself

Figure 18: Original CGI render without lens artefacts (2023) Myself

Figure 19: My composite, with lens artefacts (2023) Myself
