LEViT∆TE: Working with mixed media in the NFT space
If you grew up in the digital era, from the late nineties onward, your vocabulary of artistic expression is most likely somewhat different from that of the generations that preceded you. For a lot of us, the piano is replaced with a DAW; the paintbrush with a digital brush in Photoshop or Illustrator; your film could come from anything from a DSLR camera to a DCC application (C4D, Blender, Maya, etc.) and a render engine. This is the space I personally developed in as an artist.
Unlike the traditional mediums our predecessors used to create art, modern digital art gives the collector and spectator the rare ability to experience these pieces through sight, sound, and interactivity, sometimes simultaneously, anywhere and at any time they choose. Furthermore, before NFTs came into common use, there was no well-known way to effectively collect and trade these digital pieces as a curator. I find multimedia projects like this to be more akin to performance art than studio art in their sense of immersion, and for me that is endlessly inspiring. Through this inspiration, I have used the skills I've acquired over the years to achieve a kind of realistic surrealism that I often longed for during my own childhood.
This past January, I worked on a project that focused on this phenomenon, making a point to encapsulate a concept through digital film, CGI renders, 2D design, digitally produced audio, recorded foley, and an interactive medium built on the glTF format, which many of us know as a .GLTF or .GLB file. The goal was to take a pipeline that is entirely digital and emulate tangible reality to a point that crosses beyond the threshold of feeling as though the piece is imprisoned within a computer, although ultimately it still is.
I decided to name this series “Suspending Disbelief,” focusing on the idea of immersion through sight, sound, and interactivity. The final result consists of three pieces, all centered around an otherworldly character:
- Part one, a fully CGI-rendered scene, including a cohesive foley backtrack and a score to match.
- Part two, a filmed scene in which the character is rendered and composited in post, also with a cohesive foley backtrack and another original score to set the scene.
- Part three, the interactive .glb, allowing the viewer and collector to visualize and examine the details of the character in an interactive format.
The possibilities are completely endless, and groundbreaking pieces are released daily, but here I would like to outline my personal experience with a project like this and how I approached the workflow.
Because I decided to hand-build this character using a process called kitbashing, I had to visualize a rough idea of its figure before creating and working with actual geometry, so my first course of action was to sit down with my tablet and sketch rough forms in Photoshop. I am by no means an illustrator, but an aesthetically pleasing silhouette is an absolute obligation, so the quickest way to visualize it was to draw it by hand.
I knew I would then have to rig and animate this character by hand, so it was time to shape the fundamentals. I jumped into Cinema 4D. My favorite way of rigging mechanical characters is to use primitive geometry called “controllers,” then connect them through IK chains: virtual chains that link two or more pieces of geometry together, like kinetic joints. During this process, I built very simple geometry in the basic shape of my character that I could use as a template to kitbash around. Since these basic shapes were chained together, whatever I attached to them would also move as the primitives did when I animated the character.
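The controller idea above can be sketched in a few lines of plain Python (not the Cinema 4D API; the class and joint names here are illustrative, not from the project files): each piece stores an offset relative to its parent, so moving a root controller drags everything chained to it.

```python
# Minimal sketch of a controller hierarchy: world position is the sum of
# local offsets walked up the parent chain, like links in a kinetic joint.

class Controller:
    def __init__(self, name, local_offset=(0.0, 0.0, 0.0), parent=None):
        self.name = name
        self.local_offset = local_offset  # position relative to the parent
        self.parent = parent

    def world_position(self):
        # Recursively sum offsets from the root down to this controller.
        if self.parent is None:
            return self.local_offset
        px, py, pz = self.parent.world_position()
        ox, oy, oz = self.local_offset
        return (px + ox, py + oy, pz + oz)

# Hip -> knee -> foot, chained like one leg of the rig.
hip = Controller("hip", (0.0, 10.0, 0.0))
knee = Controller("knee", (0.0, -5.0, 0.0), parent=hip)
foot = Controller("foot", (0.0, -5.0, 0.0), parent=knee)

print(foot.world_position())  # (0.0, 0.0, 0.0)

# Animating the root controller moves the whole chain with it.
hip.local_offset = (2.0, 10.0, 0.0)
print(foot.world_position())  # (2.0, 0.0, 0.0)
```

A real IK solver also works backward, solving joint angles so the chain reaches a goal, but the parenting shown here is what makes kitbashed parts follow the template primitives.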
Then came the kitbashing part (also in Cinema 4D). This process is a lot like building with LEGO in 3D, with some hard-surface modeling techniques peppered in. I used a few kits: framestock’s “Robot Parts” kit, “Woman Cyber Z26” by Oscar Creativo, and “Free Sci-fi Kitbash” by Radomir3D. These kits included hydraulic pumps, pieces of armour, and generally small pieces of robotic parts and machinery, with the exception of “Woman Cyber Z26,” where I used the majority of that kit’s geometry but altered it in ways that better fit the silhouette. This process consisted of thinking logically about how each part would intermingle and functionally work together, as well as fitting an overall shape of the character that made sense. When everything was said and done, I was left with the character’s geometry, functionally rigged and working together.
Next was the most work-intensive part: the texturing. During this process my primary tool was Substance Painter. Over the course of a week, I moved through each part, hand-“painting” textures, materials, damage, and scratches throughout. The process resulted in 170 hand-painted textures across the character’s mechanical parts. This does not count the pieces from “Woman Cyber Z26,” which I only altered using node logic in Octane Render, because I was quite fond of the texture work done by Oscar Creativo.
At this point, the character was built. Moving back into C4D, I created a minute-long animation that matched my vision, and then it was time to composite. Before designing the full CGI scene, I decided to film and work on the composited scene first. This consisted of taking my Sony A7sii, a gimbal stabilizer, a field recorder, and a spherical 360 camera to an abandoned site in Los Angeles to set the scene where my character would be found. On site, before anything else, I had to capture a spherical HDRI image so my character’s reflections and lighting would be physically accurate. This was done using a Ricoh Theta S. Then, with my field recorder mounted to my camera and the camera mounted to the stabilizer, I took a series of shots, doing my best to emulate the first-person perspective of someone observing this character in real life.
With the film, audio, and HDRI all shot, it was time to work on camera tracking. This process is extremely tedious and often frustrating, but in the true nature of art, it was necessary to deal with the bullsh*t. After a few days of banging my head on my desk, the tracking was finished, and I had a virtual camera that matched the movements of the actual filmed shot. It was then time to render the character and incorporate the lighting, as well as extra assets to blend into the scene. After this sequence was rendered, it was composited onto the shot and color corrected in Adobe After Effects.
The visuals for this scene were done, so next came the audio. I used the audio filmed on location to capture my footsteps and the ambient noise. Following that, I used my field recorder to record foley: rocks shuffling, metal objects on concrete, and extra footsteps. Once all of these assets were compiled, it was time to move into Ableton for some simple engineering, processing these sounds together to feel more physically accurate and cleaning them up. This was also my opportunity to design the supernatural sound found near the end of the piece. Once all of the foley work was sequenced to match the video, it was time to create the soundtrack.
The soundtrack for both of the animated pieces came together like any other score production work I’ve done. Working in Ableton, I played some fitting melodies on my MIDI keyboard and designed some simple leads, a satisfying bassline, a retro-futuristic bass stab, and a viola. I found the result extremely satisfying and fitting for the scene.
Now that the final composited piece was finished, it was time to put my character into a fully digital space. This is where Unreal Engine came in handy. Crossing my character over was not as simple as a drag and drop. A lot of conversions and re-mapping of textures needed to happen, but after a tedious day or two, he was functioning in real time. I then designed a forest using Quixel Megascans assets, some basic modeling of the landscape, and a lot of tactical placement of trees, water, and props. Then I designed the lighting and finally dropped the character into the scene.
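As one concrete example of the kind of texture conversion this step can involve (an assumption on my part, not a step the text spells out): Unreal expects DirectX-style normal maps, while many DCC tools export OpenGL-style ones, and the fix is inverting the green channel. A toy sketch, with pixels modeled as plain (r, g, b) tuples:

```python
# Flip the green channel of 8-bit (r, g, b) normal-map pixels, converting
# between OpenGL-style and DirectX-style normal maps.

def flip_normal_green(pixels):
    """Invert G in each pixel; R and B (the X and Z of the normal) are kept."""
    return [(r, 255 - g, b) for (r, g, b) in pixels]

normal_map = [(128, 200, 255), (128, 55, 255)]
print(flip_normal_green(normal_map))  # [(128, 55, 255), (128, 200, 255)]
```

In practice an image library or the engine's import settings would do this per-texture, but the operation itself is this simple.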
The camera work in Unreal Engine was interesting, because I hadn’t done this process before, but I knew I ultimately wanted to emulate a first-person perspective. My way of going about it was to use my iPhone and an app called “Camtrack AR” to record the movement of me walking and filming through my phone. I then took that data, moved it into Unreal Engine, and mapped the coordinates to my virtual camera. The result, luckily, was exactly what I wanted, but it required a bit of cleaning up (due to shaking).
A fully CGI scene gives you the unique opportunity to create an environment and events that would be impossible in the real world. After the fact, I added some more surreal lighting effects and events within the environment that gave a sense of true supernatural activity caused by the character (rocks floating, glowing lights in the distance, etc.). After everything was rendered out, the audio process for this scene mirrored that of the composited scene.
Finally, it was time to create the .glb file. With NFTs, I love the idea of the collector being able to own a small trinket, almost like a digital bobblehead, something that allows further immersion into a character or scene. So for me, it was necessary to turn this thing I built into a .glb that was fully AR- and VR-compatible.
This process consisted of compressing all of my textures and sizing them down to fit into .glb materials, then remapping all 170 textures to work with the glTF material model. Another tedious but necessary step. After my materials were looking correct and functioning, I had to optimize the geometry, because the main caveat of the .gltf and .glb formats is file size. On SuperRare, you cannot upload a file larger than 50MB, which is a practical limit, since a .glb larger than 50MB will likely crash your browser.
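The sizing-down step is ultimately a budgeting exercise, which can be sketched with back-of-the-envelope arithmetic. The 50MB cap and the 170-texture count come from the text; the bytes-per-pixel estimate for compressed textures and the geometry reserve are assumptions for illustration:

```python
# Estimate the largest power-of-two texture resolution that lets 170
# compressed textures plus the mesh fit under SuperRare's 50MB .glb cap.

BUDGET_BYTES = 50 * 1024 * 1024   # SuperRare's upload limit (from the text)
TEXTURE_COUNT = 170               # hand-painted textures (from the text)
BYTES_PER_PIXEL = 0.5             # rough JPEG-compression estimate (assumed)

def largest_fitting_resolution(geometry_bytes=10 * 1024 * 1024):
    """Pick the biggest square texture size whose total fits the cap.

    geometry_bytes reserves room for meshes and animation (assumed figure).
    """
    texture_budget = BUDGET_BYTES - geometry_bytes
    for size in (2048, 1024, 512, 256, 128):
        total = TEXTURE_COUNT * size * size * BYTES_PER_PIXEL
        if total <= texture_budget:
            return size
    return 64

print(largest_fitting_resolution())  # 512
```

Under these assumed numbers, 512x512 is the largest size that fits, which is why aggressive downscaling is unavoidable at this texture count.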
When everything was said and done, I truly felt these three pieces in conjunction were enough for the audience to really get a feel for the character and the nuances of the design. Through the general ambience, aesthetic, and overall experience, I hope to invite the viewer to take a step into the conceptual universe surrounding this character. I find working in mixed digital media so fascinating because before this digital era, working at this scale as an independent artist and translating art so concisely through sight, sound, and interactivity was virtually impossible.
Furthermore, the NFT industry and marketplace now give us the ability to present these pieces in a virtual gallery setting alongside other artists, and finally offer collectors a way to own these pieces and trade them if they so choose, whereas previously artists were trapped in the bubble of making this work only for clients or for personal use. This is not only liberating but redefines value in digital creativity, which I believe has grown and will continue to grow, much the way street art grew into a major staple of the artistic community in the early 2000s. The sentiment is commonly heard, but I truly am excited to see every way this industry and community grow, and for me personally that takes priority even over my own commissioned work. I am genuinely excited to see what the future holds for NFT art and for mixed-media digital artists like myself.