
Three Tips For Optimizing Cloud-Based Render Resources

Cloud rendering presents clear benefits, primarily the ability to drastically scale at a moment’s notice. This level of flexibility gives studios and artists more time for creative iteration, and removes the headache of navigating a tight render schedule. It also makes it easier to customize compute resources to strike the balance of speed and cost efficiency. Beyond adjusting virtual machine settings, like the amount of RAM and number of cores, there are other tactics to get the most out of your cloud compute. Here are three tips for optimizing cloud-based render resources while using Conductor:

Scout for success

Whether compute happens on-prem or in the cloud, a render can't complete successfully if the submitted file has a problem. Rendering the first, middle, and last frames before anything else remains a best practice for staying on budget. Conductor's integrated scout frame feature lets you test render a project by checking select slices of it before running the job in full: if your render has 100 frames, you might check frames 1, 50, and 100, while the remaining tasks stay on hold until you've evaluated the scout results. You don't have to commit to a full render (and its cost) until you're certain all the necessary scene elements are present and that you've allocated enough memory. Scout frames also let you verify that there are no issues with creative application or platform compatibility, and evaluate select portions of a render, such as the animation. Conductor's integrated DCC submitters have scout frames enabled by default, helping your artists use cloud resources effectively. By testing renders with scout frames early and often, you'll save time, money, and heartache, and increase the likelihood of production success, since you can uncover minor issues before they escalate.
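The first-middle-last idea generalizes to any frame range. As a rough sketch (this is an illustrative helper, not part of Conductor's API, which accepts scout frames directly in its submitters), you could pick evenly spaced scout frames like this:

```python
def scout_frames(start, end, samples=3):
    """Pick evenly spaced scout frames (first, middle, last by default)
    from an inclusive frame range."""
    if end <= start:
        return [start]
    step = (end - start) / (samples - 1)
    return sorted({round(start + i * step) for i in range(samples)})

# A 100-frame render: test the first, middle, and last frames.
print(scout_frames(1, 100))  # [1, 50, 100]
```

If the scout frames come back clean, the held tasks can be released to render the rest of the range.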

Take a risk for reward

Excess cloud inventory can become available at significantly reduced rates, with the caveat that the resources can be reclaimed for full-paying customers at any moment. Known as Spot Instances on Amazon Web Services and Spot VMs on Google Cloud Platform (which previously called them preemptible instances), these resources cost a small fraction of the price of standard on-demand inventory, making them a great choice for fault-tolerant jobs. To be clear, this is not an appropriate path when you're up against a deadline, and it's best suited to shorter renders, which are less likely to be interrupted mid-task. As long as you account for the possibility of some delay before renders can begin, there are enormous savings to be had with these types of instances.
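To see why shorter, fault-tolerant tasks suit spot capacity, consider the expected cost per completed task when an interrupted task is simply retried from scratch. The rates and probabilities below are hypothetical, purely for illustration:

```python
def spot_expected_cost(on_demand_rate, spot_discount, preempt_prob):
    """Expected cost per completed task on spot capacity, assuming an
    interrupted task is retried from scratch (geometric number of attempts)."""
    spot_rate = on_demand_rate * (1 - spot_discount)
    expected_attempts = 1 / (1 - preempt_prob)
    return spot_rate * expected_attempts

# Hypothetical numbers: $1/hr on-demand, 70% spot discount,
# 10% chance a given attempt is preempted before finishing.
print(round(spot_expected_cost(1.00, 0.70, 0.10), 3))  # 0.333
```

Even with occasional retries, the expected cost here is about a third of on-demand. Longer renders raise the preemption probability per attempt, which erodes the discount, which is one reason shorter tasks are the better fit.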

Share upload overhead

Before a render can begin, the render software and scene must be loaded onto the virtual machine. In many cases this overhead is trivial, but for jobs that take several minutes to load, it can be shared through frame chunking: a Conductor feature that renders several frames on a single virtual machine rather than spinning up a separate machine for each frame. Chunking can lengthen a job's wall-clock time, since frames in a chunk render sequentially, but it cuts the aggregate load overhead. A 10-frame job with an 8-minute load time and a 4-minute render time traditionally consumes 120 machine-minutes (8-minute load per frame + 4-minute render per frame = 12 minutes per frame x 10 frames). Chunking all 10 frames onto one machine reduces that to 48 minutes (one 8-minute load + 4-minute render x 10 frames). The load overhead is often just seconds, but when it's longer, frame chunking can accelerate turnaround significantly. It can also help if you're using your own application licenses, since you can chunk frames so that your submitted tasks align with your license allocation. Additionally, frame chunking keeps cloud-based renders performant for extremely long shots.
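The arithmetic above can be written as a small sketch, useful for trying different chunk sizes before submitting (an illustrative calculation, not a Conductor API call):

```python
import math


def total_machine_minutes(frames, load_min, render_min, chunk_size):
    """Aggregate machine time for a job: each chunk pays the load cost
    once, and every frame pays its render cost."""
    chunks = math.ceil(frames / chunk_size)
    return chunks * load_min + frames * render_min

# Unchunked: one frame per VM, so 10 separate 8-minute loads.
print(total_machine_minutes(10, 8, 4, chunk_size=1))   # 120
# Fully chunked: all 10 frames on one VM, so a single load.
print(total_machine_minutes(10, 8, 4, chunk_size=10))  # 48
```

Intermediate chunk sizes trade between these extremes: bigger chunks amortize the load cost over more frames, while smaller chunks keep more machines rendering in parallel.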

For more tips and insight on optimizing cloud-based resources on Conductor, check out our documentation page. Have a topic you’d like us to address? Send us a note at [email protected]
