Last Updated: Sep 10, 2025

For performance and cost benchmarks of running Blender on SaladCloud, please refer to this blog post. To run Blender efficiently on SaladCloud, several factors must be considered, including input data organization and migration, chunked execution, checkpoint management, job queue integration, and Flamenco cluster setup. This guide focuses specifically on input data organization and migration.

Solution Overview

Blender render jobs are GPU-intensive workloads that can take anywhere from a few minutes to several days, depending on scene complexity, resolution, number of samples, and total frame count. Each job requires the main .blend file along with associated assets—such as images, audio, and linked .blend files—whose total size can range from hundreds of megabytes up to several gigabytes or more. Some assets may be shared across multiple .blend files. The output, including rendered frames and optionally final video files, is typically much smaller than the input data.

Assuming you already have a local environment (Mac, Linux, or Windows) with the Blender Desktop Application installed and access to shared storage such as NFS, you can build scenes, run renders, and manage packages. Now, you may want to leverage SaladCloud to scale your rendering workflows more cost-effectively. This local environment, referred to as the job client in our Blender solution, remains essential for managing packages, facilitating data migration, packaging and submitting jobs, and retrieving results.

To run Blender on SaladCloud, you first need to migrate your input data from the local environment to cloud storage (e.g., Cloudflare R2). The job client then packages jobs that reference this cloud-stored data and include metadata—such as the number of rendered frames and the target location—before submitting them to a job queue, like AWS SQS, GCP Pub/Sub, or Salad Kelpie. The GPU rendering pool on SaladCloud retrieves jobs from the queue, downloads the input data from cloud storage, and processes them. The final rendered outputs are then written back to cloud storage, where the job client can access and retrieve them.

For rendering jobs on SaladCloud that run longer than 30 minutes, we recommend implementing checkpoint management, illustrated in the sketch after this list:
  • Chunked execution: Start fresh when pulling a new job, and split a large job into smaller subtasks—for example, rendering a few frames at a time.
  • Checkpoint management: Regularly save and upload the job’s progress and intermediate artifacts to cloud storage during execution.
  • Resume capability: Download and resume from the previously saved state to continue unfinished tasks after node reallocation.
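Here is a minimal sketch of this pattern, assuming a 100-frame job rendered five frames at a time and the Rclone remote configured later in this guide; the bucket, prefix, and job names are placeholders:
#!/usr/bin/env bash
# Sketch: chunked rendering with checkpoints (all names are placeholders)
JOB=classroom; FIRST=1; LAST=100; CHUNK=5
REMOTE=r2:BUCKET_NAME/results/$JOB

# Resume: pull any frames already rendered by a previous node
rclone copy "$REMOTE" output/

for ((start=FIRST; start<=LAST; start+=CHUNK)); do
  end=$((start + CHUNK - 1)); ((end > LAST)) && end=$LAST
  # Skip chunks whose last frame already exists (frame numbers are zero-padded)
  [ -f "output/frame_$(printf '%05d' "$end").png" ] && continue

  blender -b main.blend -o "output/frame_#####" -F PNG \
    -s "$start" -e "$end" -a -- --cycles-device CUDA

  # Checkpoint: upload this chunk's frames to cloud storage
  rclone copy output/ "$REMOTE"
done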
To access data in cloud storage, the job client can use Rclone, which operates over cloud APIs/SDKs to simplify data management. With Rclone Mount, you can also expose cloud storage as a local filesystem, enabling applications to interact with cloud data through standard filesystem APIs. However, applications running on SaladCloud can currently access cloud storage only via cloud APIs/SDKs; object storage mounts are not supported. You can either use Rclone to handle data transfer or build your application directly on top of the cloud APIs/SDKs.

Before migrating data, note that input files for Blender render jobs can be organized in different ways, depending on the use case:
  • Option 1: Make the main .blend file fully self-contained by packing all external resources (File → External Data → Pack Resources in the Blender Desktop Application), converting linked data to local, and baking or converting simulations so that all data is stored within the file. This ensures the .blend file is fully portable and can be processed on any Salad node without missing assets. The main drawback is that shared data may be duplicated across multiple jobs, increasing the total amount of data uploaded to the cloud.
  • Option 2: Compress the entire project folder—including all .blend files and assets—into a single archive (such as a ZIP or tar.gz file, as shown below) and upload it to cloud storage, where it will be downloaded and processed by Salad nodes. However, very large archives (tens of gigabytes) may reduce efficiency and slow processing on SaladCloud.
# 3 Example Projects on the job client
ubuntu@wsl-thinkpad:~$ ls -ls
    4 drwxr-xr-x 4 ubuntu ubuntu     4096 Sep  2 15:57 classroom
    4 drwxr-xr-x 2 ubuntu ubuntu     4096 Sep  3 10:05 junkshop
    4 drwxr-xr-x 2 ubuntu ubuntu     4096 Sep  3 10:05 monster

# The "classroom" project folder contains the main .blend file and all associated assets
ubuntu@wsl-thinkpad:~$ tree classroom/ -L 1
classroom/
├── assets
├── background.png
├── main.blend      # The main .blend file
└── textures

# Compress the "classroom" project folder into a tar.gz file
ubuntu@wsl-thinkpad:~$ tar -czvf classroom.tar.gz classroom

# Upload the tar.gz file from the job client to cloud storage, which can be downloaded and processed by Salad nodes
ubuntu@wsl-thinkpad:~$ ls -ls
70440 -rw-r--r-- 1 ubuntu ubuntu 72127136 Sep  8 11:08 classroom.tar.gz

# Uncompress the tar.gz file to retrieve the original project folder on Salad nodes
ubuntu@wsl-thinkpad:~$ tar -xzvf classroom.tar.gz
  • Option 3 (Recommended): The main .blend file contains references to external dependencies. By parsing the main .blend file and any linked .blend files, we can identify all required dependencies and selectively use them to construct a complete job folder to run Blender. This method allows data to be migrated from the local environment to cloud storage without re-packaging or compression. Once stored in the cloud, assets can be shared across multiple jobs, with each Salad node downloading only the files needed for its specific job. The following sections of this guide focus on this approach; the snippet below shows the parsing step in miniature.
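As a quick illustration of the parsing step, here is a sketch that uses Blender's own Python API to report a file's external dependencies (main.blend is a placeholder; linked .blend files can be scanned the same way):
# Print every external file referenced by main.blend, as stored in the file
# (typically relative paths prefixed with "//")
blender -b main.blend --python-expr \
  "import bpy; print('\n'.join(bpy.utils.blend_paths(absolute=False)))"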

Migrating Data to Cloud Storage

Follow this link to install Rclone on the job client, then use the provided code to configure Rclone with Cloudflare R2.
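For reference, the remote (named r2 to match the commands in this guide) can also be created from the command line; this is a sketch, with the account ID and keys as placeholders from your Cloudflare dashboard:
# Create an Rclone remote named "r2" for Cloudflare R2 (S3-compatible)
rclone config create r2 s3 \
  provider=Cloudflare \
  access_key_id=YOUR_ACCESS_KEY_ID \
  secret_access_key=YOUR_SECRET_ACCESS_KEY \
  endpoint=https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com \
  acl=private
Once configured, you can run Rclone commands to inspect and transfer data between your local system and the cloud: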
# List the contents of a specific folder (prefix) in the cloud
rclone lsf r2:BUCKET_NAME/PREFIX_NAME

# Copy the local test.txt into the specified folder (prefix) in the cloud
rclone copy test.txt r2:BUCKET_NAME/PREFIX_NAME

# rclone copy treats the destination as a folder; to copy to an explicit
# object name, use copyto instead
rclone copyto test.txt r2:BUCKET_NAME/PREFIX_NAME/test.txt
Rclone commands can be used for the initial bulk transfer from your local environment to the cloud. The following command synchronizes a local folder (NFS_MOUNTED_FOLDER) to a folder (prefix) in a cloud bucket, efficiently handling symlinks, parallel uploads, large files, and progress tracking, while staying within API rate limits:
# Assume the NFS_MOUNTED_FOLDER contains 3 subfolders: classroom, junkshop and monster
rclone sync NFS_MOUNTED_FOLDER r2:BUCKET_NAME/PREFIX_NAME \
    --copy-links \
    --transfers=16 \
    --checkers=32 \
    --fast-list \
    --s3-chunk-size 64M \
    --tpslimit 10 \
    --progress
Prior to data migration, ensure the following best practices are met in the NFS_MOUNTED_FOLDER:
  • Dependencies are stored using relative paths.
  • The .blend files may link to other .blend files to reuse assets, and multiple .blend files can share the same dependencies; however, no referenced files should be located outside the folder containing the main .blend files (a quick check is shown after this list).
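One way to perform that check (a sketch using the same Blender Python API as above; classroom/main.blend is a placeholder) is to resolve each dependency to an absolute path and flag anything that falls outside the project folder:
# Flag dependencies that resolve outside the folder containing main.blend
blender -b classroom/main.blend --python-expr "
import bpy, os
root = os.path.dirname(bpy.data.filepath)
for p in bpy.utils.blend_paths(absolute=True):
    if not os.path.abspath(p).startswith(root + os.sep):
        print('outside project folder:', p)
"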
After the initial bulk data migration, you can mount the cloud storage using Rclone Mount on the job client and access the data through standard filesystem APIs to perform incremental updates or retrieve rendered results. Here is an example:
rclone mount r2:BUCKET_NAME ./LOCAL_FOLDER \
  --vfs-cache-mode full \
  --vfs-cache-max-age 24h \
  --vfs-cache-max-size 10G \
  --buffer-size 128M \
  --dir-cache-time 1h \
  --attr-timeout 1m \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit 1G \
  --vfs-links \
  --allow-other \
  -v
With these settings, directory listings are cached for one hour. If new files are added to the bucket (e.g., by Salad nodes) during that period, they may not appear in the mount until the cache expires; a way to refresh on demand is shown at the end of this section. Cached file reads and writes are stored in ~/.cache/rclone/vfs and retained for 24 hours before being automatically deleted. While cached, Rclone may serve these local copies instead of fetching the latest version from the remote, so stale data could be used until the cache expires.

After mounting, you can use standard filesystem commands on the job client to interact with the cloud data as if it were local:
# The first listing is slow because it fetches data from the cloud; subsequent listings are fast because the data is cached locally
ubuntu@wsl-thinkpad:~$ cd ./LOCAL_FOLDER/PREFIX_NAME
ubuntu@wsl-thinkpad:~/LOCAL_FOLDER/PREFIX_NAME$ ls
classroom  junkshop  monster
ubuntu@wsl-thinkpad:~/LOCAL_FOLDER/PREFIX_NAME$ cd classroom
ubuntu@wsl-thinkpad:~/LOCAL_FOLDER/PREFIX_NAME/classroom$ tree -L 1
.
├── assets
├── background.png
├── main.blend
└── textures
You can also use Rclone commands to access the cloud data directly:
rclone lsf r2:BUCKET_NAME/PREFIX_NAME/classroom
rclone lsf r2:BUCKET_NAME/PREFIX_NAME/classroom/main.blend
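If files uploaded by Salad nodes need to appear before the directory cache expires, one option (assuming the mount was started with the --rc flag, which the mount example above does not include) is to refresh the cache on demand:
# Ask the running mount to re-read directory listings from the remote
rclone rc vfs/refresh recursive=true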

Generating Job Folders on SaladCloud

Since applications on SaladCloud access storage through cloud APIs/SDKs, the job should specify the path to the main .blend file in cloud storage. For example, if the job is to render the Classroom scene, the main .blend file is:
r2:BUCKET_NAME/PREFIX_NAME/classroom/main.blend
When the job is received, the Salad node downloads the main .blend file from cloud storage using cloud APIs/SDKs. It then parses this file, along with any linked .blend files (which are also downloaded), to identify all required dependencies and their paths relative to the main file's folder (r2:BUCKET_NAME/PREFIX_NAME/classroom). Finally, it fetches all additional assets (e.g., images, audio) concurrently to assemble a complete job folder (local_classroom) in which to run Blender:
blender -b local_classroom/main.blend -o output/classroom/frame_##### -F PNG -f 1 -- --cycles-device CUDA
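Here is a minimal sketch of this assembly flow using Rclone and Blender's Python API (bucket and prefix names are placeholders; for brevity it does not recurse into linked .blend files, which a complete implementation must also scan, as described above):
#!/usr/bin/env bash
# Sketch: assemble a job folder on a Salad node (names are placeholders)
SRC=r2:BUCKET_NAME/PREFIX_NAME/classroom
mkdir -p local_classroom

# 1. Fetch the main .blend file from cloud storage
rclone copy "$SRC/main.blend" local_classroom/

# 2. Write its dependency list, stripping Blender's "//" relative-path prefix
blender -b local_classroom/main.blend --python-expr "
import bpy
deps = [p.lstrip('/') for p in bpy.utils.blend_paths(absolute=False)]
open('deps.txt', 'w').write('\n'.join(deps))
"

# 3. Download only the referenced assets, in parallel
rclone copy "$SRC" local_classroom --files-from deps.txt --transfers=16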
Please refer to the code for a sample implementation of job folder generation, as well as an example of a generated job folder. For production workloads, GPU rendering tasks can be separated from I/O tasks using multiple processes or threads: while the GPU renders one job, the system can simultaneously download and prepare data for the next jobs. This approach improves CPU, I/O, and GPU utilization on Salad nodes. For more details, please refer to this link.
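A rough sketch of that overlap is shown below; fetch_job, render_job, upload_results, and pull_next_job_id are hypothetical helpers wrapping the download, render, upload, and job queue steps described earlier:
#!/usr/bin/env bash
# Sketch: overlap GPU rendering of the current job with I/O for the next one.
# fetch_job, render_job, upload_results, and pull_next_job_id are hypothetical
# helpers; pull_next_job_id is assumed to fail (non-zero) when the queue is empty.

current=$(pull_next_job_id)
fetch_job "$current"            # nothing to render yet: fetch in the foreground

while next=$(pull_next_job_id); do
  fetch_job "$next" &           # prefetch the next job in the background...
  render_job "$current"         # ...while the GPU renders the current one
  wait                          # make sure the prefetch has finished
  upload_results "$current" &   # upload without blocking the next render
  current="$next"
done
render_job "$current"
wait                            # wait for the previous job's background upload
upload_results "$current"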