Last Updated: Sep 15, 2025

SaladCloud consists of tens of thousands of globally distributed nodes, primarily high-performance desktop computers and servers running the SaladCloud agent. Each node is equipped with either consumer-grade or data center GPUs, along with varying CPU and memory configurations. Node distribution is uneven across regions and countries: consumer GPU nodes in the US and Canada account for 50–60% of the total, while nearly all data center GPU nodes are currently located in the US. When these devices are idle, SaladCloud leverages them to run workloads by dynamically pulling and executing container images. Once a container group is stopped, the image and any associated runtime data are removed from the allocated nodes, which are then released. Due to its distributed architecture, nodes can vary in distance, latency, network throughput to specific endpoints, startup times, uptimes, and processing capabilities—factors that should be carefully considered when designing applications on SaladCloud.

Startup Times

When a container group starts, its image is first pulled from your registry into SaladCloud's internal caches in Europe and the US (this happens only once), and then distributed to the allocated nodes. Startup times can range from a few minutes to considerably longer, depending on image size and network conditions. Nodes closer to a cache or with higher throughput may come online sooner. Using smaller images to reduce transfer and decompression time, as well as deploying workloads to specific regions, can further improve startup speed. The 2024 test results show a batch transcription pipeline running on 100 container instances with an 8.36 GB image, across all consumer GPU types and regions. The test began at 22:00 and ran for 10 hours. The metric reflects the system's actual processing capacity over time, measured as the number of videos downloaded and transcribed per second, with an average video duration of 3,515 seconds. Key observations from this test include:
  • Nodes began coming online, processing requests, and reporting results within 10 minutes of the test start.
  • After 30 minutes, the system reached 70–80% of its maximum measured capacity, transcribing over 2.5 videos per second.
  • The measured processing capacity fluctuated due to variations in video length and changes in the number of running instances caused by nodes going offline and being reallocated.
SaladCloud’s data center nodes are typically deployed near the Internet backbone and offer higher bandwidth and processing capacity, enabling faster startup.
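As a rough illustration of how image size and network speed drive startup time, the sketch below estimates transfer plus decompression time for the 8.36 GB image from the 2024 test. The bandwidth and decompression rates are illustrative assumptions, not SaladCloud measurements:

```python
# Hypothetical back-of-envelope estimator for node startup time.
# The bandwidth figures below are assumptions for illustration only.

def estimate_startup_seconds(
    image_gb: float,
    download_mbps: float = 300.0,    # assumed node download speed (megabits/s)
    decompress_mbps: float = 800.0,  # assumed decompression throughput (megabits/s)
) -> float:
    """Rough startup estimate: image transfer time plus decompression time."""
    image_megabits = image_gb * 8 * 1000  # GB -> megabits (decimal units)
    transfer_s = image_megabits / download_mbps
    decompress_s = image_megabits / decompress_mbps
    return transfer_s + decompress_s

# The 8.36 GB image from the 2024 test on an assumed 300 Mbps link:
print(round(estimate_startup_seconds(8.36) / 60, 1), "minutes")  # → 5.1 minutes
```

On these assumed figures a node would be pulling and unpacking the image for roughly five minutes, consistent with the observation that nodes began processing within 10 minutes of the test start.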

Uptimes

A node can go offline for various reasons after coming online, in which case a new node is allocated to continue processing. The most common reason is that node owners—whether individuals or data center providers—temporarily reclaim their resources for their own use, pausing resource sharing. As a result, many nodes fail quickly, running for only a few minutes after starting. Nodes that survive this initial period, however, tend to remain stable, as their owners continue to earn revenue. Priority also plays a role: during periods of high SaladCloud usage, higher-priority container groups can preempt lower-priority ones when a higher-paying job becomes available. The 2024 test results show a workload at batch priority running on 100 container instances across all consumer GPU types and regions for a duration of 7 days:
  • Over 3,000 nodes were utilized, averaging more than 400 nodes per day, with many nodes used multiple times during the period.
  • On average, a node remained online for about 5 hours at a time before going offline.
  • Approximately 34% of nodes exited within the first 60 minutes.
  • More than 30% of nodes ran longer than the average duration of 5 hours.
SaladCloud’s data center nodes are generally more stable when running workloads at high priority and are less likely to be interrupted by their owners, though this cannot be fully guaranteed.
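Under the simplifying assumption that node departures are independent and replacements are immediate, the measured ~5-hour mean uptime implies a churn rate that can be sketched as:

```python
# Hypothetical churn estimator: given the ~5 h mean uptime measured in the
# 7-day test, how many node replacements per day should a container group
# of N instances expect? Assumes independent failures and instant reallocation.

def expected_replacements_per_day(instances: int, mean_uptime_hours: float = 5.0) -> float:
    return instances * 24.0 / mean_uptime_hours

print(expected_replacements_per_day(100))  # → 480.0
```

For 100 instances this yields 480 replacements per day, in the same ballpark as the "more than 400 nodes per day" observed in the test (slightly higher, since reallocation is not actually instantaneous).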

Node Run-to-Request Ratio

Due to variations in node uptimes and startup times, it is not possible to consistently achieve the full requested compute capacity on SaladCloud. When a node goes offline and a replacement is allocated, additional time is required to download and decompress the image before the new node becomes operational. These long and variable startup times also make real-time autoscaling impractical. Results from the same test—100 instances over 7 days—show that both the hourly number of running nodes and the number of reallocated nodes fluctuated; on average, however, the node run-to-request ratio reached 90%. As a rule of thumb, provision ~110% of the required nodes about 30 minutes in advance. This adds little to no extra cost, as instances are only billed while actively running. SaladCloud's data center nodes provide 8 GPUs per node. While these nodes are generally more stable and less likely to be interrupted, any node going offline removes all 8 GPUs from your resource pool, so consider this impact when deploying workloads.
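The ~110% rule of thumb follows directly from the ~90% run-to-request ratio. A minimal sketch of the arithmetic:

```python
import math

# Given a target number of concurrently running nodes and an expected
# run-to-request ratio (~0.9 in the 7-day test), compute how many
# replicas to request so the target is met on average.

def replicas_to_request(target_running: int, run_to_request_ratio: float = 0.9) -> int:
    return math.ceil(target_running / run_to_request_ratio)

print(replicas_to_request(100))  # → 112
```

Requesting 112 replicas for a 100-node target is ~112% of the requirement, matching the ~110% guideline; a lower observed ratio for your specific workload would call for a proportionally larger request.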

Network Speed, Variance and Throughput

Salad nodes with consumer GPUs often exhibit asymmetric bandwidth, as many operate on residential networks with high download speeds—frequently hundreds of Mbps—but lower upload speeds, sometimes only tens of Mbps. The 2025 test results, based on over 200 consumer GPU nodes performing upload and download tasks, reveal significant speed variance and bandwidth asymmetry. Nevertheless, a substantial number of nodes still provide symmetric bandwidth and strong overall performance. Most of SaladCloud's data center nodes offer symmetric bandwidth, delivering several gigabytes per second in both directions. Round-trip time (RTT) is primarily determined by the geographical distance and underlying network latency between two endpoints, and it plays a critical role in data transfer throughput. Since Salad nodes are globally distributed, nodes with identical network speeds in different regions can exhibit varying throughput to a specific endpoint, such as a cloud storage bucket in a particular location. Transfer tools and algorithms also matter—using chunked and parallel data transfers can better utilize the available end-to-end bandwidth. If your applications require higher throughput with lower latency, it is recommended to perform initial checks and apply custom filters to select nodes that meet your specific network requirements, and to adopt advanced transfer tools and algorithms. Please check this guide for more information.
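A minimal sketch of chunked, parallel transfers using HTTP Range requests, one common way to better utilize end-to-end bandwidth on high-RTT links. The URL and object size are assumed to be known in advance, and the server must support Range requests:

```python
# Sketch of chunked, parallel downloading over HTTP Range requests.
# Multiple in-flight requests help fill the bandwidth-delay product
# on high-RTT paths where a single stream would underutilize the link.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import Request, urlopen


def byte_ranges(size: int, chunk_size: int) -> list[tuple[int, int]]:
    """Split [0, size) into inclusive (start, end) byte ranges."""
    return [(s, min(s + chunk_size, size) - 1) for s in range(0, size, chunk_size)]


def fetch_chunk(url: str, start: int, end: int) -> bytes:
    req = Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urlopen(req, timeout=60) as resp:
        return resp.read()


def parallel_download(url: str, size: int,
                      chunk_size: int = 8 * 1024 * 1024, workers: int = 8) -> bytes:
    """Download an object of known size as parallel chunks, then reassemble."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        chunks = pool.map(lambda r: fetch_chunk(url, *r), byte_ranges(size, chunk_size))
    return b"".join(chunks)
```

Most cloud storage SDKs (e.g. the S3 and GCS clients) expose equivalent multipart or ranged transfer options that are preferable in practice; this sketch only illustrates the underlying idea.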

Processing Performance Variance and Fluctuations

Nodes with the same GPU type may deliver different performance due to factors such as system configuration (CPU, RAM), clock speed, cooling, and power limits. Node performance may also fluctuate over time due to the shared nature of consumer GPUs on SaladCloud. When node owners begin using their devices, they are likely to disable the SaladCloud agent, triggering node reallocation. However, if the agent remains active while other applications are running, workload performance can be affected. The 2025 test results, based on over 300 nodes with the same consumer GPU type performing the same rendering task, show varying processing times that appear to follow at least two distinct underlying distributions. Similar patterns can be observed in other applications, such as image generation and LLM workloads. Performing an initial check and monitoring real-time performance are essential for selecting appropriate nodes and ensuring they remain in an optimal state for application execution. For more details, please refer to this guide. SaladCloud's data center nodes are fully reserved during workload execution and can consistently deliver stable performance.
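One way to implement an initial check is to run a short benchmark when the container starts and exit with a non-zero code if the node underperforms, assuming the platform then reallocates the workload to another node. The toy CPU benchmark and threshold below are illustrative assumptions, not a SaladCloud API; a real check would exercise the GPU with a kernel representative of your workload:

```python
# Sketch of an initial node performance check. The benchmark and the
# MIN_GFLOPS threshold are hypothetical; substitute a GPU kernel that
# matches your actual workload.
import sys
import time


def benchmark_gflops(n: int = 64, iters: int = 2) -> float:
    """Crude pure-Python matrix-multiply benchmark; returns GFLOP/s."""
    a = [[1.0] * n for _ in range(n)]
    b = [[1.0] * n for _ in range(n)]
    start = time.perf_counter()
    for _ in range(iters):
        [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    return (2.0 * n**3 * iters) / elapsed / 1e9


MIN_GFLOPS = 0.001  # hypothetical threshold for this toy benchmark

if __name__ == "__main__":
    score = benchmark_gflops()
    print(f"node benchmark: {score:.4f} GFLOP/s")
    if score < MIN_GFLOPS:
        sys.exit(1)  # assumed: a failed container triggers node reallocation
```

The same benchmark can be re-run periodically alongside the workload, so a node that degrades after passing the initial check (for example, because the owner starts other applications) can also be detected and abandoned.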