Scaling
Convox provides several approaches to scaling your services, from simple manual adjustments to fully automated event-driven scaling.
Autoscaling
Configure horizontal scaling based on CPU and memory utilization, set initial resource defaults, manually adjust replica counts, and allocate GPUs for accelerated workloads. This is the starting point for most scaling needs.
See Autoscaling for details.
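As a rough illustration, a convox.yml scale block combining these settings can look like the sketch below. The service name `web` and all values are illustrative, and the exact keys supported depend on your rack version; see the Autoscaling docs for the authoritative reference.

```yaml
services:
  web:
    scale:
      count: 2-10    # min-max replica range for horizontal autoscaling
      cpu: 256       # initial CPU reservation per replica (millicores)
      memory: 512    # initial memory reservation per replica (MB)
      targets:
        cpu: 70      # add replicas when average CPU utilization exceeds 70%
        memory: 80   # add replicas when average memory utilization exceeds 80%
```

A fixed replica count (for example `count: 3`) disables utilization-based scaling and pins the service at that size.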
Vertical Pod Autoscaler (VPA)
Automatically right-size CPU and memory requests for your services based on observed usage. VPA adjusts resource allocation per replica rather than changing the number of replicas.
See VPA for details.
Availability: AWS only
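Under the hood this is the Kubernetes Vertical Pod Autoscaler. Convox manages the object for you, but conceptually the per-service configuration resembles the following sketch (names are illustrative):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: web-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # the service deployment to right-size (illustrative)
  updatePolicy:
    updateMode: "Auto"   # apply recommendations by evicting and recreating pods
```

Note that because VPA resizes requests per replica, it is typically used instead of, not alongside, CPU/memory-based horizontal autoscaling on the same resource.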
KEDA Autoscaling
Event-driven autoscaling powered by KEDA. Scale from external signals like SQS queue depth, cron schedules, Datadog queries, CloudWatch metrics, or any of KEDA's 60+ supported scalers. Supports scale-to-zero for cost optimization.
See KEDA Autoscaling for details.
Availability: AWS only
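To illustrate the event-driven model, a raw KEDA ScaledObject that scales a worker on SQS queue depth looks roughly like this. Convox's own configuration surface may differ (see the linked doc); the queue URL, names, and thresholds here are placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker         # the deployment to scale (illustrative)
  minReplicaCount: 0     # scale-to-zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/jobs  # placeholder
        queueLength: "5"   # target messages per replica
        awsRegion: us-east-1
```

Swapping the trigger `type` (for example to `cron` or `datadog`) changes the signal while the scaling mechanics stay the same.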
Datadog Metrics Autoscaling
Scale services based on business-level metrics from Datadog via HPA external metrics. This is useful when scaling decisions depend on request rates, queue depths, or other application-specific signals. If you use KEDA, you can also scale on Datadog metrics via the KEDA Datadog scaler (see KEDA Autoscaling).
See Datadog Metrics Autoscaling for details.
Availability: all providers (requires Datadog Cluster Agent)
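Conceptually, the Datadog Cluster Agent exposes a Datadog query as an external metric that an HPA can then target. A hedged sketch of the two pieces (the query, names, and target value are illustrative):

```yaml
apiVersion: datadoghq.com/v1alpha1
kind: DatadogMetric
metadata:
  name: requests-per-second
spec:
  # Any Datadog query; here, request rate for a hypothetical web service
  query: avg:trace.http.request.hits{service:web}.as_rate()
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: External
      external:
        metric:
          # datadogmetric@<namespace>:<name> routes the HPA to the DatadogMetric above
          name: datadogmetric@default:requests-per-second
        target:
          type: Value
          value: "100"   # target requests/sec across the service (illustrative)
```

Convox wires these pieces up for you; the fragment above only shows the mechanism.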
Workload Placement
Control which nodes your services run on using custom node groups, node selectors, and dedicated node pools. Use this to isolate workloads, target GPU nodes, or optimize cost by routing services to specific instance types.
See Workload Placement for details.
Availability: AWS and Azure
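In Kubernetes terms, placement comes down to labeling nodes and selecting them from the workload spec. A schematic fragment, with an assumed `workload=gpu` node label and placeholder names (Convox exposes this through its own configuration; see the linked doc):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job              # illustrative
spec:
  nodeSelector:
    workload: gpu              # only nodes labeled workload=gpu are eligible
  containers:
    - name: train
      image: myorg/train:latest   # placeholder image
```

Pairing a selector like this with a dedicated node group keeps GPU or otherwise specialized instances reserved for the services that actually need them.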