Edge environments are inherently resource-constrained, typically consisting of a small number of machines with restricted CPU and memory capacity. Traditional serverless solutions, such as virtual machine–based or container-based platforms, incur significant memory overhead and are therefore unsuitable for deployment in resource-limited edge settings. While these platforms perform well in large-scale data centers, they significantly reduce function density and introduce substantial cold-start latency when applied to edge environments.
Many edge applications are highly latency-sensitive. For example, in connected vehicle systems, when one vehicle reports a road hazard, nearby vehicles must receive the alert immediately. Similarly, in autonomous driving systems, control-related functions have strict real-time deadlines, whereas infotainment tasks are comparatively lower priority. Since serverless functions exhibit heterogeneous execution times and deadline requirements, the platform must efficiently schedule workloads to ensure deadline compliance despite constrained resources.
However, most existing serverless platforms rely on operating system schedulers such as the Completely Fair Scheduler (CFS) or Earliest Eligible Virtual Deadline First (EEVDF). These schedulers prioritize fairness over deadline awareness by dividing CPU time into small slices and frequently rotating among tasks. While fairness is desirable in general-purpose systems, it is often suboptimal for heterogeneous, latency-critical serverless workloads. Short time slices increase context-switch overhead and degrade performance, whereas long time slices may block short, urgent tasks, leading to missed deadlines.
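The contrast between fair time-slicing and deadline-aware dispatch can be made concrete with a minimal sketch. The snippet below illustrates the core of an Earliest-Deadline-First (EDF) pick-next decision: rather than rotating slices among tasks, the scheduler always dispatches the runnable task whose absolute deadline is nearest. The `struct task` fields and function names here are illustrative assumptions, not Sledge's actual data structures.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical task descriptor: only the fields needed to illustrate
 * deadline-aware dispatch; names are illustrative, not the runtime's. */
struct task {
	uint64_t absolute_deadline; /* completion deadline, e.g. in microseconds */
	int      id;
};

/* EDF: dispatch the runnable task with the nearest absolute deadline,
 * instead of rotating fair time slices among all tasks (as CFS/EEVDF do). */
static struct task *edf_pick_next(struct task *runnable, size_t n)
{
	if (n == 0)
		return NULL;
	struct task *best = &runnable[0];
	for (size_t i = 1; i < n; i++)
		if (runnable[i].absolute_deadline < best->absolute_deadline)
			best = &runnable[i];
	return best;
}
```

Under this policy a short, urgent function is dispatched immediately when it becomes runnable, instead of waiting behind a long task's time slice.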
SledgeScale is a serverless platform specifically designed for resource-constrained edge environments. It leverages WebAssembly for lightweight isolation, kernel-bypass networking (RDMA, DPDK) for low-latency communication, and a customized interrupt-driven user-level scheduler for deadline-aware execution. This design achieves strong isolation, low latency, high function density, and scalable performance while ensuring deadline compliance under limited edge resources. For a detailed description of the system design, implementation, and evaluation, please see our paper.
Many emerging applications require near real-time processing and responses that cannot be reliably achieved by today’s cloud infrastructures, thereby necessitating computation at the edge. Serverless computing is a promising architecture for edge environments because it enables fine-grained resource scaling based on application demand.
As edge applications grow more complex, they are increasingly composed of multiple interacting functions or microservices. Such workflows can naturally be represented as Directed Acyclic Graphs (DAGs). However, supporting DAG-based workloads in serverless platforms introduces new challenges in sandbox instantiation, inter-function communication, and scheduling.
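A common way to track DAG execution is dependency counting: a function becomes ready to launch once all of its predecessors have completed. The sketch below shows this bookkeeping; the `dag_node` layout and `dag_complete` helper are hypothetical illustrations, not the platform's actual structures.

```c
#include <stddef.h>

/* Illustrative DAG node: a function is ready to launch once all of its
 * predecessors have completed. Names are a sketch, not the runtime's API. */
#define MAX_NODES 8

struct dag_node {
	int remaining_deps;        /* unfinished predecessors */
	int successors[MAX_NODES]; /* indices of downstream functions */
	int n_successors;
};

/* Mark node `done` complete; append any successors that just became
 * ready to `ready` (starting at index n_ready) and return how many. */
static int dag_complete(struct dag_node *nodes, int done,
                        int *ready, int n_ready)
{
	int added = 0;
	for (int i = 0; i < nodes[done].n_successors; i++) {
		int s = nodes[done].successors[i];
		if (--nodes[s].remaining_deps == 0)
			ready[n_ready + added++] = s;
	}
	return added;
}
```

For a diamond-shaped workflow (0 fans out to 1 and 2, which both feed 3), completing node 0 makes 1 and 2 ready, while 3 waits until both of its predecessors finish.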
In this project, we extend Sledge, a WebAssembly-based serverless runtime, to support efficient execution of DAG functions. Sledge's lightweight design enables extremely fast sandbox instantiation (under 30 μs per invocation), significantly mitigating cold-start overhead, which is particularly detrimental to DAG workloads.
To avoid expensive coordination through shared storage, we introduce a high-performance in-memory communication channel for propagating intermediate data across the DAG. Furthermore, we incorporate deadline-awareness into DAG execution by providing a pluggable set of user-level schedulers, such as Earliest Deadline First (EDF) and Shortest Remaining Slack First (SRSF), to ensure timely completion under latency constraints. For a detailed description, please see our paper.
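An in-memory channel between DAG stages can be modeled as a fixed-capacity byte queue shared by a producer function and its downstream consumer, avoiding any round trip through external storage. The sketch below is a simplified single-threaded model under that assumption; the `channel` layout, capacity, and function names are illustrative, not the runtime's actual implementation.

```c
#include <stddef.h>

/* Sketch of an in-memory channel between DAG stages: a fixed-capacity
 * byte queue written by a producer function and drained by its consumer.
 * Single-threaded illustrative model; names and layout are hypothetical. */
#define CHAN_CAP 4096

struct channel {
	unsigned char buf[CHAN_CAP];
	size_t head, tail; /* head: next read offset, tail: next write offset */
};

/* Copy `len` bytes of intermediate output into the channel.
 * Returns 0 on success, -1 if the channel lacks space. */
static int chan_push(struct channel *c, const void *data, size_t len)
{
	if (CHAN_CAP - (c->tail - c->head) < len)
		return -1;
	for (size_t i = 0; i < len; i++)
		c->buf[(c->tail + i) % CHAN_CAP] = ((const unsigned char *)data)[i];
	c->tail += len;
	return 0;
}

/* Drain up to `len` bytes into `out`; returns the number of bytes read. */
static size_t chan_pop(struct channel *c, void *out, size_t len)
{
	size_t avail = c->tail - c->head;
	size_t n = len < avail ? len : avail;
	for (size_t i = 0; i < n; i++)
		((unsigned char *)out)[i] = c->buf[(c->head + i) % CHAN_CAP];
	c->head += n;
	return n;
}
```

Because the buffer lives in memory shared along the DAG edge, passing intermediate results costs a memory copy rather than a storage round trip.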