Preprint / Version 1

Reproduction and Extension of CASR: A Cache-Based Adaptive Scheduler for Serverless Computing with Novel K=4 Queue Granularity Analysis

Authors:

  • Anmol Krishna, Kalinga Institute of Industrial Technology

DOI:

https://doi.org/10.31224/7051

Keywords:

serverless computing, cold start, reinforcement learning, PPO, caching, W-TinyLFU, container management, FaaS, cloud computing, reproduction study

Abstract

Serverless computing has emerged as a dominant cloud paradigm in which functions are executed on demand. Cold starts remain a critical performance bottleneck, causing delays of 0.1 to 80 seconds per invocation. This paper presents an independent reproduction and experimental extension of CASR (Cache-Based Adaptive Scheduler for Serverless Runtime), originally proposed by Chen et al. CASR combines W-TinyLFU caching with Proximal Policy Optimization (PPO) reinforcement learning to simultaneously minimize cold start latency and wasted memory time. We evaluate our implementation against five baseline algorithms across three workload types on the Microsoft Azure Functions 2019 dataset, which contains 1,332,032 daily invocations. CASR eliminates wasted memory time across all evaluated workloads, whereas every baseline policy incurs measurable memory waste. CASR also reduces the cold start rate by up to 14.929 percentage points compared to FaaSCache. We extend the original work by investigating K=4 queue granularity, finding a reduction of up to 5.9 percentage points in cold start rate over the original K=3 design. All implementation code is available at: https://github.com/Krishn4nmol/CASR_Project
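To illustrate the frequency-based admission idea behind W-TinyLFU-style container caching, the sketch below keeps a pool of "warm" containers and admits a new function only if its access frequency exceeds that of the least-frequently-used resident. This is a deliberately simplified, hypothetical illustration: it uses an exact counter rather than the count-min sketch and windowed eviction of real W-TinyLFU, and the class and method names are our own, not taken from the CASR implementation.

```python
from collections import Counter


class FrequencyAdmissionCache:
    """Minimal sketch of frequency-based cache admission (TinyLFU-style).

    Hypothetical simplification: exact Counter frequencies instead of the
    probabilistic count-min sketch used by production W-TinyLFU caches.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.freq = Counter()   # access-frequency estimate per function id
        self.cache = {}         # warm containers keyed by function id

    def access(self, func_id, container=None):
        """Record an invocation; return True on a warm start, False on cold."""
        self.freq[func_id] += 1
        if func_id in self.cache:
            return True  # warm start: container already resident
        # Cold start: decide whether to admit the new container.
        if len(self.cache) < self.capacity:
            self.cache[func_id] = container
        else:
            # Candidate is admitted only if it is "hotter" than the victim.
            victim = min(self.cache, key=lambda k: self.freq[k])
            if self.freq[func_id] > self.freq[victim]:
                del self.cache[victim]
                self.cache[func_id] = container
        return False
```

A rarely invoked function therefore cannot evict a frequently invoked one, which is the property that makes TinyLFU-style admission attractive for keeping hot serverless functions warm.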

Posted

2026-05-13