Call For Papers
As we move into a new era of exponential data growth and rising AI-driven demands, the reliable management of large-scale datasets has become one of the most pressing challenges in distributed systems.
With distributed systems now operating at global scale and underpinning essential services and infrastructures, it is critical to address emerging challenges in data consistency, fault tolerance, storage architectures, and adaptive processing. Data-centric workloads from AI, IoT, and other modern applications demand efficient management of structured and unstructured data, seamless interoperability with heterogeneous infrastructures (e.g., cloud, HPC), and support for diverse hardware and technologies (e.g., NVMe, CXL, high-end GPUs). To meet these demands, systems must deliver high-throughput data storage, transfer, and analysis at scale without sacrificing performance. Equally important are reproducibility and data traceability, robust security and privacy enforcement, and deep observability of system behavior.
The RLDM workshop aims to unite experts and practitioners from academia and industry to share recent developments, explore cutting-edge trends, and foster discussion on scientific and technical advances in the reliable management of ever-growing data volumes. We particularly encourage submissions that propose novel solutions, report experimental results, or present early ideas to shape the future of data-driven distributed systems.
Topics of Interest
Topics of interest include, but are not limited to:
- Storage systems for large-scale scenarios, including distributed databases, file systems, key-value stores, and object storage platforms.
- Reliability and consistency in large-scale data management, covering aspects such as fault tolerance, data recovery, and crash consistency.
- Protocols, methodologies, and tools for efficient and scalable data management.
- Monitoring, tracing, and debugging techniques for data-intensive and distributed applications.
- Data security, privacy, governance, and regulatory compliance in diverse and heterogeneous environments.
- Optimizations for large-scale data management, including caching, tiering, prefetching, compression, and deduplication.
- Peer-to-peer systems and large-scale collaborative platforms that facilitate decentralized interaction and data sharing.
- Data management systems that support AI workloads, addressing the specific needs of AI pipelines and datasets.
- AI-driven approaches for enhancing data management, including auto-configurable storage, adaptive solutions, and intelligent resource allocation.
- Emerging hardware and cutting-edge technologies for data management, including Compute Express Link (CXL), persistent memory, and resource disaggregation.
Types of Submissions
We welcome submissions from both academia and industry in the following categories:
- Research Papers (up to 6 pages): Original, unpublished work that may include novel research contributions, system architectures, experimental results, or work in progress with initial validation.
- Fast Abstracts (up to 2 pages): Early-stage research, position papers, or preliminary ideas and findings, intended to stimulate discussion and receive early feedback from the community.
Please see the submission page for further information.