1 of 16

Challenger 2.0 and The DEBS ‘25 Grand Challenge

Jawad Tahir
jawad.tahir@tum.de

13.02.2025

2 of 16

The Grand Challenge


  • The Grand Challenge (GC) is an annual programming competition organized by the DEBS community
  • A dataset is provided to participants
  • Participants are required to derive insights from the dataset
    • Problem statements (queries) are given
  • GC uses Challenger to disseminate the dataset and benchmark participants’ solutions
  • The most performant solution wins.

3 of 16

DEBS ‘25 Grand Challenge

  • Defect Monitoring in Additive Manufacturing
    • Laser Powder Bed Fusion
  • Problem
    • Defects due to porosity
  • Dataset
    • Optical tomography images
    • Indicating temperature
  • Query
    • Find clusters of defective regions in real time


http://pencerw.com/feed/2014/12/4/island-scanning-and-the-effects-of-slm-scanning-strategies

4 of 16

Challenger


[Architecture diagram: Participants deploy their solution to the evaluation infrastructure (VMs) and benchmark it against the RPC Service. The solution drives the benchmark over RPC: create_benchmark(conf) → bID, start_benchmark(bID), next_batch(bID) → batch, result_Q1(ResultQ1) → ack, result_Q2(ResultQ2) → ack, end_benchmark(bID) → ack. The RPC Service stores data in PostgreSQL, and participants see their results on the web portal.]

5 of 16

Pain points

  • Transmission protocol
    • gRPC
      • Lower reach
  • Database
    • PSQL
      • Slower
      • High maintenance
  • Evaluation Infrastructure
    • Virtual machines
      • Blocks resources
      • Limited number of participants

Challenger 2.0

  • Transmission protocol
    • REST
      • Bigger reach
  • Database
    • MongoDB
      • Better performance
      • Reduced maintenance time
  • Evaluation Infrastructure
    • Kubernetes cluster
      • Dynamic resource allocation removes the limit on the number of participants
      • Fault-tolerance evaluations*


6 of 16

Challenger 2.0 - Architecture

  • Develop solutions locally
  • Containerize the solution
  • Submit Kubernetes job
  • Uses namespaces to sandbox solutions
  • Defines ResourceQuotas and LimitRanges to ensure fair evaluations (see the sketch after this list)
  • Reclaims resources
  • Time Series collections improve performance
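
To make the sandboxing bullets concrete, here is a minimal sketch of what a per-namespace quota and default container limit could look like; the object names and namespace value are placeholders, and the numbers are taken from the resource limits listed on the deployment slide:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: participant-quota          # placeholder name
  namespace: <your-namespace>      # the namespace assigned at registration
spec:
  hard:
    limits.cpu: "4"                # 4 CPU cores per namespace
    limits.memory: 8Gi             # 8 GB memory per namespace
---
apiVersion: v1
kind: LimitRange
metadata:
  name: participant-defaults       # placeholder name
  namespace: <your-namespace>
spec:
  limits:
    - type: Container
      default:
        cpu: "1"                   # default limit per container: 1 CPU core
        memory: 2Gi                # default limit per container: 2 GB memory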


[Architecture diagram: Participants submit a Kubernetes job through the web portal; the containerized solution runs in the Kubernetes cluster alongside other participants' solutions and talks to the REST Service, which stores data in MongoDB. Participants see their results on the web portal.]

7 of 16

First steps


https://challenge2025.debs.org/

Step 1: Register

Step 2: Login

Step 3: Get API token and namespace

8 of 16

How to benchmark solutions

  • /create
    • Create a benchmark
  • /start
    • Start the evaluation
  • /next_batch
    • Get the next batch of data
  • /result
    • Submit the result of a batch
  • /end
    • End benchmark


REST API @ http://challenge2025.debs.org:52923/api
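
Putting these endpoints together, a rough sketch of the benchmark loop in Python; the endpoint paths and base URL come from this slide, while the header format, payload fields, response shapes, and the batch-exhaustion signal are assumptions to be checked against the official documentation:

import requests

API = "http://challenge2025.debs.org:52923/api"
HEADERS = {"Authorization": "Bearer <API token from the web portal>"}  # assumed header format


def my_query(batch):
    """Placeholder for your actual stream-processing logic."""
    return {}


# /create: create a benchmark from a configuration (payload and response shape are assumptions)
bench_id = requests.post(f"{API}/create", json={"name": "my-solution"}, headers=HEADERS).json()

# /start: start the evaluation
requests.post(f"{API}/start/{bench_id}", headers=HEADERS)

# /next_batch + /result: process batches until the dataset is exhausted
while True:
    resp = requests.get(f"{API}/next_batch/{bench_id}", headers=HEADERS)
    if resp.status_code == 404:  # assumed signal for "no more batches"
        break
    result = my_query(resp.json())
    requests.post(f"{API}/result/{bench_id}", json=result, headers=HEADERS)

# /end: end the benchmark
requests.post(f"{API}/end/{bench_id}", headers=HEADERS)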

9 of 16

How to create deployment scripts

  • Dockerize the solution
  • Create a Kubernetes job (a sketch follows at the end of this slide)
    • Use the namespace created for you
    • Optional: Can configure resource limits
  • Resource limits per namespace
    • 4 CPU cores
    • 8 GB memory
  • Default limit per container
    • 1 CPU core
    • 2 GB Memory


Upload the file @ https://challenge2025.debs.org/deployment/
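
A minimal sketch of such a job manifest; the job name, image reference, and namespace are placeholders, and the resource limits are optional (containers default to 1 CPU core and 2 GB of memory if they are omitted):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-solution                  # placeholder job name
  namespace: <your-namespace>        # the namespace assigned at registration
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: solution
          image: registry.example.com/my-solution:latest  # placeholder image
          resources:
            limits:
              cpu: "2"               # optional; must fit the 4-core namespace quota
              memory: 4Gi            # optional; must fit the 8 GB namespace quota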

10 of 16

DEMO


11 of 16

Resource Quota - Benchmarking


[Benchmark plots: a CPU stress workload requesting 12 cores, run under container resource limits of 1, 2, and 4 cores to show that the limits are enforced.]

12 of 16

Planned features

  • Deployment status
  • Deployment logs
  • Dashboard


13 of 16

FEEDBACK / REQUESTS / QUESTIONS


jawad.tahir@tum.de

14 of 16

See benchmark history

  • See an overview of previous benchmarks


15 of 16

How to run solutions on evaluation infrastructure

  • Dockerize the solution
  • Create a Kubernetes job
  • Upload the job YAML
  • Additionally
    • Inject failures in the evaluation infrastructure
    • Two types of failure
      • Network
      • Process
    • Inject failures at fixed intervals
  • Delete previous deployments
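
For the clean-up step, if you have direct kubectl access to your assigned namespace, deleting an earlier run could look like the following (the job name is whatever you used in your manifest):

# delete a previous job and the pods it created
kubectl delete job my-solution -n <your-namespace>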

