Columns: name/surname | affiliation | short bio | email | title | abstract | 30min/50min slot | Requests/comments

1. Tommaso Cucinotta (SSSA) <tommaso.cucinotta@santannapisa.it>
   Title: Results from using SCHED_DEADLINE for energy-aware optimization of RT DAGs on heterogeneous hardware
   Abstract: We optimize the deployment of randomly generated real-time DAGs with end-to-end deadlines and optionally FPGA-accelerated functions, minimizing the expected power consumption on heterogeneous boards (a big.LITTLE ODROID-XU4 and a Xilinx UltraScale+). The software tasks are scheduled with SCHED_DEADLINE, using periods automatically computed by the optimizer, and we show how to make the schedule more robust with the same power configuration. We also discuss why we needed to swap the order of deadline.c and rt.c among the scheduling classes.

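   Example (illustrative only, not code from the talk): a minimal user-space sketch of attaching a task to SCHED_DEADLINE via the sched_setattr() syscall, the interface through which optimizer-computed parameters would be applied. The runtime/deadline/period values below are made up, and running it requires appropriate privileges (e.g. CAP_SYS_NICE).

/* Minimal SCHED_DEADLINE attach sketch; parameter values are made up. */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* glibc does not define struct sched_attr; declare it as in sched_setattr(2). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;   /* ns */
	uint64_t sched_deadline;  /* ns */
	uint64_t sched_period;    /* ns */
};

int main(void)
{
	struct sched_attr attr = {
		.size           = sizeof(attr),
		.sched_policy   = SCHED_DEADLINE,
		.sched_runtime  =  2 * 1000 * 1000,  /*  2 ms of budget...  */
		.sched_deadline = 10 * 1000 * 1000,  /* ...within 10 ms...  */
		.sched_period   = 10 * 1000 * 1000,  /* ...every 10 ms      */
	};

	/* No glibc wrapper exists for sched_setattr(), so use the raw syscall. */
	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}

	/*
	 * Periodic work would run here; sched_yield() gives up the remaining
	 * runtime of the current instance until the next period.
	 */
	return 0;
}
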
2. Steven Rostedt (Google), "Kernel hacker" (short enough for you?) <rostedt@goodmis.org>
   Title: Over-commit bandwidth scheduler
   Abstract: We have several workloads where there is a need to assign bandwidth to several tasks that require more CPU, but where it is not a failure of the system if they do not get it. Ideally, all they need is a period and a runtime. They can then give up their runtime with a yield, allowing other tasks to use their time (hence the over-committing).

3. Steven Rostedt (Google), "Kernel hacker" (short enough for you?) <rostedt@goodmis.org>
   Title: Hierarchical scheduling
   Abstract: Brainstorming the idea (along with the bandwidth scheduler) of having a CFS/FIFO/RR scheduling policy inside a bandwidth scheduler. That is, a group of tasks that the bandwidth scheduler would enable, with their own scheduling policy then picking the next task. I implemented this while working at Siemens back in 2003-2006.

4. Andrea Righi (Canonical), kernel engineer (maybe too short) <andrea.righi@canonical.com>
   Title: Eco-friendly Linux kernel development: minimizing energy consumption during CI/CD
   Abstract: In this talk we propose a solution to improve energy efficiency during continuous integration and continuous deployment (CI/CD) of the Linux kernel. The solution focuses on compiling a kernel with the bare minimum support needed to run specific test cases in a virtualized environment, rather than re-compiling and re-deploying a full general-purpose kernel each time. This approach reduces energy consumption during the testing phase, leading to more efficient and sustainable development and debugging practices for the Linux kernel.

5. Joel Fernandes (Google), Kernel hotline operator <joel@joelfernandes.org>
   Title: Rewriting RT throttling
   Abstract: In this talk we propose a redesign of RT throttling to better manage CPU cycles and improve RT latency.

6. Joel Fernandes (Google), Kernel hotline operator <joel@joelfernandes.org>
   Title: Saving power by reducing timer interrupts
   Abstract: In this talk we show experimental results and propose changes that will make timers go easier on the system for the sake of power savings.

7. John Stultz (Google), Android Systems Team <jstultz@google.com>
   Title: Discussion around proxy execution
   Abstract: Discussion around the latest proxy-execution patch series, and directions forward.

8. David Vernet (Meta), kernel engineer <void@manifault.com>
   Title: Sched Ext: pluggable scheduling with BPF
   Abstract: Sched Ext is a new scheduler class which allows scheduling policies to be implemented as BPF programs. More details can be seen in the latest patch set sent to the upstream lists: https://lore.kernel.org/bpf/20230128001639.3510083-22-tj@kernel.org/T/
   Requests/comments: Ideally the presentation/discussion could take place on the 19th, or late in the day on the 18th, to accommodate one of the attendees who will be joining the conference a bit late.

9. Daniel Bristot (Red Hat), kernel engineer <bristot@redhat.com>
   Title: dlmon: runtime verification of sched deadline
   Abstract: A short discussion about the use of a temporal language for sched deadline runtime verification (RV).

10. Daniel Bristot (Red Hat), kernel engineer <bristot@redhat.com>
   Title: sched deadline: should I part or should I not?
   Abstract: Discussion about possibilities for semi-partitioned scheduling for hard real-time scheduling.

11. Vincent Guittot (Linaro), kernel engineer <vincent.guittot@linaro.org>
   Title: Improve the feedback of system pressure on CPU capacity to the scheduler
   Abstract: There are more and more sources of pressure on the compute capacity of the CPUs, operating on different time scales. There is currently the thermal pressure signal, which is monitored and used by the scheduler, but this signal can come from various subsystems with update frequencies ranging from kHz to dozens of ms. In addition, CPUs are more and more often capped in advance by the system to prevent overheating and uncontrolled thermal mitigation.

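   As a rough illustration of the idea (hypothetical helper names, not the actual kernel code): whatever the source of the capping, the scheduler would see a CPU capacity reduced by a pressure term, in the same spirit as today's thermal pressure signal.

/*
 * Illustrative sketch only: folding a generic "system pressure" term into
 * the capacity the scheduler uses. cpu_original_capacity() and
 * cpu_system_pressure() are hypothetical stand-ins for the per-CPU data the
 * real patches would maintain.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL  /* capacity of the biggest CPU at max frequency */

static unsigned long cpu_original_capacity(int cpu)
{
	return SCHED_CAPACITY_SCALE;        /* assume a big core for the example */
}

static unsigned long cpu_system_pressure(int cpu)
{
	return SCHED_CAPACITY_SCALE / 4;    /* pretend capping removes 25% of capacity */
}

/* Capacity the scheduler should use for load balancing and task placement. */
static unsigned long effective_cpu_capacity(int cpu)
{
	unsigned long cap = cpu_original_capacity(cpu);
	unsigned long pressure = cpu_system_pressure(cpu);

	return pressure < cap ? cap - pressure : 0;
}

int main(void)
{
	printf("effective capacity of CPU0: %lu\n", effective_cpu_capacity(0));
	return 0;
}
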
12. Saravana Kannan (Google), kernel engineer <saravanak@google.com>
   Title: DVFS for VMs
   Abstract: This is an update on progress since my LPC 2022 talk.
   The host has no visibility into runqueue utilization inside a vCPU, and the vCPU has no way to properly track the utilization of its threads without sufficient information about the host CPU it is running on. This causes poor task placement decisions inside the VM and poor DVFS decisions on the host, resulting in poor performance and power. We propose a solution that is showing promising results.

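   A minimal sketch of the kind of scaling a guest could apply if the host shared frequency and capacity information about the physical CPU backing a vCPU (the structure and helper below are illustrative assumptions, not the proposed interface):

/*
 * Sketch: make a vCPU's per-task utilization "frequency invariant" using
 * host-provided data, the same way the host scheduler scales utilization.
 * struct host_cpu_info and scale_vcpu_util() are illustrative only.
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

struct host_cpu_info {
	unsigned long cur_freq_khz;   /* current frequency of the host CPU    */
	unsigned long max_freq_khz;   /* maximum frequency of the host CPU    */
	unsigned long capacity;       /* capacity relative to the biggest CPU */
};

/*
 * Scale raw running time so the same amount of work yields the same
 * utilization regardless of where, and at which frequency, the vCPU ran.
 */
static unsigned long scale_vcpu_util(unsigned long raw_util,
				     const struct host_cpu_info *host)
{
	unsigned long util;

	util = raw_util * host->cur_freq_khz / host->max_freq_khz;
	util = util * host->capacity / SCHED_CAPACITY_SCALE;
	return util;
}

int main(void)
{
	/* Example: the vCPU ran on a little core (capacity 512) at half frequency. */
	struct host_cpu_info host = { 900000, 1800000, 512 };

	printf("scaled util: %lu\n", scale_vcpu_util(800, &host));
	return 0;
}
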
13. Gabriele Ara (SSSA), PhD candidate <gabriele.ara@santannapisa.it>
   Title: SCHED_DEADLINE meets DVFS: issues and a possible solution
   Abstract: The theoretical guarantees provided by Global EDF (gEDF) fall short when combining SCHED_DEADLINE with DVFS, especially for platforms characterized by CPU cores with asymmetric capacity, like big.LITTLE or other competing hardware. For these platforms, a much more compelling solution is to adopt a Partitioned EDF (pEDF) scheduling strategy, in which tasks are initially assigned to one core based on their bandwidth and never moved away from it. Partitioned scheduling offers stronger guarantees to DEADLINE-based real-time tasks than gEDF, but it comes with nuances that are hard to solve. As a result, we propose an Adaptively Partitioned EDF (apEDF) implementation for SCHED_DEADLINE, which can provide stronger guarantees than gEDF when used with DVFS and solves some of the issues of pEDF while keeping the number of task migrations low. We implemented this scheduling strategy for SCHED_DEADLINE, and we can show compelling results for its adoption alongside gEDF as an alternative task placement strategy. In addition, we will compare the effectiveness of combining the schedutil CPUFreq governor with apEDF or gEDF, showing results on both Intel and ARM-based embedded systems.

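   As a rough illustration of the partitioned placement idea, here is a first-fit sketch based on task bandwidth (an assumption-laden toy, not the actual apEDF implementation):

/*
 * Toy first-fit placement: put each DEADLINE task on the first CPU whose
 * remaining capacity can accommodate its bandwidth (runtime / period).
 */
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL
#define NR_CPUS 4

struct cpu {
	unsigned long capacity;   /* 1024 for big cores, less for LITTLE ones */
	unsigned long used_bw;    /* sum of bandwidths of tasks placed here   */
};

/* Bandwidth of a DEADLINE task, expressed on the same 0..1024 scale. */
static unsigned long task_bw(unsigned long runtime_ns, unsigned long period_ns)
{
	return runtime_ns * SCHED_CAPACITY_SCALE / period_ns;
}

/* Return the first CPU that can fit the task, or -1 if none can. */
static int first_fit(struct cpu cpus[], unsigned long bw)
{
	for (int i = 0; i < NR_CPUS; i++) {
		if (cpus[i].used_bw + bw <= cpus[i].capacity) {
			cpus[i].used_bw += bw;
			return i;
		}
	}
	return -1;   /* the caller would fall back to another policy (e.g. gEDF) */
}

int main(void)
{
	struct cpu cpus[NR_CPUS] = {
		{ 1024, 0 }, { 1024, 0 }, { 512, 0 }, { 512, 0 },
	};
	unsigned long bw = task_bw(3000000, 10000000);   /* 3 ms every 10 ms */

	printf("task with bw %lu placed on CPU %d\n", bw, first_fit(cpus, bw));
	return 0;
}
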
14. Lukasz Luba (Arm), kernel engineer <lukasz.luba@arm.com>
   Title: Dynamic Energy Model
   Abstract: Update of my LPC 2022 talk: https://www.youtube.com/watch?v=JXNOCaEPuyE

15. Dietmar Eggemann (Arm), kernel engineer <dietmar.eggemann@arm.com>
   Title: Utilization Boosting
   Abstract: More aggressive DVFS requests for non-steady-state graphical pipeline tasks, to meet UI frame-rendering requirements at 60, 90 and 120 fps.

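   For context, the existing per-task knob for requesting a higher DVFS floor is utilization clamping. Below is a minimal sketch, assuming a kernel built with CONFIG_UCLAMP_TASK, of raising a task's util_min via sched_setattr(); the talk is about boosting such tasks automatically rather than through this manual interface, and the value 512 is an arbitrary example.

/* Sketch: raise the calling task's utilization floor (uclamp min). */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#define SCHED_FLAG_KEEP_POLICY    0x08
#define SCHED_FLAG_KEEP_PARAMS    0x10
#define SCHED_FLAG_UTIL_CLAMP_MIN 0x20

/* sched_attr layout as documented in sched_setattr(2), including uclamp fields. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;   /* requested utilization floor, 0..1024 */
	uint32_t sched_util_max;
};

int main(void)
{
	struct sched_attr attr = {
		.size        = sizeof(attr),
		.sched_flags = SCHED_FLAG_KEEP_POLICY |
			       SCHED_FLAG_KEEP_PARAMS |
			       SCHED_FLAG_UTIL_CLAMP_MIN,
		.sched_util_min = 512,   /* ask for at least ~50% of max capacity */
	};

	/* pid 0: apply to the calling task (e.g. a frame-producing thread). */
	if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
		perror("sched_setattr");
		return 1;
	}
	return 0;
}
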
16. Valentin Schneider (Red Hat), kernel engineer <vschneid@redhat.com>
   Title: IPI deferral
   Abstract: Status update on IPI tracing, and where we are with respect to actually deferring some IPIs to the next user-to-kernel transition.

17. Gautham R. Shenoy (AMD), kernel engineer <gautham.shenoy@amd.com>
   Title: Split-L3 scheduling challenges: odd behaviours of some workloads
   Abstract: This is a follow-up to our LPC 2022 talk. Here we shall focus on a couple of different server workloads on a platform with a split LLC, and discuss the performance degradation caused by the preemption and task-migration logic in the current scheduler. We will also discuss the solutions we have tried to mitigate this degradation and seek input on the way forward.

18. Gautham R. Shenoy (AMD), kernel engineer <gautham.shenoy@amd.com>
   Title: Sched-scoreboard
   Abstract: In this talk we describe a simple toolkit that captures scheduler behaviour, both system-wide and per-task, with minimal overhead, even for long-running workloads. The tool uses the metrics collected by the scheduler via schedstat and exposes them in a human-readable format for easy analysis.

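   As an illustration of the kind of data the scoreboard builds on, the per-task schedstat file (/proc/<pid>/schedstat, available when schedstats support is enabled) reports cumulative on-CPU time, runqueue wait time and the number of timeslices. A minimal reader sketch (not the sched-scoreboard tool itself):

/* Read and print the three fields of /proc/<pid>/schedstat. */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64];
	unsigned long long run_ns, wait_ns, timeslices;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <pid>\n", argv[0]);
		return 1;
	}

	snprintf(path, sizeof(path), "/proc/%s/schedstat", argv[1]);
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu", &run_ns, &wait_ns, &timeslices) != 3) {
		fprintf(stderr, "unexpected format in %s\n", path);
		fclose(f);
		return 1;
	}
	fclose(f);

	printf("on-CPU time : %llu ns\n", run_ns);
	printf("rq wait time: %llu ns\n", wait_ns);
	printf("timeslices  : %llu\n", timeslices);
	return 0;
}
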
19. Giovanni Gherdovich (SUSE), kernel engineer <ggherdovich@suse.cz>
   Title: Preemption modes and virtualized workloads
   Abstract: Given a workload, what is the best combination of preemption flavors on the host and in the guest? Linux v5.12 introduced the boot-time command-line parameter "preempt": it is now a lot easier for users to experiment with "none", "voluntary" and "full" on their deployments. With virtual machines this choice is made twice, on the host and in the guest. We will present benchmarking data showing the result of this interaction on a few common workloads.

20. Dario Faggioli (SUSE), virtualization engineer <dfaggioli@suse.com>
   Title: Automatic vs. manual tuning for virtualization
   Abstract: Virtualization is known to be a workload that benefits *a lot* from careful manual partitioning of resources (CPU and memory in particular) and domain-specific tuning, such as exposing the physical topology to the VMs, providing the VMs with direct access to some resources, and so on. At the same time, Linux offers a variety of tools and mechanisms aimed at relieving users and administrators of the burden of handcrafting all the bits and pieces of resource allocation and performance optimization. These are tools like numad and tuned (just to name a couple operating at user level) and mechanisms like transparent hugepages and automatic NUMA balancing (looking down inside the kernel). Now, granted that manually tailored tuning is always going to provide the best performance, where do the various (combinations of) automatic solutions stand in comparison? Are there sweet spots that can be reached by combining the two approaches? Along which directions can we improve automatic tuning tools and mechanisms in order to try to bridge the gap? This talk will present benchmark numbers and experimental results, collected in various configurations, that will (hopefully) shed some light on the current situation and inspire future developments.