<?xml version='1.0' encoding='utf-8' ?>
<!-- Made with love by pretalx v2024.2.0.dev0. -->
<schedule>
    <generator name="pretalx" version="2024.2.0.dev0" />
    <version>0.8</version>
    <conference>
        <title>All Systems Go! 2019</title>
        <acronym>ASG2019</acronym>
        <start>2019-09-20</start>
        <end>2019-09-22</end>
        <days>3</days>
        <timeslot_duration>00:05</timeslot_duration>
        <base_url>https://cfp.all-systems-go.io</base_url>
        <logo>https://cfp.all-systems-go.io/media/ASG2019/img/asg-logo-ondark.svg</logo>
        <time_zone_name>Europe/Berlin</time_zone_name>
        
        
    </conference>
    <day index='1' date='2019-09-20' start='2019-09-20T04:00:00+02:00' end='2019-09-21T03:59:00+02:00'>
        <room name='Loft' guid='f9590e89-4284-5247-b082-43683bed6db0'>
            <event guid='54225488-d685-57fb-9065-481bd2450f5e' id='170'>
                <room>Loft</room>
                <title>Opening</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-20T09:30:00+02:00</date>
                <start>09:30</start>
                <duration>00:10</duration>
                <abstract>Opening of All Systems Go!</abstract>
                <slug>ASG2019-170-opening</slug>
                <track></track>
                
                <persons>
                    <person id='77'>Chris Kuehl</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/A3KZGD/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/A3KZGD/feedback/</feedback_url>
            </event>
            <event guid='3d439f1d-e67a-5e92-96b6-3a9c5ad30965' id='162'>
                <room>Loft</room>
                <title>Effective infrastructure monitoring with Grafana</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T09:45:00+02:00</date>
                <start>09:45</start>
                <duration>00:40</duration>
                <abstract>In this talk David will show Grafana&apos;s advanced features to manage a fleet of Linux hosts. He will also show relevant metrics and logging datasources and how they can be combined to get a full picture of what is going on.</abstract>
                <slug>ASG2019-162-effective-infrastructure-monitoring-with-grafana</slug>
                <track></track>
                
                <persons>
                    <person id='106'>David Kaltschmidt</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/XJAWA7/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/XJAWA7/feedback/</feedback_url>
            </event>
            <event guid='2dd3d338-37ae-5b15-8292-1f124c0235a4' id='159'>
                <room>Loft</room>
                <title>Traceloop for systemd and Kubernetes + Inspektor Gadget</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T10:30:00+02:00</date>
                <start>10:30</start>
                <duration>00:40</duration>
                <abstract>Presenting [traceloop](https://github.com/kinvolk/traceloop), a &#8220;time travel&#8221; tracing tool to trace system calls in cgroups using BPF and overwritable ring buffers.</abstract>
                <slug>ASG2019-159-traceloop-for-systemd-and-kubernetes-inspektor-gadget</slug>
                <track></track>
                
                <persons>
                    <person id='28'>Alban Crequy</person>
                </persons>
                <language>en</language>
                <description>Many people use the &#8220;strace&#8221; tool to synchronously trace system calls using ptrace. [Traceloop](https://github.com/kinvolk/traceloop) similarly traces system calls but asynchronously in the background, using BPF and tracing per cgroup. I&#8217;ll show how it can be integrated with systemd and with Kubernetes via [Inspektor Gadget](https://github.com/kinvolk/inspektor-gadget).

Traceloop&apos;s traces are recorded in a fast, in-memory, overwritable ring buffer, like a flight recorder. As opposed to &#8220;strace&#8221;, the tracing can be permanently enabled on systemd services or Kubernetes pods and inspected in case of a crash. This is like an always-on &#8220;strace in the past&#8221;.

Traceloop uses BPF through the gobpf library. Several new features have been added in gobpf for the needs of traceloop: support for overwritable ring buffers and swapping buffers when the userspace utility dumps the buffer.

https://github.com/kinvolk/traceloop
https://github.com/kinvolk/inspektor-gadget
https://github.com/iovisor/gobpf

Slides: https://docs.google.com/presentation/d/1zIZUrTrD7FkS9pHnWz87ZmoLTrO1g9-J_lDMD7E5kdo/edit</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/98A9LW/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/98A9LW/feedback/</feedback_url>
            </event>
            <event guid='5ca3e1c8-3349-50aa-ba63-f209fcaad3f7' id='146'>
                <room>Loft</room>
                <title>Rootless, Reproducible &amp; Hermetic: Secure Container Build Showdown</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T11:30:00+02:00</date>
                <start>11:30</start>
                <duration>00:40</duration>
                <abstract>How can we build hostile and untrusted code in containers? There are many options available, but not all of them are as safe as they claim to be...</abstract>
                <slug>ASG2019-146-rootless-reproducible-hermetic-secure-container-build-showdown</slug>
                <track></track>
                
                <persons>
                    <person id='31'>Andrew Martin</person>
                </persons>
                <language>en</language>
                <description>Rootless container image builds (as distinct from rootless container runtimes) have crept ever closer with orca-build, BuildKit, and img proving the concept. They are desperately needed: a build pipeline with an exposed Docker socket can be used by a malicious actor to escalate privilege - and is probably a backdoor into most Kubernetes-based CI build farms.

With a slew of new rootless tooling emerging, including Red Hat&#8217;s buildah, Google&#8217;s Kaniko, and Uber&#8217;s Makisu, will we see build systems that support building untrusted Dockerfiles? How are traditional build and packaging requirements like reproducibility and hermetic isolation being approached? In this talk we:
- Detail attacks on container image builds
- Compare the strengths and weaknesses of modern container build tooling
- Chart the history and future of container build projects
- Explore the safety of untrusted builds</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/PVYETJ/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/PVYETJ/feedback/</feedback_url>
            </event>
            <event guid='878e4754-c346-5b93-96d2-1ca8bf5109c3' id='164'>
                <room>Loft</room>
                <title>Reinventing Home Directories</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T12:15:00+02:00</date>
                <start>12:15</start>
                <duration>00:40</duration>
                <abstract>Let&apos;s bring the UNIX concept of Home Directories into the 21st century.</abstract>
                <slug>ASG2019-164-reinventing-home-directories</slug>
                <track></track>
                
                <persons>
                    <person id='78'>Lennart Poettering</person>
                </persons>
                <language>en</language>
                <description>The concept of home directories on Linux/UNIX has changed little in the last 39 years. It&apos;s time to take a closer look and bring them up to today&apos;s standards regarding encryption, storage, authentication, user records, and more.

In this talk we&apos;ll discuss &quot;systemd-homed&quot;, a new systemd component that reworks how we do home directories on Linux, adds strong encryption that makes sense, supports automatic enumeration and hot-plugged home directories, and more.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/VSQRXA/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/VSQRXA/feedback/</feedback_url>
            </event>
            <event guid='99754f29-ce19-55cf-bd1d-07332a6a794e' id='131'>
                <room>Loft</room>
                <title>How Microsoft SQL Server Went Multi-Platform: SQLPAL</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T14:05:00+02:00</date>
                <start>14:05</start>
                <duration>00:40</duration>
                <abstract>How did Microsoft make SQL Server available on Linux, containers, and ARM CPUs? Come hear the story from the SQL Server engineering team.</abstract>
                <slug>ASG2019-131-how-microsoft-sql-server-went-multi-platform-sqlpal</slug>
                <track></track>
                
                <persons>
                    <person id='94'>Argenis Fernandez</person><person id='114'>Brian Gianforcaro</person><person id='117'>Eugene Birukov</person>
                </persons>
                <language>en</language>
                <description>In this talk, we&apos;d love to tell the story of how we made SQL Server available to ecosystems outside of Windows. It&apos;s a great story that involves quite a few interesting technologies, and we&apos;d like to share it with everyone!</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/GTYJFV/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/GTYJFV/feedback/</feedback_url>
            </event>
            <event guid='21aca390-bc0c-5eef-a867-1b57a7ee36ab' id='133'>
                <room>Loft</room>
                <title>Resource control @ Facebook - 2019</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T14:50:00+02:00</date>
                <start>14:50</start>
                <duration>00:40</duration>
                <abstract>Resource control is reaching feature completeness, and the focus at Facebook is shifting towards productionizing. Let&apos;s go over what feature completeness means and the productionizing efforts.</abstract>
                <slug>ASG2019-133-resource-control-facebook-2019</slug>
                <track></track>
                
                <persons>
                    <person id='45'>Tejun Heo</person><person id='113'>Dan Schatzberg</person>
                </persons>
                <language>en</language>
                <description>Until recently, we never had all the kernel and system features needed to implement work-conserving, comprehensive resource control. With the recent additions of senpai, io.weight, cpu.headroom, and others, we now have all the pieces to implement protection, stacking, and side-loading.

Our focus at Facebook is gradually shifting towards productionizing resource control so that service owners can obtain high resource reliability and utilization without worrying about the details.

Let&apos;s go over how resource control features come together to form the basic resource profiles and how we&apos;re trying to productionize them.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/KEK3MD/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/KEK3MD/feedback/</feedback_url>
            </event>
            <event guid='259c3989-a0c3-57a0-8515-c10f29613830' id='120'>
                <room>Loft</room>
                <title>Container Live Migration</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T15:35:00+02:00</date>
                <start>15:35</start>
                <duration>00:25</duration>
                <abstract>The difficult task of checkpointing and restoring a process is used in many container runtimes to implement container live migration. This talk will give details on how CRIU is able to checkpoint and restore processes, how it is integrated in different container runtimes, and which optimizations CRIU offers to decrease the downtime during container migration.</abstract>
                <slug>ASG2019-120-container-live-migration</slug>
                <track></track>
                
                <persons>
                    <person id='83'>Adrian Reber</person>
                </persons>
                <language>en</language>
                <description>In this talk I want to provide details on how CRIU checkpoints and restores a process: starting with ptrace() to pause the process, how parasite code is injected to checkpoint the process from its own address space, how CRIU transforms itself into the restored process during restore, and how SELinux and seccomp state are restored.

I also want to give an overview of how CRIU uses userfaultfd for lazy migration and dirty page tracking for pre-copy migration.

I want to end this talk with an overview of how CRIU is integrated in different container runtimes to implement container live migration.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/E88Z7V/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/E88Z7V/feedback/</feedback_url>
            </event>
            <event guid='0ca00408-9684-58d3-a7c3-7d7b6cae6b75' id='151'>
                <room>Loft</room>
                <title>Revamping libcontainer&apos;s systemd driver</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T16:20:00+02:00</date>
                <start>16:20</start>
                <duration>00:25</duration>
                <abstract>In this talk, I&apos;ll go through my efforts to revamp libcontainer&apos;s systemd driver, in particular to support the unified cgroup hierarchy.</abstract>
                <slug>ASG2019-151-revamping-libcontainer-s-systemd-driver</slug>
                <track></track>
                
                <persons>
                    <person id='102'>Filipe Brandenburger</person>
                </persons>
                <language>en</language>
                <description>libcontainer is part of runc (opencontainers/runc in GitHub) and is used by the Docker and containerd ecosystem to spawn containers. This work is trying to bridge the gap between the Docker/containerd/Kubernetes ecosystem and cgroup2 through the unified hierarchy, using systemd as an authoritative container manager. I&apos;ll also touch on alternative approaches (such as crun and systemd-nspawn) and briefly talk about the OCI standard and the need for it to evolve to properly support cgroup2 semantics.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/YPU3HL/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/YPU3HL/feedback/</feedback_url>
            </event>
            <event guid='8617d047-766c-5837-9350-a35c6d29d7cb' id='144'>
                <room>Loft</room>
                <title>Custom cgroup-bpf programs in systemd</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T16:50:00+02:00</date>
                <start>16:50</start>
                <duration>00:25</duration>
                <abstract>The primary focus is to gather feedback from the systemd community regarding ongoing and future work to introduce custom cgroup-bpf programs to systemd.
The motivation is to give users the ability to attach their own cgroup-bpf programs to systemd containers.

This is a continuation of the &lt;a href=&quot;https://github.com/systemd/systemd/issues/10227&quot; title=&quot;discussion&quot;&gt;discussion&lt;/a&gt; started at ASG2018 and followed by &lt;a href=&quot;https://github.com/systemd/systemd/pull/12151&quot; title=&quot;PR12151&quot;&gt;PR 12151&lt;/a&gt; and &lt;a href=&quot;https://github.com/systemd/systemd/pull/12419&quot; title=&quot;PR12419&quot;&gt;PR 12419&lt;/a&gt;.</abstract>
                <slug>ASG2019-144-custom-cgroup-bpf-programs-in-systemd</slug>
                <track></track>
                
                <persons>
                    <person id='100'>Julia Kartseva</person>
                </persons>
                <language>en</language>
                <description>Currently systemd utilizes BPF macro-assembly, which is hard to extend and maintain, so the first iteration would be introducing the `libbpf` library to systemd. A first attempt was made, and it raised valid questions about `libbpf` testability and the dependencies it introduces. We&#8217;d like to address that.
Another topic of focus may be implementation details, such as how to store libbpf programs: either as bytecode or as restricted C which compiles with the rest of systemd.
For attendees with no context, a brief intro to eBPF will be given, including new initiatives which may be of use to systemd, e.g. &#8220;compile once, run everywhere&#8221;.
Since this is ongoing work, the agenda may vary depending on activity in the PRs.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/M8DVWG/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/M8DVWG/feedback/</feedback_url>
            </event>
            <event guid='faa5c5e3-6a1e-5c6d-bd90-eb270142ec6e' id='132'>
                <room>Loft</room>
                <title>Time-limited login sessions</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-20T17:20:00+02:00</date>
                <start>17:20</start>
                <duration>00:05</duration>
                <abstract>How Endless are implementing time-limited scopes in systemd, using that to implement time-limited login sessions, and then using that to implement parental controls on the desktop.</abstract>
                <slug>ASG2019-132-time-limited-login-sessions</slug>
                <track></track>
                
                <persons>
                    <person id='68'>Philip Withnall</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/8RB73U/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/8RB73U/feedback/</feedback_url>
            </event>
            <event guid='a738a107-7051-544d-98cf-b6c3adce4a3f' id='167'>
                <room>Loft</room>
                <title>Impact of zstd</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-20T17:25:00+02:00</date>
                <start>17:25</start>
                <duration>00:05</duration>
                <abstract>Zstandard (zstd) is a new lossless compression algorithm with a very attractive compression ratio and performance. In production environments it comes with some quantifiable benefits, but also with some surprising issues.</abstract>
                <slug>ASG2019-167-impact-of-zstd</slug>
                <track></track>
                
                <persons>
                    <person id='110'>Oskari Saarenmaa</person><person id='121'>Ville Tainio</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/DG3YDE/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/DG3YDE/feedback/</feedback_url>
            </event>
            <event guid='40aa2960-6288-5a2f-bf6f-268746f0ecdf' id='157'>
                <room>Loft</room>
                <title>Alternatives to standard utilities</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-20T17:30:00+02:00</date>
                <start>17:30</start>
                <duration>00:05</duration>
                <abstract>Several standard tools like `grep` and `find` have rewritten alternatives that perform the same tasks much faster and offer a more intuitive interface. This talk presents some of them.</abstract>
                <slug>ASG2019-157-alternatives-to-standard-utilities</slug>
                <track></track>
                
                <persons>
                    <person id='104'>Paul Menzel</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/JFC7VC/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/JFC7VC/feedback/</feedback_url>
            </event>
            <event guid='0caaa7aa-f6bd-55ba-8ccc-e698afa5c9df' id='149'>
                <room>Loft</room>
                <title>Using RPMs for systemd development</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-20T17:35:00+02:00</date>
                <start>17:35</start>
                <duration>00:05</duration>
                <abstract>Using RPMs can be very advantageous during development of systemd on Fedora. In order to make that viable, we need to build them from a git checkout and have the ability to use incremental builds.</abstract>
                <slug>ASG2019-149-using-rpms-for-systemd-development</slug>
                <track></track>
                
                <persons>
                    <person id='102'>Filipe Brandenburger</person>
                </persons>
                <language>en</language>
                <description>I will explore tooling I&apos;ve been using and building to use RPMs during systemd development. I&apos;ll quickly cover the motivation and advantages while I manage to build one during a lightning demo.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/JM7GDN/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/JM7GDN/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Cage' guid='2c5612db-90ce-5572-933d-872b3b59d536'>
            <event guid='9dbd53a6-97d6-55b6-8ef0-e339aae4ff20' id='119'>
                <room>Cage</room>
                <title>Atomic updates and configuration files in /etc</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T09:45:00+02:00</date>
                <start>09:45</start>
                <duration>00:40</duration>
                <abstract>Atomic updates and user-modified configuration files in /etc often lead to hard-to-resolve conflicts. In this talk, I want to show the most common and biggest problems and possible solutions.</abstract>
                <slug>ASG2019-119-atomic-updates-and-configuration-files-in-etc</slug>
                <track></track>
                
                <persons>
                    <person id='91'>Ignaz Forster</person>
                </persons>
                <language>en</language>
                <description>More and more Linux distributions use atomic updates to update the system. They all face the problem of updating the files in /etc, since an admin could make changes after the update but before the reboot that activates it. Each distribution has come up with its own solution, which solves its use case but is not generically usable. Additionally, there is systemd&apos;s &quot;Factory Reset&quot;, which no big distribution has fully implemented today. A uniform handling of /etc for atomic updates could also help convince upstream developers to add support to their applications; currently they hesitate to accept distribution-specific patches and support.

During this talk, I will describe the different problem areas and possible solutions. The goal is to provide a concept that works for all Linux distributions (like the FHS). My dream is that no package installs anything in /etc; it should only contain changes made by the system administrator or configuration files managed by the system administrator.

For some problems, it would already be enough today if Linux distributions adjusted the configuration of applications or used all of their features. Others require minimal to intrusive changes to packages, and for the last kind completely new concepts are necessary.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/KYTCJV/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/KYTCJV/feedback/</feedback_url>
            </event>
            <event guid='3349eaff-53ab-5345-afa9-b6ed4203ce6a' id='175'>
                <room>Cage</room>
                <title>Privacy-Respecting Linux Desktop Monitoring</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T10:30:00+02:00</date>
                <start>10:30</start>
                <duration>00:40</duration>
                <abstract>Whether to support users, ensure their security, or meet compliance goals, organizations need to deploy monitoring of their desktop machines. Yet, many approaches overreach by effectively being rootkits. In this presentation, we&apos;ll examine:

* What data a monitoring system needs to collect
* Where the data we need lives on a modern Linux desktop
* Which data sources expose sandbox-friendly API access
* Sandboxing the monitoring daemon itself</abstract>
                <slug>ASG2019-175-privacy-respecting-linux-desktop-monitoring</slug>
                <track></track>
                
                <persons>
                    <person id='12'>David Strauss</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/3ZKVWF/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/3ZKVWF/feedback/</feedback_url>
            </event>
            <event guid='8f67b425-117b-5575-839f-f6d9e3604f3a' id='117'>
                <room>Cage</room>
                <title>PostgreSQL at low level: stay curious!</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T11:30:00+02:00</date>
                <start>11:30</start>
                <duration>00:40</duration>
                <abstract>Have you ever encountered a transient performance issue that was hard to
investigate only from the database point of view? On top of how many layers of
abstraction is your database working? What is the difference between running
your database on bare metal, in a VM, or inside a container?

PostgreSQL does not work in a vacuum; it heavily relies on functionality
provided by the underlying platform. Sometimes, to answer the questions above,
one needs to step back and look at a problem not only from the database point
of view. In this talk we will discuss how to achieve that, and how to tame
tools such as strace, perf, and eBPF to troubleshoot intricate issues and stay
curious.</abstract>
                <slug>ASG2019-117-postgresql-at-low-level-stay-curious-</slug>
                <track></track>
                
                <persons>
                    <person id='80'>Dmitrii Dolgov</person>
                </persons>
                <language>en</language>
                <description>Have you ever encountered a transient performance issue that was hard to
investigate only from the database point of view? On top of how many layers of
abstraction is your database working? What is the difference between running
your database on bare metal, in a VM, or inside a container?

PostgreSQL does not work in a vacuum; it heavily relies on functionality
provided by the underlying platform. Sometimes, to answer the questions above,
one needs to step back and look at a problem not only from the database point
of view. In this talk we will discuss how to achieve that, and how to tame
tools such as strace, perf, and eBPF to troubleshoot intricate issues and stay
curious.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/AXPVZ3/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/AXPVZ3/feedback/</feedback_url>
            </event>
            <event guid='4fb36c1f-144d-54a4-b4de-873384e98c40' id='136'>
                <room>Cage</room>
                <title>Securing Bare Metal Micro Services: Service Mesh</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T12:15:00+02:00</date>
                <start>12:15</start>
                <duration>00:40</duration>
                <abstract>Learn how a Service Mesh can secure your bare-metal (non-virtualized) workloads quickly without any code modifications to improve your security posture.</abstract>
                <slug>ASG2019-136-securing-bare-metal-micro-services-service-mesh</slug>
                <track></track>
                
                <persons>
                    <person id='96'>John Studarus</person>
                </persons>
                <language>en</language>
                <description>Zero Trust is an information security mantra: do not implicitly trust any of the underlying infrastructure (hardware, network, software, etc.). For many organizations, this extends into the cloud, where the philosophy is applied to workloads running in public, virtualized clouds. We&apos;ll be taking this philosophy to protect an insecure application, the Fortune Cookie Micro Service, running atop a bare metal cloud, using a Service Mesh to provide authentication and encryption of data in motion without the complexities of virtualization or containerization. This walkthrough uses all open source software (Terraform for the deployment atop the Packet bare metal cloud and Consul for the service mesh) atop Ubuntu physical nodes.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/H3YZZM/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/H3YZZM/feedback/</feedback_url>
            </event>
            <event guid='bc369d82-ced8-535d-a824-32b2efdf9528' id='127'>
                <room>Cage</room>
                <title>GNU poke, an extensible editor for structured binary data</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T14:05:00+02:00</date>
                <start>14:05</start>
                <duration>00:40</duration>
                <abstract>GNU poke is a new interactive editor for binary data.  Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them.</abstract>
                <slug>ASG2019-127-gnu-poke-an-extensible-editor-for-structured-binary-data</slug>
                <track></track>
                
                <persons>
                    <person id='90'>Jose E. Marchesi</person>
                </persons>
                <language>en</language>
                <description>GNU poke is a new interactive editor for binary data.  Not limited to editing basic entities such as bits and bytes, it provides a full-fledged procedural, interactive programming language designed to describe data structures and to operate on them.  Once a user has defined a structure for binary data (usually matching some file format) she can search, inspect, create, shuffle and modify abstract entities such as ELF relocations, MP3 tags, DWARF expressions, partition table entries, and so on, with primitives resembling simple editing of bits and bytes.  The program comes with a library of already written descriptions (or &quot;pickles&quot; in poke parlance) for many binary formats.

GNU poke is useful in many domains.  It is very well suited to aid in the development of programs that operate on binary files, such as assemblers and linkers.  This was in fact the primary inspiration that brought me to write it: easily injecting flaws into ELF files in order to reproduce toolchain bugs.  Also, due to its flexibility, poke is also very useful for reverse engineering, where the real structure of the data being edited is discovered by experiment, interactively.  It is also good for the fast development of prototypes for programs like linkers, compressors or filters, and it provides a convenient foundation to write other utilities such as diff and patch tools for binary files.

This talk (unlike Gaul) is divided into four parts.  First I will introduce the program and show what it does: from simple bits/bytes editing to user-defined structures.  Then I will show some of the internals, and how poke is implemented.  The third block will cover the way of using Poke to describe user data, which is to say the art of writing &quot;pickles&quot;.  The presentation ends with the status of the project, a call for hackers, and a hint at future work.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/BKXVJQ/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/BKXVJQ/feedback/</feedback_url>
            </event>
            <event guid='1ea14c32-c58c-513c-ab1f-40ac082e985c' id='128'>
                <room>Cage</room>
                <title>Transactional Updates with Btrfs</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T14:50:00+02:00</date>
                <start>14:50</start>
                <duration>00:40</duration>
                <abstract>Transactional updates (also called atomic updates) are a way to update a system without interfering with the currently running system - making this a rock-solid way to update any machine, from embedded systems to cluster nodes.</abstract>
                <slug>ASG2019-128-transactional-updates-with-btrfs</slug>
                <track></track>
                
                <persons>
                    <person id='91'>Ignaz Forster</person>
                </persons>
                <language>en</language>
                <description>What do openSUSE MicroOS, Fedora CoreOS, Chrome OS, Ubuntu Core and Android have in common? All of them are using a *read-only root file system* and so-called *transactional / atomic updates* to update a system safely - without having to worry that a broken update could leave your system in some undefined state.

This talk will focus on how to use *btrfs*&apos; snapshot feature to implement such a transactional system and explain where the pitfalls of implementing such a system compared to a traditional read-write system are.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/SXENPK/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/SXENPK/feedback/</feedback_url>
            </event>
            <event guid='2d010334-8e60-5428-b831-b0bf7b3ba6af' id='161'>
                <room>Cage</room>
                <title>Microcontroller Firmware from Scratch</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T15:35:00+02:00</date>
                <start>15:35</start>
                <duration>00:25</duration>
                <abstract>Follow a journey of writing STM32 microcontroller firmware from scratch, using open-source tools.</abstract>
                <slug>ASG2019-161-microcontroller-firmware-from-scratch</slug>
                <track></track>
                
                <persons>
                    <person id='42'>Nikolai Kondrashov</person>
                </persons>
                <language>en</language>
                <description>Follow Nikolai Kondrashov&apos;s journey of learning to write firmware for an STM32 microcontroller (the Blue Pill one) from scratch, using only open-source tools: from blinking LEDs to controlling a toy car, without the complicated, license-restricted manufacturer&apos;s libraries or the comfortable crutches of the Arduino stack. Learn where to look for information, which tools you might need, and how to do it yourself with a similar or a different microcontroller.

See the slides at https://slides.com/spbnick/microcontroller-firmware-from-scratch/</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/JDCVYP/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/JDCVYP/feedback/</feedback_url>
            </event>
            <event guid='dcabb3af-fcac-5e83-a77d-5aa655cd95b3' id='156'>
                <room>Cage</room>
                <title>News from the coreboot land</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T16:20:00+02:00</date>
                <start>16:20</start>
                <duration>00:25</duration>
                <abstract>What happened in the coreboot based firmware world since last year? How to get started?</abstract>
                <slug>ASG2019-156-news-from-the-coreboot-land</slug>
                <track></track>
                
                <persons>
                    <person id='104'>Paul Menzel</person>
                </persons>
                <language>en</language>
                <description>By September, coreboot 4.10 will have been released and the Open Source Firmware Conference will have taken place. This talk takes the opportunity to present the latest news and changes in the coreboot based firmware world. AMD devices are available with coreboot, and after Google and Puri.sm, more vendors like System76 ship their devices with coreboot. While at it, it gives a quick introduction on how to get started.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/UUYNXW/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/UUYNXW/feedback/</feedback_url>
            </event>
            <event guid='b5ead4a4-e2f5-55d1-8ce1-ae27fbab329c' id='124'>
                <room>Cage</room>
                <title>Buildroot : Using embedded tools to build container images</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-20T16:50:00+02:00</date>
                <start>16:50</start>
                <duration>00:25</duration>
                <abstract>The embedded world has dealt with image creation for decades.
Why not use those decades of experience to reliably create images for the datacenter world?</abstract>
                <slug>ASG2019-124-buildroot-using-embedded-tools-to-build-container-images</slug>
                <track></track>
                
                <persons>
                    <person id='88'>J&#233;r&#233;my Rosen</person>
                </persons>
                <language>en</language>
                <description>Building an OS image in a reliable, reproducible, traceable and archivable way is a hard problem, but it is a problem that the embedded world has been working on for decades, and one for which mature and easy-to-use tools exist.

Nowadays, the world of containers is rediscovering these problems, and most tools do not provide the level of traceability and reliability needed to properly track the content of an image in every detail and be confident that it is possible to report which changes are local and which licenses are used.

Buildroot is one of the tools the embedded world provides to solve that problem. It is robust, mature, dead simple to use, and can really help regain control over container images.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/B7D7BC/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/B7D7BC/feedback/</feedback_url>
            </event>
            
        </room>
        
    </day>
    <day index='2' date='2019-09-21' start='2019-09-21T04:00:00+02:00' end='2019-09-22T03:59:00+02:00'>
        <room name='Loft' guid='f9590e89-4284-5247-b082-43683bed6db0'>
            <event guid='df4ceb70-2c63-538c-b581-e60adc89f261' id='145'>
                <room>Loft</room>
                <title>Distributing Freedesktop SDK applications to Flatpak, Snapd and Docker</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T09:30:00+02:00</date>
                <start>09:30</start>
                <duration>00:25</duration>
                <abstract>BuildStream is used to build Freedesktop SDK for different deployment systems allowing applications based on it to be distributed at once to multiple systems.</abstract>
                <slug>ASG2019-145-distributing-freedesktop-sdk-applications-to-flatpak-snapd-and-docker</slug>
                <track></track>
                
                <persons>
                    <person id='101'>Valentin David</person>
                </persons>
                <language>en</language>
                <description>Flatpak, Snapd and Docker are similar. They are all used for deployment and applications use their own runtime.

Each system has its own tools for development. Flatpak uses Flatpak Builder. Snapd uses Snapcraft. Docker development is based on `Dockerfile`s.

Freedesktop SDK was developed to be the runtime of Flatpak. It used to be partly built with Flatpak Builder. It has since changed to be built with a deployment system agnostic tool: BuildStream. For this reason we can export the Freedesktop SDK to multiple formats.

We will show how it is possible to build an application for the three systems at once.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/CF7FSX/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/CF7FSX/feedback/</feedback_url>
            </event>
            <event guid='dad4ba00-7141-5e7d-af7d-02f492f4b5e5' id='135'>
                <room>Loft</room>
                <title>oomd2 and beyond: a year of improvements</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T10:00:00+02:00</date>
                <start>10:00</start>
                <duration>00:25</duration>
                <abstract>oomd is a userspace out-of-memory killer. This talk covers past, present, and future development along with possible plans for systemd integration.</abstract>
                <slug>ASG2019-135-oomd2-and-beyond-a-year-of-improvements</slug>
                <track></track>
                
                <persons>
                    <person id='51'>Daniel Xu</person><person id='98'>Anita Zhang</person>
                </persons>
                <language>en</language>
                <description>Running out of memory on a host is a particularly nasty scenario. In the Linux kernel, if memory is being overcommitted, it results in the kernel out-of-memory (OOM) killer kicking in. Perhaps surprisingly, the kernel does not often handle this well. oomd builds on top of recent kernel development to effectively implement OOM killing in userspace. This results in a faster, more predictable, and more accurate handling of OOM scenarios.

oomd has gained a number of new features and interesting deployments in the last year. The most notable feature is a complete redesign of the control plane which enables arbitrary but &quot;gotcha&quot;-free configurations. In this talk, Daniel Xu will cover past, present, future, and path-not-taken development plans along with experiences gained from overseeing large deployments of oomd. Anita Zhang will close the talk with a discussion of why oomd would be a great addition to systemd.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/DQX3DH/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/DQX3DH/feedback/</feedback_url>
            </event>
            <event guid='d4f88c04-93ec-56ce-ae97-973a04b96fbd' id='143'>
                <room>Loft</room>
                <title>Building Portable Service Images with Buck</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T10:30:00+02:00</date>
                <start>10:30</start>
                <duration>00:25</duration>
                <abstract>Buck is an open-source build system.  At Facebook, we&#8217;ve taught it to build container images that work with systemd.</abstract>
                <slug>ASG2019-143-building-portable-service-images-with-buck</slug>
                <track></track>
                
                <persons>
                    <person id='52'>Lindsay Salisbury</person>
                </persons>
                <language>en</language>
                <description>At Facebook we use an open-source build system called Buck, designed to provide stronger guarantees of incremental builds, reproducibility, and dependency management.  Open-source Buck can now be used to construct fully described and fully self-contained container images that work with systemd! I will show how we use this tool internally at Facebook and how it can be used externally (it&#8217;s open-source!) to build service containers for use by systemd.  I will dive into the details of how these builds are performed with systemd-nspawn, how we use the Buck system to define the systemd services and their dependencies, and how these images work at runtime.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/K7E7T7/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/K7E7T7/feedback/</feedback_url>
            </event>
            <event guid='355ea6a0-d58b-5a47-a613-312ede6b1859' id='172'>
                <room>Loft</room>
                <title>pidfds: Process file descriptors on Linux</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T11:00:00+02:00</date>
                <start>11:00</start>
                <duration>00:40</duration>
                <abstract>Traditionally, processes are identified globally via process identifiers (PIDs). Due to how PID allocation works, the kernel is free to recycle PIDs once a process has been reaped. As such, PIDs do not allow another process to maintain a private, stable reference to a process. On systems under pressure it is thus possible that a PID is recycled without other (non-parent) processes being aware of it. This becomes rather problematic when (non-parent) processes are in charge of managing other processes, as is the case for system managers or userspace implementations of OOM killers.

Over the last months we have been working on solving these and other problems by introducing pidfds &#8211; process file descriptors. Among other nice properties, they allow callers to maintain a private, stable reference on a process.

In this talk we will look at the challenges we faced and the different approaches people pushed for. We will see what has already been implemented and pushed upstream, look into various implementation details, and outline what we have planned for the future.</abstract>
                <slug>ASG2019-172-pidfds-process-file-descriptors-on-linux</slug>
                <track></track>
                
                <persons>
                    <person id='92'>Christian Brauner</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/TPS8TS/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/TPS8TS/feedback/</feedback_url>
            </event>
            <event guid='b231881d-afaf-56f5-a57a-eaaab6668d79' id='121'>
                <room>Loft</room>
                <title>Squeezing Water from Stone - KornShell in 2019</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T11:55:00+02:00</date>
                <start>11:55</start>
                <duration>00:25</duration>
                <abstract>Despite its old age, ksh still remains one of the most popular shells. In 2013, David Korn and others who worked on ksh were laid off from AT&amp;T Bell Labs. This led to speculation about the death of ksh. In 2017, Siteshwar Vashisht and Kurtis Rader resumed its development on GitHub. This talk will cover what makes ksh so challenging to maintain and how new developers are trying to revive it.</abstract>
                <slug>ASG2019-121-squeezing-water-from-stone-kornshell-in-2019</slug>
                <track></track>
                
                <persons>
                    <person id='84'>Siteshwar Vashisht</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/CV9R3N/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/CV9R3N/feedback/</feedback_url>
            </event>
            <event guid='b9883475-56ad-5749-9c65-e178a1e6bbfb' id='123'>
                <room>Loft</room>
                <title>OCIv2: Container Images Considered Harmful</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T12:25:00+02:00</date>
                <start>12:25</start>
                <duration>00:40</duration>
                <abstract>Most modern container image formats use tar-based linear archives to represent root filesystems, which results in many issues when using modern container images. In this talk, we will demonstrate a solution to this problem that we plan to propose for standardisation within the Open Container Initiative (code-named &quot;OCIv2 images&quot;).</abstract>
                <slug>ASG2019-123-ociv2-container-images-considered-harmful</slug>
                <track></track>
                
                <persons>
                    <person id='86'>Aleksa Sarai</person>
                </persons>
                <language>en</language>
                <description>This talk is specific to the Open Container Initiative&apos;s image specification, but the same techniques could be applied to other systems (though we&apos;d obviously recommend using OCI). 

In order to avoid the [numerous issues with tar archives](https://www.cyphar.com/blog/post/ociv2-images-i-tar) it is necessary to come up with a different format. In addition, layer representations result in needless wasted space for storage of files which are no longer relevant to running containers. Massive amounts of duplication are also rampant within OCI images because tar archives are completely opaque to OCI&apos;s content-addressable store.

Luckily the problem of representing a container root filesystem for distribution is very similar to existing problems within backup systems, and we can take advantage of prior art such as [restic](https://restic.net/) to show us how we can get significant space-savings and possibly efficiency savings.

However, we also must ensure that the runtime cost of using this new system is equivalent to existing container images. Container images are efficient at runtime because they map directly to how overlay filesystems represent change-sets as layers, but with some tricks it is possible for us to obtain most of the improvements we also gained in distribution with de-duplication.

Our proposed solution to all of these problems will be laid out, with opportunities for feedback and discussion.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/VMTEPT/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/VMTEPT/feedback/</feedback_url>
            </event>
            <event guid='b0b3fa50-d9db-511a-9fb1-9edb96bd3401' id='142'>
                <room>Loft</room>
                <title>systemd @ Facebook in 2019</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T14:05:00+02:00</date>
                <start>14:05</start>
                <duration>00:25</duration>
                <abstract>We&apos;ll be covering happenings, learnings and new challenges running and supporting systemd in production on the Facebook fleet throughout the past year.</abstract>
                <slug>ASG2019-142-systemd-facebook-in-2019</slug>
                <track></track>
                
                <persons>
                    <person id='7'>Davide Cavalca</person>
                </persons>
                <language>en</language>
                <description>This talk is a followup to [State of systemd @ Facebook](https://cfp.all-systems-go.io/ASG2018/talk/192/) that was presented last year. We&apos;ll cover the latest developments, how we&apos;re leveraging new systemd features, the design of our CI/CD pipeline for systemd, and finally discuss a number of interesting case studies.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/983XHL/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/983XHL/feedback/</feedback_url>
            </event>
            <event guid='53354cbe-c92f-5c48-82e3-194690b28f0c' id='163'>
                <room>Loft</room>
                <title>Boot Loader Specification + sd-boot</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T14:35:00+02:00</date>
                <start>14:35</start>
                <duration>00:40</duration>
                <abstract>The boot loader specification defines a generic drop-in based solution for defining boot targets. sd-boot is a boot loader for UEFI systems, and included in the systemd source tree. In this talk we&#8217;ll have a closer look on the what, the why and the how of the specification and the boot loader.</abstract>
                <slug>ASG2019-163-boot-loader-specification-sd-boot</slug>
                <track></track>
                
                <persons>
                    <person id='78'>Lennart Poettering</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/HFJMLU/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/HFJMLU/feedback/</feedback_url>
            </event>
            <event guid='147efef1-ba80-5748-9655-fb5cd41f61f9' id='126'>
                <room>Loft</room>
                <title>eBPF support in the GNU Toolchain</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T15:20:00+02:00</date>
                <start>15:20</start>
                <duration>00:40</duration>
                <abstract>This talk covers the ongoing effort to add eBPF support to the GNU Toolchain.  eBPF is a virtual machine running within the Linux kernel; initially intended for user-level packet capture and filtering, eBPF has since been generalized to serve as general-purpose infrastructure for non-networking purposes as well.</abstract>
                <slug>ASG2019-126-ebpf-support-in-the-gnu-toolchain</slug>
                <track></track>
                
                <persons>
                    <person id='90'>Jose E. Marchesi</person>
                </persons>
                <language>en</language>
                <description>This talk covers the ongoing effort to add eBPF support to the GNU Toolchain.  eBPF is a virtual machine running within the Linux kernel; initially intended for user-level packet capture and filtering, eBPF has since been generalized to serve as general-purpose infrastructure for non-networking purposes as well.

Binutils support is already upstream [1].  This includes a CGEN cpu description, assembler, disassembler and linker.  By the time of the conference a simulator will be available as well, along with GDB support. A GCC backend will be submitted for inclusion upstream before September.

The first part of the talk will be a brief general description of the project, its components, what motivated us to start working on it, and an update on the project&apos;s status at the time of the conference.

Then we will discuss the particular challenges of supporting a target like eBPF:

On one hand, the kernel virtual machine has some unique characteristics that have a definitive impact on the tooling, like the in-kernel validator and the specialized contexts in which eBPF programs run.  We will show how the tools can help improve the eBPF programmer&apos;s experience.

On the other hand, the exact shape of compiled eBPF code is still subject to change, and is in fact rapidly changing and evolving.  Initially quite simple in terms of toolchain needs (single compilation units, no linking), this is changing as more kernel systems are being changed/written to be based on eBPF, and as the in-kernel validator is becoming more and more sophisticated.  Along with bigger and more complex programs comes the need for more abstraction, hence modularity and code reuse.  Kernel hackers are already discussing bpf-to-bpf calls, run-time linking, and so on. This increased level of ambition and sophistication imposes additional requirements on the tools.

Finally, interoperability with clang/llvm (the other available toolchain supporting eBPF) will also be discussed, in the more general context of ABI and conventions for compiled eBPF, which are still to be (well) defined and documented.

[1] https://sourceware.org/ml/binutils/2019-05/msg00306.html</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/MAYDS8/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/MAYDS8/feedback/</feedback_url>
            </event>
            <event guid='595b1468-e708-5711-9879-404a83be790f' id='125'>
                <room>Loft</room>
                <title>Linux distro should be an upstream contributor too</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T16:30:00+02:00</date>
                <start>16:30</start>
                <duration>00:40</duration>
                <abstract>Come and learn about packit: tooling which enables you to integrate your upstream project into Fedora Linux.</abstract>
                <slug>ASG2019-125-linux-distro-should-be-an-upstream-contributor-too</slug>
                <track></track>
                
                <persons>
                    <person id='112'>Martin Sehnoutka</person>
                </persons>
                <language>en</language>
                <description>Imagine a world where Linux distributions provide feedback about using your upstream project back to the project, so that when you are working on a change, you&apos;ll know right away:
* if it builds, or whether project Z changed its API again
* if it works, or whether your change doesn&apos;t work with the older systemd that this distro ships
* or if your change breaks components which depend on your project

That&apos;s not all! If we have a service which can do all of this, why not automatically propose a new upstream release as a change to the Linux distro once the release is done? Wouldn&apos;t it be awesome if upstream developers could control and track which version of their software is in Fedora 30?

Sounds interesting? Please join us in this session and learn more about the packit tool and the packit service: tooling which makes your dream come true.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/US8XA9/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/US8XA9/feedback/</feedback_url>
            </event>
            <event guid='fd85bb93-e67e-5ecc-ad41-8470024119fe' id='150'>
                <room>Loft</room>
                <title>The state of Thunderbolt on GNU/Linux</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T17:15:00+02:00</date>
                <start>17:15</start>
                <duration>00:25</duration>
                <abstract>A summary of the current state of Thunderbolt, kernel as well as user space, including the latest development where the input&#8211;output memory management unit (IOMMU) is used to prevent Direct Memory Access (DMA) attacks. A brief explanation and discussion of such attacks, in particular the recent Thunderclap attacks, will be given, with a focus on how they relate to the IOMMU feature on Linux.</abstract>
                <slug>ASG2019-150-the-state-of-thunderbolt-on-gnu-linux</slug>
                <track></track>
                
                <persons>
                    <person id='60'>Christian Kellner</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/HXLJNF/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/HXLJNF/feedback/</feedback_url>
            </event>
            <event guid='83df2d55-c0b0-502d-b2c7-330d3aa654e7' id='171'>
                <room>Loft</room>
                <title>Closing</title>
                <subtitle></subtitle>
                <type>Lightning talk</type>
                <date>2019-09-21T17:45:00+02:00</date>
                <start>17:45</start>
                <duration>00:15</duration>
                <abstract>Closing of All Systems Go! 2019</abstract>
                <slug>ASG2019-171-closing</slug>
                <track></track>
                
                <persons>
                    <person id='77'>Chris Kuehl</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/WB9TFT/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/WB9TFT/feedback/</feedback_url>
            </event>
            
        </room>
        <room name='Cage' guid='2c5612db-90ce-5572-933d-872b3b59d536'>
            <event guid='3c8f6eff-42ef-5c5d-abfd-00fb58b4d6e8' id='141'>
                <room>Cage</room>
                <title>Coinboot - Cost effective, diskless GPU clusters for blockchain hashing and beyond</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T09:30:00+02:00</date>
                <start>09:30</start>
                <duration>00:25</duration>
                <abstract>How to run GPU-computing clusters for blockchain hashing diskless on cost-effective commodity hardware.</abstract>
                <slug>ASG2019-141-coinboot-cost-effective-diskless-gpu-clusters-for-blockchain-hashing-and-beyond</slug>
                <track></track>
                
                <persons>
                    <person id='99'>Gunter Miegel</person>
                </persons>
                <language>en</language>
                <description>Running the nodes of a cluster diskless is quite common in HPC environments. The challenges of running diskless in the context of blockchain hashing for cryptocurrencies are different. There are constraints such as running efficiently on hundreds of machines with commodity 1 Gbit/s network hardware, or a modest RAM size of 4 gigabytes. This talk will provide insights into the technical approaches that made it possible to run GPU clusters for blockchain hashing diskless, and provide an outlook on other potential GPU-based use cases beyond blockchain hashing.
I will discuss how some early-userspace trickery and state-of-the-art RAM compression are used, how to handle the modest available RAM, and how a neat toolset based on container runtimes helps to easily build boot images and plug-in packages. And how plug-in packages serve as an elegant way to add further software, like proprietary GPU drivers, to the computing nodes of the clusters.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/XNU7NE/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/XNU7NE/feedback/</feedback_url>
            </event>
            <event guid='06a90eaf-78a4-52d6-b29d-ce47622c4955' id='148'>
                <room>Cage</room>
                <title>Development and testing with lrun</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T10:00:00+02:00</date>
                <start>10:00</start>
                <duration>00:25</duration>
                <abstract>During development and testing it is often necessary to test different kernels or run various sets of unit tests quickly. With lrun it is possible to do exactly that. It utilizes existing technology including UML, KVM and namespaces to facilitate different environments. It has been in active use for testing Bluetooth and Wi-Fi features on Linux and can easily be extended to other technologies in the future. This presentation will introduce lrun and its design. It will also show demos of its current use cases.</abstract>
                <slug>ASG2019-148-development-and-testing-with-lrun</slug>
                <track></track>
                
                <persons>
                    <person id='15'>Marcel Holtmann</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/N8YRKX/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/N8YRKX/feedback/</feedback_url>
            </event>
            <event guid='8a203dd4-c4d8-51af-b01e-8a199a515c16' id='138'>
                <room>Cage</room>
                <title>Trust is good, control is better - A (short) story about Network Policies</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T10:30:00+02:00</date>
                <start>10:30</start>
                <duration>00:40</duration>
                <abstract>Testing the effectiveness of Kubernetes Network Policies can be done with different approaches. In this talk we will show you the benefits and drawbacks of each approach and the solution we finally chose.</abstract>
                <slug>ASG2019-138-trust-is-good-control-is-better-a-short-story-about-network-policies</slug>
                <track></track>
                
                <persons>
                    <person id='97'>Maximilian Bischoff</person><person id='107'>Johannes Scheuermann</person>
                </persons>
                <language>en</language>
                <description>Probably everybody who uses Kubernetes in a production environment with multiple users has looked at policies. Often the operators of the cluster(s) just trust the policies, but in some cases it might be useful to check whether the policies have actually taken effect, and often there are just too many policies in the cluster setup to test them all manually (and obviously you don&#8217;t want to do this). Testing the effectiveness of Network Policies can be done with different approaches. In this talk we will show you the benefits and drawbacks of each approach and the solution we finally chose. We will also show you some other tools and how they complement our solution. As a takeaway you will get an overview of different testing strategies for policies, as well as an understanding of the challenges in testing policies in general and in the Kubernetes ecosystem. You will get a feeling that it&#8217;s not always the best idea to just trust other plugins to implement the policies correctly. Our solution is open-sourced at https://github.com/inovex/illuminatio/</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/QXMUUW/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/QXMUUW/feedback/</feedback_url>
            </event>
            <event guid='7bc76c4b-311d-55e4-b60e-1c837b15ed7b' id='147'>
                <room>Cage</room>
                <title>iwd - State of the union</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T11:55:00+02:00</date>
                <start>11:55</start>
                <duration>00:25</duration>
                <abstract>The open source wireless daemon iwd was introduced about 5 years ago and has seen active development since its inception. The last year has focused on behind-the-scenes work for new Wi-Fi standards that make connection setup faster, make roaming smoother and also introduce new security standards including WPA3. This presentation will demonstrate the new advances in Wi-Fi support for Linux and show how they improve the usage from within NetworkManager and other connection managers.</abstract>
                <slug>ASG2019-147-iwd-state-of-the-union</slug>
                <track></track>
                
                <persons>
                    <person id='15'>Marcel Holtmann</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/WBJNQQ/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/WBJNQQ/feedback/</feedback_url>
            </event>
            <event guid='9796777f-c3ca-5018-b6f5-b88e0fc24f6d' id='165'>
                <room>Cage</room>
                <title>BMC management with bmc-toolbox</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T12:25:00+02:00</date>
                <start>12:25</start>
                <duration>00:40</duration>
                <abstract>This talk is about bmc-toolbox, an open-source project that leverages the _Baseboard Management Controller_ (iLOs, iDRACs and similar) to help manage a large fleet (&gt;50K) of bare metal servers at Booking.com.

[bmc-toolbox.github.io](https://bmc-toolbox.github.io/)

Its goal is to provide vendor-agnostic tooling to manage the lifecycle of bare metal servers.
This talk describes the tools that are part of bmc-toolbox and various aspects of managing a large fleet of bare metal servers.</abstract>
                <slug>ASG2019-165-bmc-management-with-bmc-toolbox</slug>
                <track></track>
                <logo>/media/ASG2019/images/7WMKLH/bmc-toolbox.png</logo>
                <persons>
                    <person id='108'>Joel Rebello</person><person id='116'>Juliano Martinez</person>
                </persons>
                <language>en</language>
                <description>The bmc-toolbox leverages the _Baseboard Management Controller_ to help manage the lifecycle of datacenter bare metal.  It provides vendor-agnostic tools and a library in Go to *inventorize*, *configure*, *manage* and *update* a large fleet of bare metal assets with the help of the BMC.

- *bmclib* - A Go library that provides a consistent set of methods to interface with BMCs.
- *dora* - tool to **inventorize** a fleet of bare metal servers and chassis assets.
- *bmcbutler* - tool to handle **configuration management** for a fleet of bare metal server and chassis BMCs.
- *actor* - A single **API webservice** endpoint to interact with a fleet of bare metal BMCs.
- *bmcldap* - LDAP-based **authentication/authorization** service/proxy for BMCs.
- *bmcfwupd* - tool to **update** the BMC firmware.

This talk covers:
- The challenges of managing the provisioning and lifecycle of a *not yet hyperscale* set of bare metal servers.
- The purpose of the tools included in bmc-toolbox, and how they help make our lives easier.
- How the tooling interacts with the BMCs (vendor-specific APIs, Redfish).
- The current state of Redfish in the wild.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/7WMKLH/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/7WMKLH/feedback/</feedback_url>
            </event>
            <event guid='18b9979c-f66d-5009-b3e7-7d5184fd185e' id='140'>
                <room>Cage</room>
                <title>Generating seccomp profiles for containers using podman and eBPF</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T14:05:00+02:00</date>
                <start>14:05</start>
                <duration>00:25</duration>
                <abstract>Currently everyone uses the same seccomp rules for running their containers.  This tool allows us to generate seccomp rules based on what the container actually requires, and to lock the container down accordingly.</abstract>
                <slug>ASG2019-140-generating-seccomp-profiles-for-containers-using-podman-and-ebpf</slug>
                <track></track>
                
                <persons>
                    <person id='65'>Dan Walsh</person>
                </persons>
                <language>en</language>
                <description>We had a GSoC student this summer who instrumented podman to run containers and then generate the seccomp rules for the container based on the syscalls that the container actually made.

Once you have this newly generated seccomp file and are satisfied that you have thoroughly tested the container, you can run the container in production using the seccomp.json file.

This talk will explain how the tool works and demonstrate it in action.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/ACEWHG/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/ACEWHG/feedback/</feedback_url>
            </event>
            <event guid='04ca5cae-20ff-516b-a362-94c7b2dc5c6a' id='118'>
                <room>Cage</room>
                <title>Yomi - an openSUSE installer based on SaltStack</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T14:35:00+02:00</date>
                <start>14:35</start>
                <duration>00:40</duration>
                <abstract>We will present [Yomi](https://github.com/openSUSE/yomi), a new proposal for installing Linux using [SaltStack](https://github.com/saltstack/salt). This installer is designed to be used in heterogeneous clusters, where you need a bit of intelligence during the installation and where the installer needs to be integrated as one more step in the provisioning process.</abstract>
                <slug>ASG2019-118-yomi-an-opensuse-installer-based-on-saltstack</slug>
                <track></track>
                
                <persons>
                    <person id='81'>Alberto Planas Dominguez</person>
                </persons>
                <language>en</language>
                <description>[Yomi](https://github.com/openSUSE/yomi) is a new kind of installer for the [open]SUSE family based on SaltStack and independent of AutoYaST.

The goal of this project is to handle the installation of Linux (currently openSUSE) when:

* You have a cluster of heterogeneous nodes (different profiles of memory, storage, CPU and network configurations)
* The installation needs to be unattended
* The installer needs to make decisions based on local profiles and external data
* The installation process needs to be integrated, as one more step, into a more complicated provisioning workflow.

The dependencies of Yomi are minimal, as only Salt and a very few CLI tools are required, which makes it ideal to be deployed and booted from PXE.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/KDEYJZ/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/KDEYJZ/feedback/</feedback_url>
            </event>
            <event guid='895baa61-c621-50c3-8b3b-1e69333f76a2' id='155'>
                <room>Cage</room>
                <title>Purely Functional Package Management</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T15:20:00+02:00</date>
                <start>15:20</start>
                <duration>00:40</duration>
                <abstract>Ever experienced a broken system by simply upgrading packages? No more! This talk introduces the purely functional package manager Nix and the advancements all software distributions can benefit from - with some of those already implemented in mainstream package managers like snap.</abstract>
                <slug>ASG2019-155-purely-functional-package-management</slug>
                <track></track>
                <logo>/media/ASG2019/images/AD8VYE/nixos-logo-only-hires.png</logo>
                <persons>
                    <person id='103'>Franz Pletz</person>
                </persons>
                <language>en</language>
                
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/AD8VYE/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/AD8VYE/feedback/</feedback_url>
            </event>
            <event guid='090131ec-ac5c-5b60-85a0-6d080ea4054c' id='166'>
                <room>Cage</room>
                <title>Stateful systems on immutable infrastructure</title>
                <subtitle></subtitle>
                <type>35 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T16:30:00+02:00</date>
                <start>16:30</start>
                <duration>00:40</duration>
                <abstract>Lessons learned operating thousands of stateful production clusters on top of Fedora and systemd-nspawn.</abstract>
                <slug>ASG2019-166-stateful-systems-on-immutable-infrastructure</slug>
                <track></track>
                
                <persons>
                    <person id='109'>Hannu Valtonen</person>
                </persons>
                <language>en</language>
                <description>Aiven is a cloud data platform operating thousands of production clusters on top of different cloud infrastructure providers (e.g. AWS, GCP).  We offer the latest open source database &amp; streaming engines to our users around the world, and implement most of our platform using the latest open source software including Fedora and systemd-nspawn.

We wanted to base our platform on a fast moving Linux distribution like Fedora to gain quick access to new technology and avoid having to backport a lot of things.  Fast moving distributions are typically not supported for a long time, but implementing an immutable infrastructure where deployed machines are not touched afterwards makes it possible to use them in production.

In this talk we&#8217;ll share the details of our architecture and the lessons we&#8217;ve learned as well as problems we&#8217;ve faced over the years operating hundreds of thousands of virtual machines and containers with it on top of six different public clouds.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/RLCDFS/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/RLCDFS/feedback/</feedback_url>
            </event>
            <event guid='47ead120-c630-5091-bf6e-5b72c6f98ec8' id='134'>
                <room>Cage</room>
                <title>Senpai - Automatic memory sizing for containers</title>
                <subtitle></subtitle>
                <type>20 min talk + 5 min Q&amp;A</type>
                <date>2019-09-21T17:15:00+02:00</date>
                <start>17:15</start>
                <duration>00:25</duration>
                <abstract>Senpai is a userspace tool to auto-tune cgroup memory limits.</abstract>
                <slug>ASG2019-134-senpai-automatic-memory-sizing-for-containers</slug>
                <track></track>
                
                <persons>
                    <person id='95'>Johannes Weiner</person>
                </persons>
                <language>en</language>
                <description>Due to virtual memory and optimistic caching strategies, true memory consumption of an application, and true utilization of a system&apos;s RAM, are mostly unknowns on modern operating systems. This has always made memory provisioning a tough and error-prone trial-and-error task, but it&apos;s aggravated with containerization, where the stated goal is thinner margins and higher resource efficiency.

Senpai is a userspace tool that harnesses recently developed Linux kernel features to automatically shrink cgroups to their smallest possible memory size without notably affecting the performance of the contained applications.

This talk goes over the motivation to develop senpai, how it works, and success stories from the Facebook fleet.</description>
                <recording>
                    <license></license>
                    <optout>false</optout>
                </recording>
                <links></links>
                <attachments></attachments>

                <url>https://cfp.all-systems-go.io/ASG2019/talk/TCBLRG/</url>
                <feedback_url>https://cfp.all-systems-go.io/ASG2019/talk/TCBLRG/feedback/</feedback_url>
            </event>
            
        </room>
        
    </day>
    <day index='3' date='2019-09-22' start='2019-09-22T04:00:00+02:00' end='2019-09-23T03:59:00+02:00'>
        
    </day>
    
</schedule>
