FOSDEM 2018
2018-02-03 - 2018-02-04

Days:

Day 1 03.02.2018
Day 2 04.02.2018

Day 1 tracks:

Security and Encryption / Space / Keynotes [Room Janson]
Miscellaneous / Community [Room K.1.105 (La Fontaine)]
Lightning Talks [Room H.2215 (Ferrer)]
Decentralised Internet and Privacy [Room H.1301 (Cornil)]
HPC, Big Data, and Data Science [Room H.1302 (Depage)]
MySQL and Friends [Room H.1308 (Rolin)]
Real Time Communications [Room H.1309 (Van Rijn)]
Software Defined Storage [Room H.2213]
Rust [Room H.2214]
BOFs (Track A - in H.3227) [Room H.3227]
Software Defined Radio [Room AW1.120]
DNS / Retrocomputing [Room AW1.121]
Internet of Things [Room AW1.125]
Geospatial [Room AW1.126]
Distributions [Room K.3.201]
LLVM Toolchain [Room K.3.401]
Open Source Design [Room K.4.201]
Hardware Enablement [Room K.4.401]
Perl Programming Languages [Room K.4.601]
Config Management [Room UA2.114 (Baudoux)]
Legal and Policy Issues [Room UA2.220 (Guillissen)]
Certification [Room UB2.147]
Embedded, mobile and automotive [Room UB2.252A (Lameere)]
Tool the Docs / Source Code Analysis [Room UD2.119]
Containers [Room UD2.120 (Chavanne)]
Virtualization and IaaS [Room UD2.218A]
Keysigning [Room UD2.Corridor]
Global Diversity CFP Day [Room J1.106]
Global Diversity CFP Day [Room K.Level.2]

Day 2 tracks:

Keynotes / History / Performance [Room Janson]
Python / Miscellaneous [Room K.1.105 (La Fontaine)]
Lightning Talks [Room H.2215 (Ferrer)]
SDN and NFV [Room H.1301 (Cornil)]
PostgreSQL [Room H.1302 (Depage)]
Go [Room H.1308 (Rolin)]
Open Media [Room H.1309 (Van Rijn)]
Testing and automation [Room H.2213]
Graph Processing [Room H.2214]
BOFs (Track A - in H.3227) [Room H.3227]
Open Document Editors [Room AW1.120]
Debugging tools [Room AW1.121]
Ada [Room AW1.125]
Microkernels [Room AW1.126]
Package Management [Room K.3.201]
BSD [Room K.3.401]
CAD and Open Hardware [Room K.4.201]
Graphics [Room K.4.401]
Community devroom [Room K.4.601]
Mozilla [Room UA2.118 (Henriot)]
Certification [Room UB2.147]
Virtualization and IaaS [Room UB2.252A (Lameere)]
Identity and Access Management [Room UD2.119]
Monitoring and Cloud [Room UD2.120 (Chavanne)]
Free Java [Room UD2.208 (Decroly)]
Embedded, mobile and automotive [Room UD2.218A]
Global Diversity CFP Day / BOFs (Track C - in J1.106) [Room J1.106]

Sancus 2.0: Open-Source Trusted Computing for the IoT

Speaker: Jan Tobias Muehlberg
Room: Janson
Track: Security and Encryption
Time: 10:00 - 10:50

I will talk about Trusted Computing, what it can do, where the limitations are, and why we need trusted computing architectures to be open-source. Special emphasis will be on the Sancus architecture, which brings Trusted Computing to embedded domains such as the IoT or safety-critical control systems.

An important but often neglected safety aspect of our society's critical infrastructure involves the security of embedded software in contexts such as the Internet of Things, smart cities or the smart grid. In this talk I will present an approach to the security-conscious design of embedded control systems. Our work is based on Sancus, a lightweight Trusted Computing platform and Protected Module Architectures (PMA, think of Intel SGX, but for 16-bit MCUs) and guarantees authenticity, integrity and confidentiality properties of event-driven distributed embedded applications.



Relying on a hardware-only Trusted Computing Base (TCB), Sancus makes it possible to protect individual software modules of an application against attacks from other modules or even from a malicious or misbehaving operating system. Component isolation further leads to a reduction in the size of these systems' (security-)critical software stack, which allows us to analyse, test and even formally verify critical modules in isolation.



We have worked on a number of compelling use cases for this technology, focusing on smart grid infrastructure and on an AUTOSAR-compliant security framework for automotive bus systems. In these scenarios we observe a substantial reduction of the runtime software TCB (from 50 kLOC to less than 1 kLOC) while maintaining real-time responsiveness and adding protection against a wide range of network and software attacks.



Sancus is the only open-source Trusted Computing architecture currently available; hardware descriptions as well as infrastructure software are freely available. We believe that this is essential to allow for the independent validation of the underlying security primitives and to establish trust in the platform and in systems built on top of it.

Using TPM 2.0 As a Secure Keystore on your Laptop

Speaker: James Bottomley
Room: Janson
Track: Security and Encryption
Time: 11:00 - 11:50

For decades, all laptops have come with a TPM. Now, with Microsoft forcing the transition to the next generation, Linux faces a challenge: none of the previous TPM 1.2 tools work with 2.0. Having to create new tools for TPM 2.0 also provides the opportunity to integrate the TPM more closely into our current crypto systems and thus give Linux the advantage of TPM-resident, and therefore secure, private keys. This talk will present the current state of play in using TPM 2.0 in place of crypto sticks and USB keys for secure key handling, including the algorithm agility of TPM 2.0, which finally provides support for elliptic curve keys, which have recently become the default.

This talk will provide an overview of the current TSS (Trusted Computing Group Software Stack) implementation for TPM 2.0 on Linux, including a discussion of the two distinct Intel and IBM stacks with their relative strengths and weaknesses. We will then move on to the integration of the TSS into existing crypto system implementations, allowing TPM-resident keys to be used with common tools like openssl, gnutls, gpg, openssh and gnome-keyring. We will report on the current state of that integration, including demonstrations of how it works and future plans. The ultimate goal is to enable the seamless use of TPM-resident keys in all places where encrypted private keys are currently used, thus greatly increasing the security posture of a standard Linux desktop.

Data integrity protection with cryptsetup tools

Speaker: Milan Broz
Room: Janson
Track: Security and Encryption
Time: 12:00 - 12:50

The talk describes the architecture of data integrity protection with cryptsetup on Linux systems, and the steps needed to achieve encrypted, block-level authenticated storage.

Full disk encryption is a well-known way to achieve confidentiality of data. Unfortunately, it usually does not provide any integrity protection because of its length-preserving nature (the ciphertext is the same size as the plaintext, so there is no space for data integrity tags).
Since Linux kernel 4.12 and cryptsetup 2 we can configure new dm-integrity and dm-crypt devices that support data integrity protection over block devices (by emulating sector data integrity extensions over standard disks).
We will explain the architecture of such integrity-protected block devices (with the support of the new integritysetup tool), as well as the possibility of using cryptographically sound data integrity protection (authenticated encryption) in combination with disk encryption.
We will also briefly introduce the new LUKS2 on-disk format, which is designed to integrate these features easily into the existing Linux disk encryption toolset.
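The space problem can be made concrete with a short sketch. The toy construction below is an illustration written for this summary, not the scheme dm-crypt actually uses: a length-preserving XOR "cipher" fills a 512-byte sector exactly, while the HMAC tag that would provide integrity needs 32 extra bytes that a standard sector has nowhere to store.

```python
import hashlib
import hmac
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy keystream for illustration only; NOT a real cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_sector(key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes):
    # Length-preserving encryption: ciphertext is exactly as long as plaintext...
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    # ...but authenticating the sector produces a tag that needs extra space.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

sector = os.urandom(512)  # one disk sector of plaintext
key, mac_key, nonce = os.urandom(32), os.urandom(32), os.urandom(16)
ct, tag = encrypt_sector(key, mac_key, nonce, sector)
assert len(ct) == len(sector)  # ciphertext fits the sector exactly
assert len(tag) == 32          # the integrity tag does not fit anywhere
```

Providing that missing per-sector metadata space on standard disks is exactly what dm-integrity emulates, so that dm-crypt can store authentication tags alongside the data.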

Inside Monero

Speaker: Howard Chu
Room: Janson
Track: Security and Encryption
Time: 13:00 - 13:50

While many in the tech industry are familiar with Bitcoin and the many altcoins that have forked off its code base, fewer are aware of Monero. Monero evolved from CryptoNote, a code base completely independent of Bitcoin's that emphasizes privacy, as opposed to Bitcoin's transparent blockchain. Like Bitcoin, Monero is fully open source, and the project is driven entirely by volunteers. The Monero team has developed further innovations over the CryptoNote code base, making it the most private cryptocurrency ever, and the first in the world with true fungibility.

The talk includes a brief introduction to cryptocurrencies and their strengths and weaknesses. The main point of the talk is the additional features Monero has incorporated to preserve privacy and the consequences of using non-private cryptocoins.

Security Theatre

Speaker: Markus Feilner
Room: Janson
Track: Security and Encryption
Time: 14:00 - 14:50

Security is but a feeling. A feeling that the admin has when he's leaving at beer o'clock. And mostly, the biggest risks to the assets he has to protect are not of a technical nature.

This talk is full of examples, findings and revelations from twenty years of training, consulting, and investigative journalism. From passwords to Kerberos servers, from VPNs to the dark net and anonymity, to the hack of the German Bundestag and why it will happen again. Why Google is afraid, why modern hardware sucks, why most VPN services are not worth a cent. How to circumvent the Great Firewall of China. Eight years in journalism have given me lots of anecdotes to tell.



All these stories have one thing in common: the biggest security risk sits in front of the computer, at OSI layer 8, and pretty often you can achieve more by working on this layer. Almost always, the same amount of time and money is better invested here, but there are so many myths around. Oh, and did I mention why responsible disclosure is bullshit? And of course, management comes in. I will explain the terms "blameware" and "backdoor-friendly" (a retronym for proprietary).

SatNOGS: Crowd-sourced satellite operations

Speaker: Nikos Roussos
Room: Janson
Track: Space
Time: 15:00 - 15:50

An overview of the SatNOGS project, a network of satellite ground stations around the world, optimized for modularity and built from readily available and affordable tools and resources.

We love satellites! And there are thousands of them up there. SatNOGS provides a scalable and modular platform to communicate with them. Low Earth Orbit (LEO) satellites are our priority, and for a good reason: hundreds of interesting projects worth tracking and listening to are happening in LEO, and SatNOGS provides a robust platform for doing so. We support the VHF and UHF bands for reception with our default configuration, which is easily extendable for transmission and other bands too.



We designed and created a global management interface to facilitate multiple ground station operations remotely. An observer is able to take advantage of the full network of SatNOGS ground stations around the world.

The story of UPSat

Speaker: Pierros Papadeas
Room: Janson
Track: Space
Time: 16:00 - 16:50

During 2016, Libre Space Foundation, a non-profit organization developing open source technologies for space, designed, built and delivered UPSat, the first open source software and hardware satellite.

The presentation will cover the short history of Libre Space Foundation, our previous experience in upstream and midstream space projects, how we got involved in UPSat, the status of the project when we got involved, and the design, construction, verification, testing and delivery processes. We will also cover the current status and operations, contribution opportunities, and thoughts about the next open source projects in space. Throughout, we will focus on the challenges and struggles associated with open source and the space industry.

Exploiting modern microarchitectures

Speaker: Jon Masters
Room: Janson
Track: Keynotes
Time: 17:00 - 17:50

Recently disclosed vulnerabilities against modern high performance computer microarchitectures known as 'Meltdown' and 'Spectre' are among an emerging wave of hardware-focused attacks. These include cache side-channel exploits against underlying shared resources, which arise as a result of common industry-wide performance optimizations.



More broadly, attacks against hardware are entering a new phase of sophistication, and we will see more of them in the months ahead. This talk will describe several of these attacks, how they can be mitigated, and generally what we can do as an industry to gain performance without trading away security.

Closing FOSDEM 2018

Speaker: FOSDEM Staff
Room: Janson
Track: Keynotes
Time: 17:50 - 18:00

Some closing words. Don't miss it!

Cyborg Teams

Speaker: Stef Walter
Room: K.1.105 (La Fontaine)
Track: Miscellaneous
Time: 10:00 - 10:50

A purely human team does not scale past a certain complexity point. In the Cockpit project we’ve done something amazing: We’ve built a team that’s part human and part machine working on an Open Source project.

“Cockpituous”, our project’s #5 commit contributor, is actually our automated team member.



Bots do the mundane tasks that would otherwise use up the time of human contributors. During the talk you can see them self-organizing, doing continuous integration, finding issues, contributing code changes, making decisions, releasing the software into Linux distros and containers. They work in a completely distributed, organic way, and run in containers on Kubernetes.



We’ll talk about how humans are training the bots, and how bots are using machine learning to learn from the humans. The project and its pace would be unthinkable otherwise.



Treating the bots as team members is fundamental to achieving this. I’m excited to show you how to pull that off.

Running Android on the Mainline Graphics Stack

Speaker: Robert Foss
Room: K.1.105 (La Fontaine)
Track: Miscellaneous
Time: 11:00 - 11:50

Finally, it is possible to run Android on top of mainline graphics! The recent addition of DRM atomic modesetting and explicit synchronization to the kernel paved the way, although some changes to the Android userspace were necessary.



The Android graphics stack is built on an abstraction layer, so drm_hwcomposer, a component that connects this abstraction layer to the mainline DRM API, was created. Moreover, changes to Mesa and the abstraction layer itself were also needed for a full conversion to mainline.



This talk will cover recent developments in the area which enabled Qualcomm, i.MX and Intel based platforms to run Android using the mainline graphics stack.

Re-structuring a giant, ancient code-base for new platforms

Speaker: Michael Meeks
Room: K.1.105 (La Fontaine)
Track: Miscellaneous
Time: 12:00 - 12:50

Come and hear how LibreOffice is meeting the challenges of new platforms, both hardware and software, adapting to new constraints and shining.

LibreOffice has faced a lot of challenges recently, from hardware providing ever more threading resources to the need to provide an online version in the browser. These challenges have prompted some interesting solutions.



Come and hear the story of how we're bringing the code-base up-to-date, from providing a ~constant-time threaded XML parser, to parallelising the guts of Calc's calculation engine.



Catch the latest in Online innovation, optimization and scalability work as well as its growing integration with lots of other Open Source projects. Hear how we're starting to bring dialogs both to your browser, and to your native Linux desktop toolkit.



Finally, catch up with the latest and greatest feature and function improvements coming in LibreOffice 6.0, our next major release, timed to coincide with FOSDEM, and find out how you can best get involved with LibreOffice.

OpenADx – xcelerate your Automated Driving development

Speaker: Lars Geyer-Blaumeiser
Room: K.1.105 (La Fontaine)
Track: Community
Time: 13:00 - 13:50

The Bosch Automated Driving division faces, among many challenges, the fact that the toolchain for developing automated driving solutions becomes more complex with every level of automation. Well-established tools are not well integrated for the required use cases, and it is easy to find gaps in the overall toolchain for dedicated tasks. Instead of solving these challenges alone, wasting lots of money along the way, the Bosch division, together with partners from the industry and from other domains, is building up a community to solve the toolchain challenge together in the OpenADx initiative. The partners expect substantial savings on toolchain costs, but also better integration within organizations, and especially at the interfaces between cooperating organizations. This talk will present the approach and the current state of the community.

Automated driving solutions introduce a new complexity into the development of embedded systems in a car. This complexity rises with each level of control and autonomy of the automated driving system. New tool categories, such as machine learning, have to be added, while existing technologies, like simulation, are stretched to their current limits. For example, validating a fully automated driving solution is expected to require test drives amounting to millions of kilometers. This results in the need for very complex simulation as part of the validation, as well as the handling of extensive amounts of data, in order to ensure quality at a realizable effort.



The toolchain for such challenges is complex, and the integration of all the tools coming from different domains costs a lot of effort without providing a real competitive advantage for the automated driving solution. Therefore, the Bosch Automated Driving division, together with Microsoft, is currently building up an ecosystem of companies within the industry, including OEMs, tier-1 suppliers, tool vendors and research organisations, but also partners from other industries such as the IT industry. The initiative, called OpenADx, is supported by the Eclipse Foundation as a host for the activities.



The goal of this endeavor is to stop wasting money on the introduction of a proprietary toolchain in each of the companies and to share the development costs for the toolchain with partners from the industry. Besides the benefit of sharing the costs, the expected result is a better integration of the toolchain within the organizations, but especially also at the interface between cooperating organizations. For tool vendors and research organizations, the advantage of the approach is in the existence of an integration backbone which allows the provider to easily integrate new technology or tools into a working environment that runs in a multitude of customer organizations, instead of providing proprietary solutions for single customers.



The goal of the initiative is not to replace existing tools. There are many tools and technologies, be they commercial or open source, that solve perfectly their job and should do so in the future. The goal is to define development workflows and to support the integration of tools along those workflows as well as to fill gaps identified and not solved by existing tooling. This integration glue will be provided as open source software under the umbrella of the Eclipse Foundation.



The current state of the initiative is that it is searching for interested parties throughout the world. The goal is to make this an industry effort, not driven by single partners, that is big enough to propose and realize an industry standard for automated driving solution development. To get the ecosystem running with only a limited commitment necessary, the initiative starts small. Through a series of so-called Hackfests, i.e., 5 to 10 day Hackathon events, interested parties get the chance to gain first-hand experience with the collaborative approach. Additionally, the members establish common ground on the available technologies and development approaches in order to identify shared integration goals, which form the basis for the first open source projects spawned from the ecosystem. Based on the experiences of these Hackfests, the members of the initiative will furthermore decide on topics such as how to go public officially and how the initiative will be structured.



At the time of the talk, the first Hackfests have taken place in which we will have provided prototypical solutions for the area of closed-loop simulation of automated driving functions with a perfect perception and the handling of massive amounts of data in the toolchain. The results of these Hackfests will be shown in the presentation as well as an outlook on the further activities.

Why I forked my own project and my own company

Speaker: Frank Karlitschek
Room: K.1.105 (La Fontaine)
Track: Community
Time: 14:00 - 14:50

This talk will describe the reasons why ownCloud was founded as an open source project, the good and bad things that happened when it was turned into a venture-capital-funded company, what Frank and the core team do differently with Nextcloud, and how the business model, licensing and community relations improved.

Frank Karlitschek founded the ownCloud open source project in 2010 and co-founded a company, ownCloud Inc., in late 2012. After being the maintainer for over six years and CTO of ownCloud Inc. for over four years, Frank decided to start over, leaving his own project and company to create a fork called Nextcloud. This talk will describe the reasons why ownCloud was founded as an open source project, the good and bad of turning it into a venture-capital-backed company, what Frank and the core team want to do differently with Nextcloud, and how the business model, licensing and community relations improved. It also offers insights into different open source business models and how to create a win-win situation for a company and a community.

Sustainability of Open Source in International Development

Speaker: Michael Downey
Room: K.1.105 (La Fontaine)
Track: Community
Time: 15:00 - 15:50

Today’s global climate of international development funding cuts, along with growing challenges in sustainability of FOSS projects generally, means we need to focus on co-investment in shared resources for those projects -- the mission of the DIAL Open Source Center.



Duplication of effort, flawed funding models, and an overall lack of project maturity have led to the failure of most free & open source software projects in the international development space. In this talk, we'll discuss the new Open Source Center at the Digital Impact Alliance (DIAL), an initiative of the United Nations Foundation. The Center's aim is to help increase those projects' maturity, quality, and reach -- with a goal of advancing an inclusive digital society using FOSS for the poorest places on the planet.

Over the past decade, the international development community has been exploring how the use of modern technology — including tools like the mobile phone, the Internet, as well as free & open source software — can extend the reach of its work. At the same time, these same organizations have struggled to leverage FOSS in an effort to make their work more participatory, sustainable, and effective.



Mainstream software often used in wealthier markets does not always fully meet the specialized needs of international development projects and the areas in which those projects are undertaken. Other fields have demonstrated that the FOSS development model is a viable way to leverage global collaboration to share costs across institutions, increase the quality of products, and innovate more rapidly.



In the international development community, the results have been mixed. Relatively few of these FOSS projects have endured & matured — when successful, enabling improved and sustained access to information and services that previously were out of reach for marginalized populations. Many efforts have failed, often due to preventable reasons.



These FOSS digital development projects usually struggle with a lack of long-term investment in key focal areas such as community effectiveness and product development. Without emphasis on these key considerations, they can rarely match the functionality and quality of their proprietary competitors. As a result, these FOSS projects can't grow to the more advanced levels of maturity needed for widespread adoption throughout the development field.



In collaboration with partners around the world, the Digital Impact Alliance (DIAL) at the United Nations Foundation is launching its Open Source Center, a multilateral participatory program designed to be a global focal point of FOSS digital development projects. In this talk, you’ll learn:




  1. Why we believe this type of program is key to ensuring the maturity & sustainability necessary for long-term success of digital development FOSS projects,

  2. What we believe are the 4 key pillars to mature tech-for-development FOSS projects, and

  3. The five areas of services which will be provided to participants, and our thoughts on translating our approach to other sectors.




We’ll also discuss our systems for governance, evaluation of participating projects, and financial sustainability strategies for the program.



If you’re interested in cross-project collaboration to advance the impact of free & open source software in a specific area of work, come learn about our plans to transform the international development FOSS space, and share your feedback!

AMENDMENT Community & Business

Speaker: Michael Kromer
Room: K.1.105 (La Fontaine)
Track: Community
Time: 16:00 - 16:30

While open source software thrives with a community, a community also thrives from business. Open source is simply a way of doing things, providing transparency and other key factors such as the ability to modify its functionality. This talk provides insight for communities and how an open source project can really develop itself into a profitable project (not just fiscally).

Open source projects do well rooting their funding in corporations which deliver enterprise-level support and more. Most meaningful software projects in the open source world have a company behind them, delivering true value back to the project itself.
Using the example of Kopano and its community, a great community paired with a true open source company, this talk highlights ways of interacting with the community and how to provide real value to the project you are involved in.



Please note that this talk replaces "Love What You Do, Everyday!" by Zaheda Bhorat who has fallen ill. We wish her a speedy recovery.

AMENDMENT So you think you can validate email addresses

Speaker: Stavros Korokithakis
Room: K.1.105 (La Fontaine)
Track: Miscellaneous
Time: 16:30 - 16:45

Too often, developers think they know the right way to validate email addresses, which often leads to bad UX and frustrated users whose legitimate addresses are not accepted. This presentation will show you your true level of email validation skill via a simple and fun quiz, whose accuracy often approaches that of Cosmo quizzes.

You think you know how to validate email addresses? You don't.
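As a taste of the problem, here is a hypothetical but representative naive validator of the kind the talk pokes fun at (my example, not one from the talk); every address below is perfectly legitimate, and the pattern rejects all of them:

```python
import re

# A deliberately naive pattern of the sort often copy-pasted into signup forms.
NAIVE = re.compile(r"^\w+@\w+\.\w+$")

legitimate = [
    "user+tag@example.com",    # '+' sub-addressing is valid and widely used
    "first.last@example.com",  # dots are allowed in the local part
    "o'brien@example.ie",      # so are apostrophes
]

rejected = [addr for addr in legitimate if not NAIVE.match(addr)]
assert rejected == legitimate  # the naive regex turns away every one of them
```

The pragmatic alternative is usually far simpler: check for a single "@" with something on both sides, then confirm deliverability by actually sending mail to the address.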



Please note that this talk replaces "Love What You Do, Everyday!" by Zaheda Bhorat who has fallen ill. We wish her a speedy recovery.

Let's Fix The Internet

Speaker: Martin Bähr
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 10:00 - 10:15

The Internet today is plagued by many problems, from viruses and spam to identity theft and piracy.



We can solve those problems.



With a virtual operating system that runs the cloud, using blockchains to secure identities and data, a virtual network layer to protect against unauthorized network access, and a virtual machine to sandbox untrusted code.



This talk will describe Elastos, an Operating System for the smart web.



It will explore the approach that Elastos takes to achieve these goals, and gives a vision of a possible future internet.

Elastos is an ongoing open source OS project, which facilitates the new generation of universal apps running anywhere, such as in AR/VR headsets, IoT gateways, game consoles, phones, PCs, TVs, and cloud servers (see Windows 10 UWP). Programmers can use any of three kinds of languages to develop applications: C/C++, Java and HTML/JS. Elastos is different from an Android-like OS in at least four aspects:




  1. Elastos has a complete set of novel C/C++ APIs and frameworks, which correspond to the Java APIs and frameworks of Android. With better performance and a smaller footprint, Elastos is a better fit for embedded systems and machines with wireless peripherals. Elastos also supports almost all Android Java and JS APIs and frameworks. POSIX APIs are deprecated.


  2. Elastos has a distributed OS runtime to guarantee end-to-end security and integrity across the Internet. With built-in metadata-driven reflection technology, Elastos can automatically generate code to bridge programming modules across languages and machine boundaries. In other words, applications, services and IoT devices are prohibited from sending/receiving network packets directly, in order to fence off network attacks initiated from third party software and hardware.


  3. Elastos runtime has a pioneering, service-oriented architecture, designed ideally for containers/virtual-machines. An Elastos runtime can be thought of as a CppVM (vs. JavaVM) without a leaking bottom, i.e., there are no Java-Native-Interface (JNI) equivalent mechanisms to expose the underlying physical machine or host OS. This prevents the possibility of malicious code penetrating into the system layer.


  4. Elastos is decentralized across the Internet, and utilizes blockchains to authenticate user IDs, application IDs, as well as machine IDs. To build a flourishing ecosystem, anybody may freely implement their own markets, social apps, search engines, location-based services, advertisement agents, and so on, while being rewarded with Elastos coins.



Regular Expression Derivatives in Python

Speaker: Michael Paddon
Room: H.2215 (Ferrer)
Track: Lightning Talks
Time: 10:20 - 10:35

Regular expressions are commonly implemented using a backtracking algorithm, or by converting them into a nondeterministic finite automaton using Thompson's construction. An alternate approach is to use Brzozowski derivatives to directly generate deterministic finite automata. This talk describes an implementation of Brzozowski derivatives in Python, based on a paper by Owens, Reppy and Turon. This implementation is used for generating efficient Unicode lexers.

Owens, Reppy and Turon [1] describe how Brzozowski derivatives may be used to convert an extended regular expression into a near-optimal deterministic finite automaton. They observe that "RE derivatives have been lost in the sands of time, and few computer scientists are aware of them". Despite that, the technique is simple, elegant and straightforward to implement. The author has used Brzozowski derivatives to build a lexer generator in Python that supports Unicode.



The structure of the talk is:




  1. Overview of partial derivatives of regular expressions.

  2. How to deal with large character sets.

  3. Key design decisions for a Python implementation.

  4. Quick tour of the Python code.

  5. Real world results.




[1] Owens, S., Reppy, J. and Turon, A., 2009. Regular-expression derivatives re-examined. Journal of Functional Programming, 19(2), pp.173-190.

Adding performance counters to htop

Speaker Hisham Muhammad
Room H.2215 (Ferrer)
Track Lightning Talks
Time 10:40 - 10:55

Typical userspace process monitoring tools usually show some general metrics like CPU% usage, Memory%, CPU time and so on, which have been around for a long time. In this lightning talk, I will discuss some other performance measurements available in modern systems, like hardware performance counters, and talk about their inclusion in htop, in hopes that these powerful metrics will reach a wider audience.

htop is an interactive process viewer for Unix systems. Typical userspace process monitoring tools, either textual or graphical, usually show some general metrics like CPU% usage, Memory totals and percentages, CPU time and so on. These are essentially the same set of metrics which have been around for a long time, since the days of the original Unix top.



However, newer process metrics have become available over the years, usually "hidden" in more advanced tools such as systemtap and perf-events.



In this lightning talk, I will discuss some of these performance measurements available in modern systems, such as Hardware Performance Counters, and talk about their inclusion in htop, in hopes that these powerful metrics will reach a wider audience.

Emitter: Scalable, fast and secure pub/sub in Go

Speaker Tom Marechal
Room H.2215 (Ferrer)
Track Lightning Talks
Time 11:00 - 11:15

Emitter.io is a real-time messaging service for connecting online devices. It is a scalable, fast and secure pub/sub in Go.

Emitter.io is a real-time messaging service for connecting online devices.
- Scalable: built to handle millions of messages per second and to scale horizontally.
- Fast: designed to ensure reliable, speed-of-light message delivery and high throughput.
- Secure: supports TLS encryption, binary messages, expirable channel keys and permissions.
- Open source: source code is available on GitHub and packaged as a Docker container.
- Persistent: messages can be stored for a period of time and sent to subscribers on demand.
- No more limits: uses the standard MQTT protocol and supports message filtering.
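As a toy illustration of the expirable channel-key model listed above (this is not Emitter's actual API; every name here is hypothetical), a broker that checks a key's channel and expiry before allowing subscribe/publish might look like this:

```python
import time

class Broker:
    """Toy pub/sub broker with per-channel keys that can expire."""

    def __init__(self):
        self.keys = {}    # key -> (channel, expiry timestamp or None)
        self.subs = {}    # channel -> list of subscriber callbacks

    def grant(self, key, channel, ttl=None):
        """Issue a key for one channel; ttl is in seconds, None means no expiry."""
        self.keys[key] = (channel, time.time() + ttl if ttl is not None else None)

    def _allowed(self, key, channel):
        entry = self.keys.get(key)
        return (entry is not None and entry[0] == channel
                and (entry[1] is None or entry[1] > time.time()))

    def subscribe(self, key, channel, callback):
        if self._allowed(key, channel):
            self.subs.setdefault(channel, []).append(callback)

    def publish(self, key, channel, message):
        if self._allowed(key, channel):
            for callback in self.subs.get(channel, []):
                callback(message)

broker = Broker()
broker.grant("k1", "sensors", ttl=60)
received = []
broker.subscribe("k1", "sensors", received.append)
broker.publish("k1", "sensors", "21.5")
broker.publish("k1", "other", "nope")   # key is not valid for this channel
```

A real broker would of course enforce this per MQTT connection and distinguish read/write permissions per key.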

Linux Test Project introduction

Speaker Cyril Hrubis
Room H.2215 (Ferrer)
Track Lightning Talks
Time 11:20 - 11:35

An introduction to the Linux Test Project, a large kernel and libc test suite, from the upstream maintainer. I will briefly sum up where we came from, what the current status is, and where we are heading in the future.

The Linux Test Project (LTP) is a large collection of tests related mostly to the Linux kernel and libc. Historically the tests varied a lot in quality, and because of that LTP earned a somewhat bad reputation. However, the quality of the code and of the project as a whole has improved a lot in recent years, which is something I would like to present to a wider audience, as well as outlining the direction we are heading in the future.

LizardFS and OpenNebula, a petabyte cloud the simple way

Speaker Michal Bielicki
Room H.2215 (Ferrer)
Track Lightning Talks
Time 11:40 - 11:55

A demonstration of the new OpenNebula connector to LizardFS on a PB-scale filesystem.

Nodeweaver and LizardFS would like to release our common new OpenNebula connector at FOSDEM 2018. To honour the occasion, we will demonstrate an OpenNebula system running on a 1.4 PB LizardFS, fully integrated and ready for use. The connector will then be released to the community. The demo will run on a live system and show how easy it is to run your own cloud at the petabyte scale with OpenNebula and LizardFS.

GrimoireLab: free software for software development analytics

Speaker Jesus M. Gonzalez-Barahona
Room H.2215 (Ferrer)
Track Lightning Talks
Time 12:00 - 12:15

The talk will explain how to analyze software development with GrimoireLab. It will show with simple code how easy it is to retrieve data from git, Bugzilla, GitHub, mailing lists, StackOverflow, Gerrit, IRC, Slack, and many other repositories. Then, with the same toolkit, the data will be organized in ElasticSearch indexes, visualized in actionable dashboards, and summarized in reports. Some advanced analysis will also be presented on how to exploit the data using Python/Pandas and IPython/Jupyter Notebooks. The talk will be complemented with interesting insights on real FOSS projects.

Many free / open source software (FOSS) projects feature an open development model, with public software development repositories which anyone can browse. These repositories are normally used to find specific information, such as a certain commit or a particular bug report. But they can also be mined to extract all relevant data, so that it can be analyzed to learn about any aspect of the project. This talk will explain the GrimoireLab method for doing that, which is based on organizing all that information in a database which can later be analyzed. This approach has minimal impact on the project infrastructure, since data is retrieved only once, even if it is later analyzed many times. It also makes mining data for an analysis efficient and comfortable: the results are readily available, databases can be shared and replicated at will, and querying them with any kind of tool is easy.



The tools that retrieve information from the repositories are grouped in the GrimoireLab toolset. It includes mature, widely tested programs capable of extracting information from most repositories used by FOSS projects of any scale. Many of them are agnostic with respect to the database used, although currently ElasticSearch is the best supported.



The produced databases can be exploited in several ways, of which two will be explained during the talk: using Python/Pandas to produce IPython/Jupyter Notebooks which analyze some aspect of the project; and using Python to feed an ElasticSearch cluster, with a Kibana front-end for visualization in a flexible, powerful dashboard.



All these approaches can be used to understand general aspects of the project, such as how efficient the code review or bug fixing processes are, how diverse the contributions to the git repository are, or how conversations in mailing lists or StackOverflow are shaped. But they can be used as well to drill down and analyze the contributions by a certain developer, the longer code review processes, or the contents of the most lively email and QA threads.



The talk will explain the whole process from data retrieval to visualization, and will show some specific cases of real world use, such as the dashboards produced for Eclipse, OPNFV, MediaWiki and many others. Some of the contents of the talk are described in detail in the online book GrimoireLab Training.



GrimoireLab is one of the systems produced by the CHAOSS Collaborative Project.




Perceval: Software Project Data at Your Will

Speaker Valerio Cosentino
Room H.2215 (Ferrer)
Track Lightning Talks
Time 12:20 - 12:35

Software development projects, in particular Open Source ones, rely heavily on the use of telematic tools to support, coordinate and promote development activities. Despite its paramount value, project data is scattered across the Internet, making it difficult to retrieve, collect, clean, link and analyze, and challenging the achievement of insightful analytics for both practitioners and researchers. This talk presents Perceval, a tool able to perform automatic and incremental data gathering from almost any tool related to contributing to Open Source development (e.g., source code management, issue tracking systems, mailing lists, forums). It hides the technical complexities related to data acquisition and eases the definition of analytics. Perceval is an industry-strength free software tool that has been widely used at Bitergia, a company devoted to offering software analytics of open source software projects.

The rise of the Internet has radically changed how software is developed. Over the years, platforms like GitHub, StackOverflow and Slack have become important tools to support, coordinate and promote the daily activities around software. This is especially true for Open Source projects, which rely heavily on distributed and collaborative development.



Beyond being successfully and increasingly adopted by both end-users and development teams, these telematic tools offer relevant data sources, which can be exploited by practitioners and researchers to describe, predict, and improve specific aspects of software projects.



However, accessing and gathering this data is often a time-consuming and error-prone task that entails many considerations and much expertise. It may require understanding how to obtain an OAuth token (e.g., StackExchange, GitHub) or preparing storage to download the data (e.g., Git repositories, mailing list archives). When dealing with development support tools that expose their data via APIs, special attention has to be paid to the terms of service (e.g., an excessive number of requests could lead to temporary or permanent bans). Recovery solutions to tackle connection problems when fetching remote data should also be taken into account; storing the data already received and retrying failed API calls may speed up the overall gathering process and reduce the risk of corrupted data. Nonetheless, even though these problems are well known, many practitioners tend to reinvent the wheel by retrieving the data themselves with ad-hoc scripts.
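The incremental, retry-aware gathering pattern described above can be sketched in a few lines. This is a generic illustration of the pattern, not Perceval's actual API; the names and the fake paginated source are invented:

```python
class TransientError(Exception):
    """A recoverable failure, e.g. a dropped connection or a rate-limit hit."""

def incremental_fetch(page_source, state, max_retries=3):
    """Yield items page by page, resuming from state['offset'] and
    retrying transient failures so already-fetched data is never lost."""
    while True:
        for _attempt in range(max_retries):
            try:
                page = page_source(state["offset"])
                break
            except TransientError:
                continue            # retry the same page
        else:
            raise RuntimeError("giving up after repeated failures")
        if not page:
            return                  # no more data; state records where to resume
        for item in page:
            yield item
        state["offset"] += len(page)

# A fake paginated API that fails once, to exercise the retry path.
data = list(range(10))
calls = {"n": 0}

def source(offset):
    calls["n"] += 1
    if calls["n"] == 1:
        raise TransientError("flaky network")
    return data[offset:offset + 4]

state = {"offset": 0}
items = list(incremental_fetch(source, state))
```

Persisting `state` between runs is what makes the gathering incremental: a rerun only asks the source for items past the last recorded offset.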



This talk introduces Perceval, a tool that simplifies the collection of project data by covering more than 20 popular tools and platforms related to contributing to Open Source development, thus enabling the definition of software analytics. Perceval is an industry-strength tool that (i) allows data to be retrieved from multiple sources in an easy and consistent way, (ii) offers the results in the flexible JSON format, and (iii) makes it possible to connect the results with analysis and/or visualization tools. Furthermore, it is easy to extend, allows cross-cutting analyses and provides incremental support (useful when analyzing large software projects).

Are distributions still relevant?

Speaker Sanja Bonic
Room H.2215 (Ferrer)
Track Lightning Talks
Time 12:40 - 12:55

With flatpaks, snaps, and more - do we even still need distributions? What makes a distribution and how can package maintainers reconcile the fact that they have learned the hard ways of packaging for their distribution with the ease of use of distribution-agnostic packages?

This talk covers distribution-agnostic "apps", whether they are a good idea, why there is often such a backlash against them, and who benefits the most from them.

FreeBSD: pkg provides

Speaker Rodrigo Osorio
Room H.2215 (Ferrer)
Track Lightning Talks
Time 13:00 - 13:15

pkg-provides is a plugin for querying which package provides a particular file.
We use this example to introduce people to the art of writing plugins for the FreeBSD pkg tool.

Wrap it Up! Packaging from Pots to Software

Speaker Gordon Haff
Room H.2215 (Ferrer)
Track Lightning Talks
Time 13:20 - 13:35

Big changes are well underway in how software gets packaged and delivered. But even relatively modern takes on packaging goods and services for sale and consumption go back hundreds of years. In this talk, I'll take you on a whirlwind tour of packaging. What forms has packaging taken? What problems were being solved? What kinds of trade-offs need to be made? What lessons can we learn? How has packaging evolved from the utilitarian to the experiential?



And, critically, it will cover the trade-offs between prescriptive proprietary bundles and the sort of open-ended, unrestricted choice that open source software can enable.

Vis Editor: Combining modal editing with structural regular expressions

Speaker Marc André Tanner
Room H.2215 (Ferrer)
Track Lightning Talks
Time 13:40 - 13:55

The vis editor extends vi's modal editing with built-in support for
multiple selections and combines it with sam's structural regular
expression based command language and Lua scripting capabilities. The
intention is not to be bug for bug compatible with vi(m), instead we
aim to provide more powerful editing features based on an elegant design
and clean implementation.
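For readers unfamiliar with sam's structural regular expressions, the core idea is to loop over matches of a pattern anywhere in the text, rather than over lines. A rough Python analogy of sam's `x/pattern/ c/…/` (an illustration only, not vis's implementation):

```python
import re

def x(text, pattern, change):
    """sam-style extraction: apply `change` to every match of `pattern`,
    treating the matches (not the lines) as the units being edited."""
    out, last = [], 0
    for m in re.finditer(pattern, text):
        out.append(text[last:m.start()])   # keep the text between matches
        out.append(change(m.group()))      # rewrite the matched structure
        last = m.end()
    out.append(text[last:])
    return "".join(out)

# Uppercase every standalone word "vis", wherever it falls in the text.
result = x("vis extends vi; vis is scriptable", r"\bvis\b", str.upper)
```

With multiple selections, the same loop-over-matches structure is what lets an editor edit all occurrences simultaneously instead of one line at a time.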

Viva, the NoSQL Postgres!

Speaker Oleg Bartunov
Room H.2215 (Ferrer)
Track Lightning Talks
Time 14:00 - 14:15

PostgreSQL was the first relational database to recognize the need for non-atomic data types to support developers of applications from science to the Web. Jsonb in Postgres is an attractive feature for modern application developers who want to work with JSON documents without sacrificing strong consistency and the ability to use all the power of proven relational technology.

Finally, the SQL world has recognized NoSQL and released the new SQL:2016 standard, which includes a specification of the SQL/JSON data model and path language, as well as SQL commands for storing, publishing and querying JSON data. I will present the implementation of this standard in Postgres, based on the existing jsonb data type, and discuss some future extensions. I will also present the results for PostgreSQL and MongoDB of the YCSB benchmarks, which are well recognized in the NoSQL world.

AMENDMENT Designing a Libre Embedded / Mobile RISCV64 SoC

Speaker Luke Kenneth Casson Leighton
Room H.2215 (Ferrer)
Track Lightning Talks
Time 14:20 - 14:35

Please note that this replaces the talk by Rik Lempens on SiriDB - Time Series Database



In reaching out to the India Shakti RISC-V team the opportunity presented itself to put a proposal to them of a mobile / embedded RISC-V 64-bit SoC that would meet their requirements: low cost, libre, and suitable for four markets: smartphone, tablet, laptop / netbook and embedded industrial purposes. Six years ago the author attempted to create an SoC pinmux: it took over two months. Learning from that experience and instead writing a python program to represent the pinouts, adding new test scenarios instead took about an hour each, including altering the pinouts to best match the new scenario whilst still maintaining access to functions needed for all other scenarios as well. Assuming the market assessments were correct, the design - which only requires a 300-pin BGA package - can be said to have been proven to successfully meet all four target markets. Crucially with this approach, potential customers can be approached with the preliminary design, for their input and feedback before committing huge sums to design and tape-out actual silicon.



The output of the python program is a simple markdown page that can be used, without alteration, as the Reference Documentation should the SoC ever be created. In addition to the pinouts, the source of the design inspiration and guides utilised in the design has also been documented, so as to provide not just Reference Schematics and parts that are easily available in the Shenzhen / China markets, but also a logical justification for the actual choice of interfaces in each of the target usage scenarios. Ultimately, it helps explain why low-cost embedded and mobile-class processors are designed the way that they are.
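As a sketch of what representing a pinmux in a Python program might look like (the pin and function names here are invented, and a real tool would use proper bipartite matching and multiplexer banks rather than this greedy pass):

```python
# Each physical pin offers several alternative functions; a usage scenario
# is satisfiable if every required function can claim its own distinct pin.
PINMUX = {
    "P1": {"GPIO0", "UART0_TX", "PWM0"},
    "P2": {"GPIO1", "UART0_RX"},
    "P3": {"GPIO2", "SPI0_CLK", "PWM0"},
    "P4": {"GPIO3", "SPI0_MOSI"},
}

def allocate(scenario, pinmux):
    """Greedily assign each required function to a free pin offering it.
    Returns a {function: pin} map, or None if the scenario cannot fit."""
    free = dict(pinmux)
    assignment = {}
    for fn in scenario:
        pin = next((p for p, fns in free.items() if fn in fns), None)
        if pin is None:
            return None            # scenario does not fit this pinout
        assignment[fn] = pin
        del free[pin]              # a pin can serve only one function at a time
    return assignment

# A "UART console plus PWM backlight" scenario fits; adding SPI as well
# can be checked the same way, which is what makes iterating on pinouts fast.
console = allocate(["UART0_TX", "UART0_RX", "PWM0"], PINMUX)
```

Checking a new market scenario then reduces to writing down its function list and re-running the allocator, which is plausibly why adding a scenario took about an hour rather than months.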



Also included are links to numerous sources of BSD-licensed and compatible LGPL-licensed hard macros (VLSI / VHDL), such as the ORSOC Graphics Accelerator, so that, if utilised, apart from the DDR3 hard macros (which the Shakti team plan to implement under a libre license), the entire SoC's full hardware-design source code is completely available under libre licenses. With absolutely no proprietary firmware required whatsoever, it has the potential to be one of the world's first mass-volume mobile-class quad-core 28nm 2.5GHz embedded SoCs that is entirely libre right down to the bedrock.

NoSQL Means No Security?

Speaker Philipp Krenn
Room H.2215 (Ferrer)
Track Lightning Talks
Time 14:40 - 14:55

New systems are always interesting targets, since their security model has not yet had time to mature. NoSQL databases are no exception and have been the subject of some lurid articles about their security, but what does their protection actually look like?

We will take a look at three widely used systems and their unique approaches:




Your Build in a Datacenter

Speaker Jakob Buchgraber
Room H.2215 (Ferrer)
Track Lightning Talks
Time 15:00 - 15:15

Projects such as ccache and distcc drastically lower the duration of C/C++
builds by caching and distributing the work on many servers. These programs are
focused on C++ or C-like languages. Bazel, Google's open source build system, is
multi-language and has been used with a distributed caching and execution service
for almost ten years inside of Google. Recently Google and others have built on
that experience to design an open-source remote caching and execution system for
Bazel.



This talk will introduce remote caching and execution in Bazel, and the BuildFarm
project, a distributed remote caching and execution backend that is open source and
available for everyone to use.

Enroll 2FA to thousands of users with privacyIDEA

Speaker Cornelius Kölbel
Room H.2215 (Ferrer)
Track Lightning Talks
Time 15:20 - 15:35

privacyIDEA is an Open Source multi-factor authentication system. It supports a wide variety of second-factor types, such as smartphone apps, key fob tokens, U2F, YubiKeys and Nitrokeys, and can also manage SSH keys and x509 certificates.
Important features are the several simple ways to automate processes and thus easily enroll, personalize or revoke authentication objects in existing workflows.

privacyIDEA is a flexible two-factor solution which can integrate into any network.
Users are read from any user repository, such as flat files, SQL databases, LDAP or Active Directory.



REST API



privacyIDEA runs as a central authentication server in your network. All actions can be accessed via a REST API.



E.g. to enroll a smartphone app like the Google Authenticator, an administrator would first issue an authentication request to
receive an authorization token:



http POST https://your.privacyidea.com/auth username=administrator password=********


Then the administrator can enroll a token:



http POST https://your.privacyidea.com/token/init serial=123456 genkey=1 type=totp authorization:<authorizationtoken>


The request already returns a QR code image to be scanned with the smartphone.



Of course privacyIDEA provides a modern UI based on Bootstrap and Angular, but this API already gives you an idea of the possibilities for automation.
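The two http calls above could equally be scripted. Here is a stdlib-only Python sketch that merely builds the requests; the host, field names and Authorization header follow the example commands, but whether the server expects a JSON or form-encoded body is an assumption here, and actually sending a request would be `urllib.request.urlopen(req)` against a real server:

```python
import json
import urllib.request

BASE = "https://your.privacyidea.com"   # placeholder host from the example above

def auth_request(username, password):
    """Build the POST /auth request that returns an authorization token."""
    body = json.dumps({"username": username, "password": password}).encode()
    return urllib.request.Request(
        f"{BASE}/auth", data=body,
        headers={"Content-Type": "application/json"}, method="POST")

def enroll_request(auth_token, serial="123456"):
    """Build the POST /token/init request, carrying the authorization token
    in the Authorization header as in the http example."""
    body = json.dumps({"serial": serial, "genkey": 1, "type": "totp"}).encode()
    return urllib.request.Request(
        f"{BASE}/token/init", data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": auth_token}, method="POST")

req = auth_request("administrator", "secret")
req2 = enroll_request("TOKEN-FROM-AUTH-RESPONSE")
```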



Event Handler



In addition to this, privacyIDEA comes with an extremely flexible event handler framework.
This allows the administrator to hook new actions to any event, and these actions can be set to trigger only under certain conditions.
Triggered actions can be notifications, any kind of token event, federations with other privacyIDEA instances, or any arbitrary shell script receiving certain parameters from privacyIDEA.



The talk will give you an idea of how you can use the privacyIDEA event handler framework to add privacyIDEA and 2FA management to your existing automated processes.

The future of Sympa

Speaker Marc Chantreux
Room H.2215 (Ferrer)
Track Lightning Talks
Time 15:40 - 15:55

Sympa is just 20 years old; it has a large worldwide user base and a small community of developers. As a member of the community, I will explain the features that make Sympa worth developing actively, and what the next steps are.

Static Infrastructure Status with Jekyll and GitHub Pages

Speaker Carsten Thiel
Room H.2215 (Ferrer)
Track Lightning Talks
Time 16:00 - 16:15

Infrastructure services fail, at least from time to time.
Providing users with a central point of information is vital, in particular when your usual ways of communication (e.g. your website) are broken.



While there are dedicated services, it can also be done using Jekyll and its data and collections features on GitHub Pages.

Infrastructure services fail, at least from time to time.
Providing users with a central point of information is vital, in particular when your usual ways of communication (e.g. your website) are broken.
When disaster strikes, updates must be published quickly and with little effort.
Credentials must be known, templates available, and the full service inventory and its interdependencies documented externally.



While there are dedicated services, it can also be done using Jekyll and its data and collections features on GitHub Pages.
This Lightning Talk will explain how we implemented the approach for the DARIAH-DE Research Infrastructure services.

Snabb - A toolkit for user-space networking

Speaker Diego Pino
Room H.2215 (Ferrer)
Track Lightning Talks
Time 16:20 - 16:35

Snabb is a toolkit for developing user-space network functions. A network function (filtering, NAT, encapsulation) is any program that manipulates network traffic, and Snabb eases the effort of writing such programs. Snabb falls into the category of user-space networking: it bypasses the Linux kernel and talks directly to the hardware, which makes it a very convenient tool for high-performance networking. Unlike other user-space toolkits such as DPDK or VPP, Snabb is entirely developed in Lua, which significantly lowers the adoption barrier.



In this talk I introduce the Snabb toolkit. Through real-world examples, you will learn how Snabb works and even how to start prototyping your own network functions.
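Snabb structures a network function as a graph of small "apps" connected by links, driven by an engine loop. Here is a toy Python rendering of that idea (Snabb itself is written in Lua, and these class names are illustrative, not Snabb's API):

```python
from collections import deque

class Link:
    """A queue of packets flowing between two apps."""
    def __init__(self):
        self.q = deque()

class Source:
    """A 'pull' app: injects packets (represented here by frame sizes)."""
    def __init__(self, packets):
        self.packets = list(packets)
    def pull(self, out):
        while self.packets:
            out.q.append(self.packets.pop(0))

class Filter:
    """A 'push' app: forwards only the packets matching a predicate."""
    def __init__(self, pred):
        self.pred = pred
    def push(self, inp, out):
        while inp.q:
            pkt = inp.q.popleft()
            if self.pred(pkt):
                out.q.append(pkt)

# One "breath" of the engine: run pull apps, then push apps, over the graph.
in_link, out_link = Link(), Link()
src = Source([60, 1500, 40, 9000])
flt = Filter(lambda size: size <= 1500)   # drop jumbo frames
src.pull(in_link)
flt.push(in_link, out_link)
```

Composing a network function then amounts to wiring more apps (NAT, encapsulation, counters) into the same graph, which is the design Snabb's engine repeats millions of times per second.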

FOSDEM Infrastructure Review

Speaker Richard Hartmann
Room H.2215 (Ferrer)
Track Lightning Talks
Time 16:40 - 16:55

FOSDEM infra review

Introduction to the decentralized internet part

Speaker Tristan Nitot
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 09:00 - 09:15

Welcome to the Decentralized Internet Devroom (DID)



This is the second edition of this devroom, whose topic has been extended to include privacy-related projects.



During this session, we'll discuss two topics:



1 - Why do we need to redecentralize the Internet?
2 - How, and with whom, will we do it?

Can we measure the (de)centralisedness of the Internet with RIPE Atlas?

Speaker Emile Aben
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 09:15 - 09:30

We are looking at RIPE Atlas probes in the networks that host the majority of end users for a given country.
We measure the paths between them and infer whether these networks are directly connected or not.
With this information we can estimate how centralised the Internet is for a given country (in terms of who has access to user-to-user IP communication channels).
This can be useful information for people who want to develop a more decentralised Internet.

(see abstract)

Get your decentralized project some EU funding

Speaker hellekin (how)
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 09:30 - 10:00

Free software projects for decentralization have been plagued with a lack of funding. We set up the Public Universal Base: Libre Infrastructure Consortium to respond to EU calls and bring funding to our community. We'll show you how to participate in the project and get some funding for your free software social media project.

For over five years, a group of people have been working on ways to amplify and coordinate the action of free software projects in Europe in the field of decentralized and distributed social media. From the P2P Meeting in Berlin in 2012 to the Umbrella project that gave birth to the Center for Cultivation of Technology, we've made serious advances towards our goal of finding sustainable ways to promote and fund critical free software projects for the Federation, P2P social media, and privacy-by-design projects.



It's now time to bring EU funding to free software! With the H2020/ICT-28 EU call for "Future Hyper-connected Sociality", we have a serious opportunity to bring funding to our community. We've set up the Public Universal Base: Libre Infrastructure Consortium (PUBLIC) to respond to ICT-28. PUBLIC is an EU consortium to promote the adoption of free software in Europe to build a lasting public infrastructure.



In this presentation we will introduce the EU consortium and our goals, and explain how free software projects can participate to boost their finances. Via a procedure of "Open Calls", PUBLIC will be able to distribute funding to selected projects. We'll tell you how best to respond to those calls, and we'll ask you how best to frame the project to respond to your needs.



Outline:
- What is ICT-28
- What is PUBLIC & PUBLIC's goal
- Open Calls & how to respond to them
- Questions

Urban places as nodes of a decentralized Internet

Speaker Panayotis Antoniadis
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 10:00 - 10:30

The MAZI toolkit, http://mazizone.eu/toolkit, is a DIY networking toolkit for installing, customizing, and deploying self-hosted applications on a Raspberry Pi as a local web server (in a sense, a YunoHost focusing on small-scale community networks, and a PirateBox opening up to self-hosted applications) that can help to create a wide variety of local networks effectively and democratically designed and governed by local communities. Such a tool can bring self-hosted software and community networks closer together, making both more accessible to a wider audience. Openki, http://openki.net, is a FLOSS platform for self-organized education that facilitates the creation of collaborative learning groups in the city. In this talk I will explain how these two can be combined to help build a decentralized Internet anchored in urban spaces, and to facilitate the learning and trust-building processes that are necessary for privacy, among other things.




A related high-level talk is available online here: https://www.youtube.com/watch?v=G_fMfX7oigQ.

The Generic Data Distribution System of the Retroshare Network

Speaker Cyril Soler
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 10:30 - 11:00

Because of their static topology and limited connectivity, friend-to-friend networks represent a challenge for data distribution. We present GXS, a generic data distribution system suitable for friend-to-friend networks that is at the heart of the Retroshare software. The presentation will start with a short overview of the Retroshare software. We will then cover the technical aspects of GXS, its architecture and concepts, and describe a few existing decentralized services based on this system. Finally, a call to developers will be made, based on a generic example, proposing multiple ideas for distributed services to be developed on top of it.

The detailed plan for the presentation is as follows.
Each line indicates the approximate cumulative duration and number of slides.
The presentation is tuned to approximately 20 minutes.



A - overview of the Retroshare platform
B - the GXS system



    - overview of the system and data distribution paradigm
    - technical description: services, groups, messages, circles
    - spam control
    - file transfer


C - open problems and call for contributions



    - step-by-step example of creating a service
    - ideas for GXS services


D - conclusion+contacts (1 min, 1 slide)

Ring as a free universal distributed communication platform

Speaker Sébastien Blin
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 11:00 - 11:30

Ring, a free universal distributed communication platform, got its first stable version in July 2017. This project is based on OpenDHT, which manages communications through a p2p network. In this talk, after a short demonstration, we will describe the current state of the platform, what is new since the last FOSDEM, and what will be added during the next months.

Ring is a free, universal and distributed communication platform respecting user freedoms and privacy developed by Savoir-Faire Linux in Montréal. Unlike a lot of communication technologies, this project works without any central server and uses a peer-to-peer network transport library which can be used by any application: OpenDHT.



In 2017 Ring saw a lot of improvements and got its first stable version last July, which provides a lot of new features. The project is available on many platforms (Linux, BSD, Windows, UWP, macOS, Android and iOS) and also supports SIP calls. During the last year, communication reliability has been improved, as well as the user experience. The OpenDHT library was enhanced to work via a proxy, which offers the ability to easily work with HTTP requests.



Ring and OpenDHT's developers will describe how this technology works, what problems they encountered and what solutions were implemented.
After a demonstration of Ring, the developers will explore several questions:






Finally, the talk looks ahead at the 2018 roadmap and describes the work planned to improve the user experience and the possibilities for Ring.

Building Decentralised Communities with Matrix

Speaker Matthew Hodgson
Room H.1301 (Cornil)
Track Decentralised Internet and Privacy
Time 11:30 - 12:00

Over at Matrix.org we've spent a lot of the last year refining the Matrix protocol for open, secure, decentralised communication to make it more usable for larger scale usage. One of the major recent additions has been the ability to group together sets of users and rooms into 'Communities' - equivalent to Slack Teams or Discord Servers, which give a way for existing projects and communities to give their users a much more focused and friendly environment for decentralised communication with Matrix. In this talk we'll explain how Communities (aka Groups) work, how they're implemented, and how FLOSS projects in particular are using them and other recent features to escape the centralised tyranny of proprietary alternatives!

Everyone should be painfully familiar with the sinking feeling of their favourite online community (be that FLOSS or any other topic) fragmenting and getting trapped in proprietary communication technologies. Matrix exists to defragment these silos and provide an open standard with open source implementations as a decentralised alternative. One of the main improvements over the last year has been the introduction of Communities - a new feature in Matrix which provides a much-needed ability to define decentralised sets of users and groups alongside other community metadata (profile page, avatar, etc) to create a friendly home in Matrix for existing organisations of any kind. This makes it way easier to migrate from proprietary silos into Matrix for communication and collaboration within a community: rather than users being thrown head first into the open ocean of Matrix, anyone can now produce curated landing pages for given communities which users can participate in... without having to constantly sign up for new accounts or having communities locked into proprietary platforms. Meanwhile bridges let Matrix connect through to IRC, Slack, Discord, Telegram and others to fix the fragmentation problem. We'll be showing off how folks like NextCloud, OpenSource Design, Status.im and Cosmos are using Communities already, and how they work under the hood.



Meanwhile, lots of other stuff has landed in Matrix this year focused on improving usability: entirely new UX for managing end-to-end encryption; all new native desktop clients (e.g. Nheko from the community); the addition of Widgets to embed arbitrary webapps into Matrix rooms; integrating with Jitsi for video conferencing and more! Alongside Communities we'll show off the latest stuff and demonstrate how Matrix clients like Riot are becoming an increasingly viable open source alternative to Slack and friends.

Back

The emPeerTube strikes back

Home

Speaker Luc Didry (Framasky)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time12:00 - 12:30
Event linkView original entry

Presentation of PeerTube, a decentralized video streaming platform using P2P (BitTorrent) directly in the web browser and ActivityPub to federate servers.



Intended audience: technical people and decentralization enthusiasts.

Today, you can't offer a real alternative to YouTube: the costs of storage and Internet bandwidth are so huge that only a few companies can afford them.
Some free software projects exist to host your own videos, but they don't address the money issue when too many videos are uploaded to your server, or when a video goes viral.



That's where PeerTube comes in! Servers are federated with ActivityPub, a federation protocol standardized by the W3C (already used in Mastodon), and they send the videos to web browsers with the help of WebTorrent (BitTorrent over WebRTC).
The videos are indexed across federated servers, so people can watch a video from instance A through instance B's web interface. That kills the disk storage cost: just upload your videos to your own instance, or to one you trust!
When several viewers watch the same video at the same time, thanks to WebTorrent they receive it not only from the instance hosting the video but also from the other watchers. And that addresses the bandwidth issue.
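The federation side described above can be sketched as a minimal ActivityPub-style payload: instance A announces a newly uploaded video to its federated peers. This is a hedged illustration only — the actor and video URLs are invented and the real PeerTube vocabulary is richer:

```python
import json

# A minimal, hypothetical ActivityPub-style activity, sketching how instance A
# could tell federated peers about a newly uploaded video. Field names follow
# ActivityStreams conventions; the URLs below are invented for illustration.
def make_video_activity(actor: str, video_url: str, title: str) -> str:
    activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Create",
        "actor": actor,
        "object": {
            "type": "Video",
            "id": video_url,
            "name": title,
        },
    }
    return json.dumps(activity)

payload = make_video_activity(
    "https://instance-a.example/accounts/alice",
    "https://instance-a.example/videos/42",
    "My FOSDEM talk",
)
```

A receiving instance would parse this payload and add the remote video to its own index, which is what lets instance B display videos it does not store.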



We will present the project, its technical background and its future developments.



Outline of the talk or goals of the session:




Back

Contributopia

Home

Speaker Christophe Lafon-Roudier (Pouhiou)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time12:30 - 13:00
Event linkView original entry

For the past 3 years, the French non-profit Framasoft has been leading a campaign to "De-google-ify Internet", hosting 32
free-libre alternatives to Google (and GAFAM) services, educating users to change their digital habits and creating a
French network of free-libre and ethical service hosters. In October 2017, Framasoft launched a new campaign: Contributopia. 12 actions over 3 years sharing the same goal: create the digital tools to equip the "contributors' society" and get people interested in contributing to free-libre and open-source software.

Over 3 years of campaigning to De-Google-ify Internet (and doing it), we learned a few lessons:






We concluded that proposing 30 free-libre and data-friendly alternatives to the most used web-services (during the 2014-2017 Degoogleify Internet campaign) was not enough... This proof of concept (used today by about 400 000 monthly French-speaking users) is just a start.



Here is our plan for the 3 years to come: include users in the making of the most politically (and technically) sensitive web services, document our experiments and meet our peers so they can be adapted and reproduced. And finally, imagine and create the tools that facilitate access to open knowledge and digital independence.



This won't happen if we don't go and build bridges between the free-libre software communities and all the people who share these ethics in other domains. There are new worlds to explore, and we hope more and more will join us on this journey.

Back

Peeling onions: understanding and using the Tor network

Home

Speaker Silvia Puglisi (Hiro)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time13:00 - 13:25
Event linkView original entry

Tor is an important tool providing privacy and anonymity online. The property of anonymity itself is more than just providing an encrypted connection between the source and the destination of a given conversation. There is in fact a lot of information that can be still learned by just observing encrypted communications. Anonymity is a broad concept, and it can mean different things to different groups. The main advertised property of the Tor network is that it provides strong anonymity given a variety of people using the network. The Tor network itself is only a part of what Tor is. Tor also provides privacy at the application level through the Tor Browser. This talk is going to present what Tor is and how it works. We are also going to present new features we have been developing lately. Finally we are going to explain how you can build applications that use Tor.

Tor is an important tool providing privacy and anonymity online. The property of anonymity itself is more than just providing an encrypted connection between the source and the destination of a given conversation. Encryption only prevents the content of the communication between Alice and Bob from becoming known.



There is in fact a lot of information that can still be learned by just observing encrypted communications. For example, it is always possible to guess certain information by learning some properties of the conversation beyond just the content, such as the length of the conversation, or who was involved, or even guessing a group of people that communicate with a certain frequency. These properties are called metadata and can be used to describe information even when the full data is not available.
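The metadata leakage described above can be made concrete with a toy example (this is an illustration of the concept, not Tor code): even when every payload is encrypted, an observer logging `(sender, receiver, start, end)` per connection can recover conversation lengths and contact frequencies.

```python
from collections import Counter

# Hypothetical log of encrypted flows: the observer never sees any content,
# only who connected to whom and when (times in seconds).
flows = [
    ("alice", "bob",   0, 120),
    ("alice", "bob", 300, 360),
    ("carol", "bob",  10,  40),
]

# Metadata recovered without decrypting anything:
durations = [(src, dst, end - start) for src, dst, start, end in flows]
contact_freq = Counter((src, dst) for src, dst, _, _ in flows)
```

From three encrypted connections alone, the observer learns that alice and bob talk repeatedly and how long each conversation lasted — exactly the kind of inference an anonymity network like Tor is designed to frustrate.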



Anonymity is a broad concept, and it can mean different things to different groups. The main advertised property of the Tor network is that it provides strong anonymity given a variety of people using the network. For the Tor network to function properly and to satisfy users' needs, we need a certain degree of diversity. We need diversity in the nodes comprising the network and in the user population sending traffic through it. Lately, we have been introducing new traffic scheduling features in the network in order to solve problems, reduce congestion and improve overall performance.



The Tor network itself is only a part of what Tor is. Tor also provides privacy at the application level through the Tor Browser.



Other applications can also make use of the Tor network to be more secure. In this case Tor can provide bi-directional anonymity by making it possible for users to hide their locations while offering various kinds of services, such as web publishing or an instant messaging server. The next generation onion services have been recently launched in alpha and we are excited to touch on some of the new features that have been introduced on the old hidden service design.

Back

Anonymous Whistleblowing with SecureDrop

Home

Speaker Jennifer Helsby (redshiftzero)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time13:30 - 13:55
Event linkView original entry

This session will introduce SecureDrop, a free and open source whistleblowing platform. We will describe how it addresses the critical need for a way for journalists and sources to communicate securely and anonymously. Many large news organizations including the Associated Press (AP), the Guardian, the Washington Post and the New York Times are all now running SecureDrop in their newsrooms to preserve an anonymous tip line in the presence of increasing surveillance powers by governments and corporations. We will describe how SecureDrop works, how you can install it, and how you can contribute to the project.




The talk will be 20 minutes, with 10 minutes for questions.

Back

The Invisible Internet Project

Home

Speaker Andrew Savchenko (bircoph)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time14:00 - 14:25
Event linkView original entry

In this talk I will discuss what the Invisible Internet Project (I2P) is and why the I2P network is needed, provide insight into how the I2P network works, and explain how it differs from Tor. Some use cases and safety tips will be covered, along with a discussion of some new features.



Intended audience: everyone interested in I2P.


Back

Encrypted communication for mere mortals

Home

Speaker kwadronaut
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time14:30 - 14:55
Event linkView original entry

The LEAP Encryption Access Project is dedicated to giving all Internet users access to secure communications. Our focus is on adapting encryption technology to make it easy to use and widely available. End users are not the only ones who deserve usable programs: the barriers to entry for aspiring service providers are also pretty high. LEAP's goal is to transform the existing frustration and failure into an automated and straightforward process.

We are currently working on two services:






In this talk we'll give:




Back

Improving the SecureDrop system architecture

Home

Speaker Eric Hartsuyker (heartsucker)
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time15:00 - 15:25
Event linkView original entry

SecureDrop is designed to address a threat model that includes nation-state adversaries. Journalists and system administrators understand the tradeoffs and are ready to make significant efforts, because usability is difficult to improve without sacrificing security. We explore novel approaches to deploying the recommended hardware that significantly improve usability without compromising security.

When employees of an intelligence agency follow Snowden's footsteps, they boot Tails in a coworking space and submit to the SecureDrop Tor hidden service. SecureDrop is the only whistleblower framework that addresses a threat model with a nation-state adversary. The journalist then uses one machine to get the documents from the server, and moves them to an airgapped machine to decrypt and read them. The server itself is made of two machines, one running the Tor hidden service and the other running OSSEC to monitor it. They sit behind a carefully configured firewall and are controlled from an admin workstation running Ansible.



The physical separation between machines is an essential part of SecureDrop's security. The machines are nontrivial to set up and use, for both the journalist and the system administrator of the news organization. We will present alternatives designed to improve usability for both of them while preserving a reasonable level of security.

Back

Measuring security and privacy on the Web

Home

Speaker Tobias Mueller
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time15:30 - 15:55
Event linkView original entry

PrivacyScore.org (in public beta since June 2017) is an automated website scanning platform that allows anyone to investigate websites for privacy and security issues. Users can use PrivacyScore to compare related websites (e.g., of all political parties in a country). We will present insights from running the platform, interesting results, and discuss future plans for the platform with the audience.

We present our approach for making the Web a safer place: by making privacy invasions and security mishaps more transparent to users, website operators, and data protection authorities. This led us to the creation of PrivacyScore. PrivacyScore is a website scanning platform that simplifies the process of comparing security and privacy aspects of websites. PrivacyScore focuses on lists of websites, while existing scanners such as Webbkoll, Mozilla Observatory, Track the Trackers by Fraunhofer SIT, securityheaders.io, etc. focus on single sites. Furthermore, PrivacyScore is non-commercial and available as open source software. All recorded data is made available publicly for research purposes.



We believe that public benchmarks are a useful tool to improve security and privacy in the long run. On the one hand, such benchmarks can help raise users' awareness; on the other hand, a benchmarking platform like PrivacyScore can be of use for data protection agencies that want to, or have to, audit content providers in their jurisdiction, which will become more widespread in 2018 with the European General Data Protection Regulation. The generated datasets are also of value for researchers: for instance, we are interested in analyzing whether public "blaming and shaming" of poor performance within a peer group of sites creates an incentive for site operators to implement additional security and privacy measures.



In our talk we will present insights on the effectiveness of the public shaming approach that we have gained from running PrivacyScore in public since June 2017. We (and other users) have already created several lists to analyze security and privacy aspects of more than 18,000 sites. In some cases we were able to observe how web site operators react when they learn how their site ranks in comparison to their competitors.

Back

CryptPad

Home

Speaker Caleb James DeLisle
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time16:00 - 16:25
Event linkView original entry

CryptPad is the world's first web based realtime collaborative editing platform where the server never sees the plaintext content. The realtime synchronization happens entirely in the client and the content is encrypted so that the server cannot read it. This lecture will present the encryption and key-management model, the sandboxing architecture which prevents a majority of CryptPad code from accessing the secret keys and some research into a solution to prevent backdoors in the javascript sent from the server. There will also be a discussion of how to reuse the CryptPad architecture (and source code) in developing other security/privacy conscious webapps.

CryptPad was created to show the world that end-to-end encryption can be as easy to use as any ordinary webapp. CryptPad allows anonymous collaboration similarly to Etherpad, but the URL contains a key after the # (which is never sent to the server). CryptPad also allows registration and then organization of pads in a "CryptDrive", which is itself encrypted using a key derived from the username and password. The result is that no plaintext content is ever sent to the server.
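Both mechanisms above can be sketched briefly. This is a hedged illustration of the principle, not CryptPad's actual code: the URL and the KDF parameters are invented, and CryptPad's real key-derivation scheme differs.

```python
import hashlib
from urllib.parse import urlsplit

# 1. The fragment after '#' is client-side only: an HTTP request carries the
#    path but never the fragment, so key material placed there stays local.
pad_url = "https://cryptpad.example/pad/#/key-material-stays-local"
parts = urlsplit(pad_url)
request_target = parts.path        # what the server would see: "/pad/"
secret_fragment = parts.fragment   # what stays in the browser

# 2. Deriving a drive key from username + password with a standard KDF
#    (illustrative parameters; CryptPad's actual scheme is different).
def derive_drive_key(username: str, password: str) -> bytes:
    return hashlib.pbkdf2_hmac(
        "sha256", password.encode(), username.encode(), 100_000, dklen=32)
```

Because the same username and password always derive the same key, the drive can be decrypted from any browser without the server ever holding the secret.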



Because the server is unaware of any of the content being edited, all operational transformation and realtime synchronization must happen on the client side. This was achieved using a library called ChainPad, which uses a Merkle DAG (a simplified blockchain) in order to reach consensus on the official version of the document without the server's help. The server is little more than a dumb message relay.



However, one still must trust the validity of the javascript which is hosted on the server. In order to mitigate the risk of javascript vulnerabilities leaking the secret keys from CryptPad, the browser's Cross-Origin Policy is leveraged with a novel use of cross-domain iframes. Attacks are also limited using Content Security Policy, a feature present in modern browsers.



Still, the integrity of the javascript sent by the CryptPad server remains of great importance. Cryptography software is traditionally installed as an application which theoretically receives independent code review at each release. Unfortunately this defeats the simplicity and ease of adoption of a webapp. We are studying a possible solution using notarized code signing with a browser extension to validate it.



CryptPad is a research and development effort to explore new ways of improving security in collaborative web applications while maintaining the usability that people demand. This architecture is not confined to realtime editing; it can be extended to many types of applications. When you design the next privacy-conscious technology, consider whether you should really require your users to install it.

Back

Servers can't be trusted, and thanks to tamper-proof journals EteSync doesn't need to!

Home

Speaker Tom Hacohen
RoomH.1301 (Cornil)
TrackDecentralised Internet and Privacy
Time16:30 - 16:55
Event linkView original entry

Servers can't be trusted! Whether it's because of a malicious company, a rogue employee, a government agency, a random hacker or malware, your data is not safe if it's just sitting there exposed on a server.



Luckily, there are ways to mitigate some of the threats, for example by making your server more secure, using a server you trust (self-hosting) and using end-to-end encryption, so the server doesn't have access to this information. While these are great, the server can still successfully manipulate your data; this is where tamper-proof journals come into play and help reduce even that.
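The idea of a tamper-evident journal can be sketched in a few lines. This is a generic hash-chain illustration, not EteSync's actual journal format: each entry commits to the hash of the previous one, so a server that rewrites or drops history breaks the chain for any verifying client.

```python
import hashlib
import json

# Minimal tamper-evident journal sketch (hypothetical format, not EteSync's).
def entry_hash(entry: dict) -> str:
    return hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append(journal: list, content: str) -> None:
    # Each new entry records the hash of the entry before it.
    prev = entry_hash(journal[-1]) if journal else "root"
    journal.append({"prev": prev, "content": content})

def verify(journal: list) -> bool:
    # Walk the chain; any rewritten entry invalidates every later link.
    prev = "root"
    for entry in journal:
        if entry["prev"] != prev:
            return False
        prev = entry_hash(entry)
    return True

journal: list = []
append(journal, "add event: dentist appointment")
append(journal, "edit event: dentist appointment moved")
assert verify(journal)

journal[0]["content"] = "maliciously rewritten"  # server tampers with history
assert not verify(journal)
```

In a real system each client keeps the last hash it has seen, so even a server that rebuilds the whole chain from the tampered entry onward is caught the next time that client syncs.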



In this talk Tom will show some of the threats your data is facing on servers even when self-hosting and using end-to-end encryption, explain what tamper-proof journals are, how they can mitigate these threats, and how they are used by EteSync to better secure your data.

Audience:




Back

Installing software for scientists on a multi-user HPC system

Home

Speaker Kenneth Hoste
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time09:00 - 09:25
Event linkView original entry

Before scientists can use HPC systems for their research, they need to get the tools and applications installed that they require.



Until recently, this was a (perhaps surprisingly to some) painful process,
especially for scientists who lack sufficient experience with compiling software and dealing with dependencies.



Recently, several projects have emerged that aim to facilitate this process, each with a particular focus:
performance, flexibility, reproducibility, ease of use, support for multiple platforms, etc.



In this talk, I would like to present an objective comparison of the different tools that are most prevalent currently, including:






Although I intend to focus on the use case of installing (scientific) software on multi-user HPC systems, I will also highlight particularly interesting features that fall outside that scope.

Back

Binary packaging for HPC with Spack

Home

Speaker Todd Gamblin
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time09:30 - 09:55
Event linkView original entry

Spack is a package manager for cluster users, developers, and
administrators, rapidly gaining popularity in the HPC community. Like
other HPC package managers, Spack was designed to build packages from
source. However, we've recently added binary packaging capabilities,
which pose unique challenges for HPC environments. Most binary
distributions assume a lowest-common-denominator architecture,
e.g. x86_64, and do not take advantage of vector instructions or
architecture-specific features. Spack supports relocatable binaries for
specific OS releases, target architectures, MPI implementations, and
other very fine-grained build options.



This talk will introduce binary packaging in Spack and some of the open
infrastructure we have planned for distributing packages. We'll talk
about challenges to providing binaries for a combinatorially large
package ecosystem, and what we're doing in Spack to address these
problems. We'll also talk about challenges for implementing relocatable
binaries with a multi-compiler system like Spack. Finally, we'll talk
about how Spack integrates with the US exascale project's open source
software release plan, and how this will help glue together the HPC OSS
ecosystem as a whole.

Back

Tying software deployment to scientific workflows

Home

Speaker Ludovic Courtès
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time10:00 - 10:25
Event linkView original entry

Package management, container provisioning, and workflow execution are often viewed as related but separate activities. This talk is about using Guix to integrate reproducible software deployment in scientific workflows.

In HPC, tools usually focus exclusively on one of these aspects: Spack or EasyBuild manage packages, Singularity or Shifter deal with containers, and SLURM, CWL, or Galaxy mostly leave it up to users to deploy their software.



While the initial tooling of GNU Guix is about package management, we have grown it into a toolkit that, broadly speaking, allows developers to integrate reproducible software deployment into their applications—as opposed to leaving it up to the user.



In this talk I will illustrate the benefits of this approach with examples from recent work from the Guix-HPC effort. This ranges from the “guix pack” container provisioning tool, to the Guix Workflow Language (GWL), which incorporates deployment as a key aspect of workflow management. I will discuss how we could make these tools key components of broader reproducible scientific workflows as demonstrated by projects such as ActivePapers, ReScience, or NextJournal.

Back

Combining CVMFS, Nix, Lmod, and EasyBuild at Compute Canada

Home

Speaker Bart Oldeman
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time10:30 - 10:55
Event linkView original entry

One of the challenges in HPC is to deliver a consistent software stack that balances the needs of the system administrators with the needs of the users. This means running recent software on enterprise Linux distributions that ship older software.
Traditionally this is accomplished using environment modules, that change environment variables such as $PATH to point to the software that is needed.
At Compute Canada we have taken this further by distributing a complete user-level software stack, with all needed libraries including the GNU C library, but excluding any privileged components.
I will describe our setup, which combines Nix for the bottom layer of base components, EasyBuild for the top layer of more scientifically inclined components, Lmod to implement environment modules, and the CernVM File System (CVMFS) to distribute it to Canadian supercomputers.
Expected prior knowledge: knowing how to use the command line and environment variables.

Back

Behind the scenes of a FOSS-powered HPC cluster at UCLouvain

Home

Speaker Damien François
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time11:00 - 11:25
Event linkView original entry

With the advent of the DevOps and Infrastructure as Code movements, tools have emerged that allow building a complete HPC solution from scratch based only on open source software. At UCLouvain, one of our clusters, and the services on which it depends, is built on a full FOSS stack. From the operating system, to the deployment tools, monitoring, scheduling, and user software installation, everything is built from open source software that inter-operate gracefully. For instance, for deployment/provisioning, we use a combination of Ansible and Salt which we find work perfectly together even if they are often considered to be mutually exclusive.
This talk will share our experience with making FOSS software co-operate smoothly and will offer our point of view on choosing the right tool for the right job. It will also present some of the contributions we have made to the open source community.

Our software stack for the cluster is based on Slurm, OpenLDAP, Easybuild, Zabbix, with many side-services running on virtual machines in OpenStack. Management is performed with a trilogy of tools: Cobbler, Ansible and Salt, for production, and Openstack and Vagrant for development and staging.



Our contributions to the open source community include some tools for Slurm, a web wizard for building submission scripts, and a tool that orchestrates the interplay between the LDAP system and the job submission program (Slurm).

Back

How DeepLearning can help to improve geospatial DataQuality, an OSM use case.

Home

Speaker Olivier Courtin
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time11:30 - 11:55
Event linkView original entry

How can deep learning, and semantic segmentation in particular, be an efficient way to detect and spot inconsistencies in an existing dataset?
The OpenStreetMap dataset is taken as a use case.

Data quality is a must, but also a challenge.
Any technique that can help improve data quality is therefore more than welcome.



Machine learning and deep learning can tackle some old issues far more conveniently and efficiently than ever before.
For instance, semantic segmentation of aerial imagery can detect features in an image and allow us to check dataset consistency.



In this presentation we will focus on how an OpenStreetMap subset dataset (for instance roads and buildings in an area)
can be evaluated to produce a quality metric, and how to spot areas where the dataset is still obviously far from complete.



From a data science point of view, we will focus on:






From an open data point of view, we will consider how this kind of solution could be integrated with the OSM project's quality assurance policy.

Back

Modules v4

Home

Speaker Xavier Delaruelle
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time12:00 - 12:10
Event linkView original entry

Typically, users initialize their shell environment when they log in to a system by setting environment information for every application they will reference during the session. The Modules project, also referred to as Environment Modules, provides a shell command named module that simplifies shell initialization and lets users easily modify their environment during the session with configuration files called modulefiles.
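What "loading a module" amounts to can be sketched roughly: editing environment variables for the current session, for example prepending a tool's bin directory to $PATH. This is a simplified illustration of the concept, not the Modules implementation (which evaluates Tcl modulefiles); the compiler path below is hypothetical.

```python
import os

# Rough sketch of what a modulefile's effect on the environment looks like:
# a "module load" prepends the package's bin directory to $PATH (and would
# similarly adjust LD_LIBRARY_PATH, MANPATH, etc.), and a "module unload"
# would reverse exactly those edits.
def module_load(env: dict, bin_dir: str) -> dict:
    new_env = dict(env)
    new_env["PATH"] = bin_dir + os.pathsep + env.get("PATH", "")
    return new_env

env = {"PATH": "/usr/bin"}
env = module_load(env, "/apps/gcc/7.3/bin")  # hypothetical install prefix
```

The key property, atomicity, means the module command tracks every variable it changed so the edit can be undone cleanly, rather than users hand-editing their shell startup files.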



The Modules project has a long history, as its development started in 1991. At that time, the concept of the module command was laid down: to dynamically and atomically enable environment configurations during a shell session. Since then, this concept has become a standard practice, especially among the scientific community, where people share the same computing resources but each have specific software and version requirements.



After an almost 5-year release hiatus, Modules is back in the environment management game with version 4. The intent is to further improve the modulefile standard and the module command's capabilities, applying proven concepts from similar fields like software package management.



After briefly explaining the root concept behind the module command, this talk will cover the major changes between versions 3.2 and 4 at both software and project level. Then focus will be put on some of the recent or upcoming new features:
* virtual modulefiles
* extend the module command at your site
* sharing code across different modulefiles
* dependencies management between modulefiles
* new ways to query or change the environment state



The audience for this talk is anyone interested in user environment management: from the system administrator, who has to provide access to a software catalog, to the end-user of a shared computing system, who needs to juggle different workloads combining software elements.

Back

Scale Out and Conquer: Architectural Decisions Behind Distributed In-Memory Systems

Home

Speaker Akmal Chaudhri
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time12:10 - 12:20
Event linkView original entry

Distributed platforms, like Apache Ignite, rely on horizontal scalability. More machines in the cluster means greater performance of the application. Do we always get twice the speed after adding the second machine to the farm? Ten times faster after adding ten machines? Is that [always] true? What is the responsibility of the platform? And where do engineers’ responsibilities begin?

In this talk attendees will learn about the compromises and pitfalls architects face when designing distributed systems:



• Advantages and disadvantages of different data-sharding algorithms.
• Effective data models for distributed environments.
• Synchronization and coordination in distributed systems.
• Local scalability issues of speeding up local processing on cluster nodes.
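One of the data-sharding algorithms alluded to above can be sketched with consistent hashing, which limits how much data must move when nodes join or leave the cluster. This is a generic illustration, not Apache Ignite's code (Ignite's default affinity function is a rendezvous-hashing variant):

```python
import bisect
import hashlib

def h(key: str) -> int:
    # Stable hash for placing both virtual nodes and data keys on the ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, replicas=100):
        # Each physical node gets `replicas` virtual points on the ring,
        # which evens out the key distribution across nodes.
        self.ring = sorted(
            (h(f"{node}#{i}"), node) for node in nodes for i in range(replicas))
        self.points = [point for point, _ in self.ring]

    def node_for(self, key: str) -> str:
        # A key belongs to the first virtual point clockwise from its hash.
        idx = bisect.bisect(self.points, h(key)) % len(self.ring)
        return self.ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("customer:42")
```

The tradeoff this illustrates: with naive `hash(key) % n` sharding, adding one machine remaps nearly every key; with a ring, only the keys falling between the new node's points and their predecessors move.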

Back

The Magnificent Modular Mahout

Home

Speaker Trevor Grant
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time12:20 - 12:30
Event linkView original entry

Open source big data engines as well as HPC libraries seem to be proliferating at an increasing rate. Technical debt can be incurred with statistical and machine learning algorithms that require a highly specialized knowledge of the algorithm at hand as well as the distributed engine / HPC library which the method has been written against. The Apache Mahout project presents a highly modular stack which introduces levels of abstraction between the mathematical implementation of the algorithm (an R-Like Scala DSL) and the execution of the code. Users are able to interchange Apache Spark, Apache Flink (batch), and H2O distributed engines, as well as ViennaCL for OpenCL on GPU and OpenMP, and CUDA native solvers. Users can also port high level algorithms to new distributed engines or native solvers by defining a handful of BLAS operations.

Audience members will ideally have some concept of distributed engines such as Apache Spark, and a basic understanding of BLAS packs and linear algebra (basic understanding of linear algebra meaning they remember that things like matrix-times-matrix, matrix-times-vector, matrix transposes, and matrix decompositions exist).



Current research is being done on creating a quantum BLAS pack for Apache Mahout, which will be a prototype of the next generation of high-performance computing. However, even beginning to explore the topic of quantum computing in 20 minutes is unrealistic, so research and progress at the time of the conference will be mentioned only in passing.

Back

Tools for large-scale collection and analysis of source code repositories

Home

Speaker Alexander Bezzubov
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time12:30 - 12:40
Event linkView original entry

There are tens of millions of Git repositories publicly available on the Internet, but what kind of tools would one need to treat all this code as a big dataset?
I will talk about new and existing OSS tools that were built and used to allow the collection and analysis of millions of Git repositories on commodity hardware clusters.

Back

Slurm in Action: Batch Processing for the 21st Century

Home

Speaker Georg Rath
RoomH.1302 (Depage)
TrackHPC, Big Data, and Data Science
Time12:40 - 12:50
Event linkView original entry

This talk will give an overview of how we use Slurm to schedule the workloads of over 6000 scientists at NERSC, while providing high throughput, ease of use and, ultimately, user satisfaction.
With the emergence of data-intensive applications it was necessary to update the classic scheduling infrastructure to handle things like user defined software stacks (read: containers), data movement and storage provisioning. We did all of this and more through facilities provided by Slurm. In addition to these features we will discuss priority management and quality of service and how that can greatly improve the user experience of computational infrastructures.

This talk will be a walkthrough of the features that make Slurm great, using a supercomputing site as an example. All of the introduced interfaces are not specific to the site in question and can be used by the broader community.
After a brief introduction to the workings of a workload manager/scheduler in general and Slurm in particular, we'll go into some of the features and how they open up possibilities for frictionless work with all kinds of use cases, way beyond classical HPC workloads:
- Container integration
- Data staging
- On-demand filesystem provisioning
- On-the-fly job rewriting
- Cluster federation
- and a healthy plugin ecosystem



No previous knowledge of high performance computing or batch processing required.

Back

The Julia programming language

Home

Speaker Bart Janssens
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 13:00 - 13:25
Event link: View original entry

The Julia programming language is a high-level language, primarily developed for scientific computing. It uses just-in-time compilation to get a performance level that is comparable to C/C++. It was designed to overcome the “two-language problem”, where a proof-of-concept in a high-level language needs to be translated to a compiled language by specialists to get the required performance. In this talk, the main features of the language will be highlighted from the perspective of a “convert” coming from C++ and with a focus on scientific programming aspects. As an application, arrays and the work on making parts of the Trilinos library available will be discussed.

In the first part of the talk, a general overview of Julia will be presented. Julia is a typed language, where users can build their own types and write functions that operate on them. In this system, there are no “privileged” types, i.e. user defined types are treated equally to predefined types. A central concept in the system is “multiple dispatch”, where the function that is to be called is decided based on the passed arguments, thus making it possible to overload existing functions for new types. The decision on what function to call can happen both dynamically and at compile time, depending on the information available at compile time. As will be shown, this system, while very simple on the surface, results in surprisingly elegant and fast code.
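As a small illustrative sketch (my example, not taken from the talk): defining a new type, overloading an existing operator for it, and adding methods to a generic function takes only a few lines of Julia:

```julia
# A user-defined type: treated exactly like a built-in type.
struct Point
    x::Float64
    y::Float64
end

# Overload the existing + operator for Point via multiple dispatch.
Base.:+(a::Point, b::Point) = Point(a.x + b.x, a.y + b.y)

# A generic function with methods for different argument types;
# the method is chosen based on the types of all arguments.
norm2(p::Point) = p.x^2 + p.y^2
norm2(x::Number) = x^2

norm2(Point(1.0, 2.0) + Point(2.0, 1.0))  # 18.0
```

When the argument types are known at compile time, the dispatch is resolved statically and the specialized method is JIT-compiled, which is where the C-like performance comes from.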



In the second part, a concrete example, based on my Trilinos.jl package, will illustrate how an existing HPC library can be leveraged from within Julia. The current focus is on making the Belos solvers from Trilinos available, using the Tpetra sparse matrix stack. Views into the C++ data are available using the native Julia array syntax. As a concrete example, a 2D Laplace problem demonstrates low-level linear-system assembly in Julia at the same speed as C++.

Back

Does data security rule out high performance?

Home

Speaker Adam Huffman
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 13:30 - 13:55
Event link: View original entry

Traditionally HPC systems assume they are in a secure, isolated environment and
as many barriers as possible are removed, in order to achieve the highest
possible performance. While these assumptions may still hold for traditional
simulation codes, many HPC clusters are now used for heterogeneous workloads.
Such workloads increasingly involve the integration of input data from a variety
of sources, notably in the life sciences. Scientists are now operating at the
population scale, where datasets are ultimately derived from real people. In
this talk we discuss some of the restrictions placed on the usage of such
datasets, how those restrictions interfere with the goal of high performance
computing, and some alternative strategies that meet the data requirements while
not hobbling the speed of analytical workloads.

HPC systems are by definition optimised to run user codes at the fastest
possible speed. Many of the normal safeguards and security procedures of Linux
systems are removed in furtherance of this goal. For example, firewalls are
often disabled and password-less SSH is usually enabled between nodes. The
parallel filesystems required for high performance often dictate further
security compromises. Normally these systems will be placed on an isolated
network, mitigating the risk to the wider infrastructure. In some commercial
organizations, an entire compute node will be dedicated to a user. Conversely,
the norm in academic clusters is for different users to share the nodes.



As long as the jobs running on these clusters were simulations, or used data
without access concerns, none of the above approaches was problematic. Simple
POSIX permissions were sufficient to provide basic security and isolation. These
assumptions used to hold for life-sciences HPC jobs too, when they operated on
data obtained in vivo or from non-human organisms.



In recent years there have been a variety of efforts to obtain human data, often
at population scale. Examples include UK Biobank, the 100,000 Genomes project,
and The Cancer Genome Atlas (TCGA). A quotation from the latter illustrates the
ambitions of these sorts of initiatives:



"The Cancer Genome Atlas (TCGA) is a comprehensive and coordinated effort to
accelerate our understanding of the molecular basis of cancer through the
application of genome analysis technologies, including large-scale genome
sequencing."



https://cancergenome.nih.gov/



Scientists wishing to use TCGA data need to register and comply with access
policies:



https://wiki.nci.nih.gov/display/TCGA/Access+Tiers



As an example, I facilitated the download of 800TB of TCGA data on eMedLab, and
careful attention was needed to ensure that collaborators who had not signed the
TCGA agreements were not able to see those data.



The 100,000 Genomes project organised by Genomics England only allows analysis
via their strictly controlled 'embassy' system:



https://www.genomicsengland.co.uk/the-100000-genomes-project/data/



https://www.genomicsengland.co.uk/about-gecip/for-gecip-members/data-and-data-access/



UK Biobank, which contains various data sources from 500,000 individuals, also
has strict data access policies:



http://biobank.ctsu.ox.ac.uk/showcase/exinfo.cgi?src=accessingdataguide



In some cases, researchers have to travel to specific locations where there are
physically isolated computers, in order to gain access to data.



Clearly these policies are directly in conflict with the barrier-free approach
that is normal in HPC facilities.



In this presentation we discuss possible approaches to compliance
with data licensing and security requirements, while allowing good performance
for researchers working on increasingly large-scale analyses.

Back

CrateDB: A Search Engine or a Database? Both!

Home

Speaker Maximilian Michels
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 14:00 - 14:25
Event link: View original entry

In this talk, I will give an introduction to CrateDB, its architecture, and demo a few things that people have built with it.

Search engines are databases that specialize in retrieving information from a data corpus. Compared to traditional databases like PostgreSQL, search engines allow you to work with text and other unstructured data very efficiently.



Projects like Xapian and Lucene can perform efficient indexing and querying of large amounts of documents. Projects like Solr and Elasticsearch have added clustering and distributed query execution to scale out the search features.



The most obvious gap between traditional databases and search engines is the query language. Whereas relational databases can typically be queried with SQL, search engines usually implement a custom search API.



At CrateDB, we don’t think you should have to give up SQL just because you’re using search engine features. That’s why we created a fully-functional SQL interface on top of Elasticsearch and Lucene. You get all the benefits of traditional databases, as well as the features of a distributed search engine.
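As a flavour of what that looks like in practice (table and column names are invented for illustration), CrateDB exposes full-text search through a MATCH predicate that composes with ordinary SQL clauses:

```sql
-- Full-text search expressed in plain SQL (hypothetical schema):
-- message_ft is a fulltext-indexed column, _score is the relevance score.
SELECT host, severity, message
FROM logs
WHERE MATCH(message_ft, 'connection timeout')
  AND severity >= 3
ORDER BY _score DESC
LIMIT 10;
```

The same table can be filtered, joined, and aggregated like any relational table, which is exactly the combination the talk is about.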



Do you want to store huge amounts of data and search it in real time? Do you have unstructured and structured data? Do you want to run distributed joins? Do you want to add nodes and scale your cluster horizontally? Do you want to leverage the power of SQL? If so, CrateDB is a great match.

Back

Scaling Deep Learning to hundreds of GPUs on HopsHadoop

Home

Speaker Fabio Buso
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 14:30 - 14:55
Event link: View original entry

Scaling out deep learning training is the big systems challenge for the deep learning community.
Backed by a high performance distributed file system (HopsFS) and the support for GPU sharing and management in the cluster manager (HopsYarn), the HopsWorks platform provides different flavors of Tensorflow-as-a-Service and it offers several possibilities for parallelizing and scaling out deep learning.
In this talk we are going to present how data scientists can use Hops to perform parallel hyperparameter search, or how they can run traditional distributed Tensorflow on a big data cluster with the TensorflowOnSpark framework.
In particular, during the talk, we are going to focus on the latest generation of distributed Tensorflow architectures, which borrow their topology and communication pattern from the HPC field. In the Ring-AllReduce architecture, workers are organized in a ring topology and communicate gradient updates without incurring the communication bottleneck with the parameter server(s) that traditional distributed Tensorflow suffers from. Ring-AllReduce has been used by Facebook and IBM to reduce the training time on Imagenet from 2 weeks to ~45 minutes to 1 hour.
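The idea behind Ring-AllReduce can be sketched in a few lines of plain Python (an illustrative simulation, not the actual Tensorflow/Horovod implementation): each worker's gradient is split into one chunk per worker, partial sums travel around the ring for N-1 steps (reduce-scatter), then the finished chunks circulate for another N-1 steps (all-gather), so each link carries a constant amount of traffic regardless of cluster size:

```python
def ring_allreduce(grads):
    """Sum per-worker gradient vectors over a simulated ring.

    grads: list of N equal-length vectors, one per worker.
    Returns the state of all workers; each ends up with the full sum.
    """
    n = len(grads)
    vals = [list(g) for g in grads]  # vals[w][c]: worker w's copy of chunk c
    # Reduce-scatter: at each step, worker w forwards one partial chunk sum
    # to its right neighbour, which accumulates it.
    for step in range(n - 1):
        sent = [(w, (w - step) % n, vals[w][(w - step) % n]) for w in range(n)]
        for w, c, v in sent:
            vals[(w + 1) % n][c] += v
    # Now worker w owns the fully reduced chunk (w + 1) % n.
    # All-gather: circulate the finished chunks around the ring.
    for step in range(n - 1):
        sent = [(w, (w + 1 - step) % n, vals[w][(w + 1 - step) % n]) for w in range(n)]
        for w, c, v in sent:
            vals[(w + 1) % n][c] = v
    return vals

grads = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]   # three workers, three chunks each
print(ring_allreduce(grads))  # every worker ends with [12, 15, 18]
```

In the real systems the chunks are large gradient tensors and the "send" is a network transfer overlapping with computation, but the communication schedule is the same.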
Finally, we will show how you can recreate the popular game "Quick, Draw!" using HopsWorks and the Tensorflow services provided by the platform.



The code is available on Github at https://github.com/hopshadoop

Back

AI on Microcontrollers

Home

Speaker Neil Tan
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 15:00 - 15:25
Event link: View original entry

The deployment of deep learning technology today is normally limited to GPU clusters, due to its computational requirements. For AI to be truly ubiquitous, its cost and energy efficiency need to be improved. With the recent developments made in algorithms and MCUs, we introduce a deep-learning inferencing framework which runs TensorFlow models on MCU-powered devices (with Mbed). In comparison to GPUs and mobile CPUs, MCU-based devices are much more cost and power efficient. We believe this will open a new paradigm for AI and edge computing.

uTensor is a machine learning framework designed for IoT and embedded systems. Based on Mbed and TensorFlow, it makes it possible to run deep-learning models on an RTOS with a memory requirement of less than 256 kB. Its binary size, ~50 kB, is a desirable trait for most cost-sensitive systems.



In comparison to a typical ML environment today, running on GPUs and APUs, the resource reduction uTensor offers is significant. This is achieved with techniques such as quantization, garbage collection, minimal code dependencies and network-architecture design.
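To illustrate the quantization idea (a generic 8-bit linear scheme sketched in Python for clarity; uTensor's actual kernels differ in detail): 32-bit floats are mapped to 8-bit integers over the tensor's value range, cutting memory use by 4x at the cost of a bounded rounding error:

```python
def quantize(values, bits=8):
    """Map floats onto integers in [0, 2**bits - 1] over [min, max]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2 ** bits - 1)   # size of one quantization step
    q = [round((v - lo) / scale) for v in values]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Recover approximate floats from the integer codes."""
    return [lo + x * scale for x in q]

weights = [0.0, 0.2, 0.6, 1.0]
q, lo, scale = quantize(weights)
print(q)  # [0, 51, 153, 255]
restored = dequantize(q, lo, scale)
# Each restored weight is within half a quantization step of the original.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(weights, restored))
```

For inference this error is usually tolerable, and integer arithmetic is far cheaper than floating point on a Cortex-M class MCU.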



The current framework consists of:



- Operators: quantized kernels where most of the computation takes place
- Tensors: data structure and memory abstraction
- Context: graph definition and resource management
- Code generator: a TensorFlow-to-uTensor exporter (WIP)



The framework serves as an environment to test out the latest ideas in edge computing and, we hope, will eventually empower developers to create the next generation of smart devices. Currently, the framework is able to run a 3-layer MLP on the MNIST dataset with an accuracy of 97.1%. More operators, tools and optimizations are in the works.



This talk aims to share the architecture, design decisions and backstories during the development. We hope the information will help those who wish to contribute to or build similar projects in the future.

Back

Productionizing Spark ML Pipelines with the Portable Format for Analytics

Home

Speaker Nick Pentreath
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 15:30 - 15:55
Event link: View original entry

The common perception of machine learning is that it starts with data and ends with a model. In real-world production systems, the traditional data science and machine learning workflow of data preparation, feature engineering and model selection, while important, is only one aspect. A critical missing piece is the deployment and management of models, as well as the integration between the model creation and deployment phases.



This is particularly challenging in the case of deploying Apache Spark ML pipelines for low-latency scoring, since the Spark runtime is ill-suited to the needs of real-time predictive applications. In this talk I will introduce the Portable Format for Analytics (PFA) for portable, open and standardized deployment of data science pipelines and analytic applications. I will also introduce and evaluate Aardpfark, a library I have created for exporting Spark ML pipelines to PFA, as well as compare it to other open-source alternatives available in the community.
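For a flavour of the format (a minimal document in the style of the PFA specification's introductory examples; this is not Aardpfark output): a PFA scoring engine is a JSON document declaring its input and output types plus an action expression, which makes it easy to ship from a Spark training job to a lightweight scoring service:

```json
{
  "input": "double",
  "output": "double",
  "action": [
    {"+": ["input", 2]}
  ]
}
```

Because the engine is just data, it can be validated, versioned, and executed by any conformant PFA runtime without dragging the Spark runtime into the serving path.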

Back

Accelerating Big Data Outside of the JVM

Home

Speaker Holden Karau
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 16:00 - 16:25
Event link: View original entry

Many popular big data technologies (such as Apache Spark, BEAM, Flink, and Kafka) are built in the JVM, and many interesting tools are built in other languages (ranging from Python to CUDA). For simple operations the cost of copying the data can quickly dominate, and in complex cases can limit our ability to take advantage of specialty hardware. This talk explores how improved formats are being integrated to reduce these hurdles to co-operation.

Many popular big data technologies (such as Apache Spark, BEAM, and Flink) are built in the JVM, while many interesting AI tools are built in other languages, some requiring data to be copied to the GPU. As many folks have experienced, while we may wish to spend all of our time playing with cool algorithms, we often need to spend more of our time on data prep. Having to copy our data slowly between the JVM and the target language of computation can remove much of the benefit of being able to access our specialized tooling. Thankfully, as illustrated in the soon-to-be-released Spark 2.3, Apache Arrow and related tools offer the ability to reduce this overhead. This talk will explore how Arrow is being integrated into Spark, how it can be integrated into other systems, but also its limitations and the places where Apache Arrow will not magically save us.
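In Spark 2.3 the Arrow path is opt-in. A configuration sketch of how it is enabled for Pandas conversion (requires a running Spark 2.3 session with PyArrow installed, so this is illustrative rather than standalone-runnable):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("arrow-demo").getOrCreate()

# Use Arrow's columnar format when converting between Spark and Pandas,
# instead of serializing rows one by one across the JVM/Python boundary.
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

df = spark.range(1_000_000)
pdf = df.toPandas()  # data now crosses the boundary as Arrow record batches
```

The same columnar batches are what make vectorized Pandas UDFs in Spark 2.3 practical; without Arrow each row would pay the copy tax the talk describes.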

Back

Nexmark A unified benchmarking suite for data-intensive systems with Apache Beam

Home

Speaker Ismaël Mejía
Room H.1302 (Depage)
Track HPC, Big Data, and Data Science
Time 16:30 - 16:55
Event link: View original entry

NEXMark is an unpublished research paper that introduced a benchmarking suite for streaming systems. The Apache Beam community implemented (and enhanced) the examples of this paper as a series of benchmarks on top of Beam that can be run on different open source distributed processing engines, e.g. Apache Spark and Apache Flink. This talk discusses this experience and hopes to engage new contributors to bring more ideas, so we can eventually have a unified and semantically rich benchmarking standard for batch and streaming data-intensive systems, à la TPC.

Back

The MySQL Ecosystem - understanding it, not running away from it!

Home

Speaker Colin Charles
Room H.1308 (Rolin)
Track MySQL and Friends
Time 09:10 - 09:35
Event link: View original entry

MySQL is unique in many ways. It supports plugins. It supports storage engines. It is also owned by Oracle, thus birthing two branches of the popular open-source database: Percona Server and MariaDB. It also spawned a fork: Drizzle. You're a busy DBA having to maintain a mix of these. Or you're a CIO planning to choose one branch. How do you go about picking? Supporting multiple databases? Find out more in this talk.

MySQL is unique in many ways. It supports plugins. It supports storage engines. It is also owned by Oracle, thus birthing a branch and a fork of the popular open-source database: Percona Server and MariaDB Server.



You're a busy DBA thinking about having to maintain a mix of these. Or you're a CIO planning to choose one branch over another. How do you go about picking? Supporting multiple databases? Find out more in this talk. Also covered is a deep dive into the feature differences between MySQL, Percona Server and MariaDB Server. Within 20 minutes, you'll leave informed and knowledgeable about what to pick.



A base blog post to get started: https://www.percona.com/blog/2017/11/02/mysql-vs-mariadb-reality-check/

Back

Beyond WHERE and GROUP BY

Home

Speaker Sergei Golubchik
Room H.1308 (Rolin)
Track MySQL and Friends
Time 09:40 - 10:05
Event link: View original entry

We've been writing SQL queries with WHERE, GROUP BY, ORDER BY and HAVING
for decades. But nobody is using DOS 3.2 or Windows 1.0 anymore - why
limit yourself to SQL:86? The latest versions of MariaDB support
features from SQL:99 (common table expressions), SQL:2003 (window
functions), SQL:2011 (system-versioned tables), and SQL:2016 (JSON),
which allow you to build more complex (for example, hierarchical)
data models and write simpler and faster queries.
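To make "hierarchical" concrete, here is a minimal sketch of a recursive common table expression (SQL:99). Python's built-in sqlite3 is used purely so the example is self-contained; the same SQL runs in recent MariaDB:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, manager_id INTEGER);
    INSERT INTO employees VALUES
        (1, 'Ada', NULL),   -- root of the hierarchy
        (2, 'Ben', 1),
        (3, 'Cat', 1),
        (4, 'Dan', 2);
""")

# Recursive CTE: walk the management chain down from the root,
# tracking how deep in the hierarchy each employee sits.
rows = conn.execute("""
    WITH RECURSIVE chain(id, name, depth) AS (
        SELECT id, name, 0 FROM employees WHERE manager_id IS NULL
        UNION ALL
        SELECT e.id, e.name, c.depth + 1
        FROM employees e JOIN chain c ON e.manager_id = c.id
    )
    SELECT name, depth FROM chain ORDER BY depth, name
""").fetchall()

print(rows)  # [('Ada', 0), ('Ben', 1), ('Cat', 1), ('Dan', 2)]
```

Before CTEs, the same traversal needed either one query per level or a stored procedure; here it is a single declarative statement the optimizer can reason about.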

Back

MySQL 8.0 Performance: InnoDB Re-Design

Home

Speaker Dimitri Kravtchuk
Room H.1308 (Rolin)
Track MySQL and Friends
Time 10:10 - 10:35
Event link: View original entry

MySQL 8.0 brings many fundamental changes in InnoDB design.
This talk will cover several of them, including new REDO and CATS.

Back

MySQL 8.0 Roles

Home

Speaker Giuseppe Maxia
Room H.1308 (Rolin)
Track MySQL and Friends
Time 10:40 - 11:05
Event link: View original entry

MySQL 8.0 introduced roles: a new security and administrative feature that allows DBAs to simplify user management and increases security of multi-user environments. The syntax for roles requires some adaptation. This talk will guide users through the intricacies of the new feature.

MySQL 8.0 introduced roles: a new security and administrative feature that allows DBAs to simplify user management and increases security of multi-user environments. Using roles is easy, once you have digested all the documentation. For the uninitiated, though, the approach could be disappointing, and even give the feeling of not working at all. This quick demo will show some examples of how to deal with roles for several scenarios, how to assign roles to users, and how to use them effectively. Since there are several ways of assigning roles, the examples will cover both the roles granted as default and the cases where users can switch from one role to another within a session.
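A minimal sketch of the workflow (role and user names are invented), using the MySQL 8.0 syntax:

```sql
-- Create a role and grant privileges to it.
CREATE ROLE 'app_read';
GRANT SELECT ON app.* TO 'app_read';

-- Grant the role to a user and make it active by default.
CREATE USER 'alice'@'%' IDENTIFIED BY 'secret';
GRANT 'app_read' TO 'alice'@'%';
SET DEFAULT ROLE 'app_read' TO 'alice'@'%';

-- Without a default role, the user must activate it per session,
-- which is the step that surprises the uninitiated:
SET ROLE 'app_read';
SELECT CURRENT_ROLE();
```

The gap between "granted" and "active" roles is the main intricacy the talk addresses: a granted role confers nothing until it is set as default or activated in the session.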

Back

Histogram support in MySQL 8.0

Home

Speaker Øystein Grøvlen
Room H.1308 (Rolin)
Track MySQL and Friends
Time 11:10 - 11:35
Event link: View original entry

In MySQL 8.0, you can create histograms over column values. Histograms will improve the selectivity estimates used by the query optimizer, especially for conditions on columns that are not indexed. This presentation will cover the types of histograms you can create, and discuss best practices for using histograms. The presentation will contain many practical examples of how histograms improve query execution plans.
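The syntax is compact; a sketch with a hypothetical table, using the MySQL 8.0 statements:

```sql
-- Build a 100-bucket histogram on a non-indexed column.
ANALYZE TABLE orders UPDATE HISTOGRAM ON amount WITH 100 BUCKETS;

-- The optimizer now has real selectivity estimates for predicates like:
EXPLAIN SELECT * FROM orders WHERE amount > 1000;

-- Remove the histogram when it is no longer representative.
ANALYZE TABLE orders DROP HISTOGRAM ON amount;
```

Unlike an index, a histogram adds no write overhead; it is a statistics snapshot, so it suits columns that are filtered on but rarely worth indexing.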

Back

Let's talk database optimizers

Home

Speaker Vicentiu Ciorbaru
Room H.1308 (Rolin)
Track MySQL and Friends
Time 11:40 - 12:05
Event link: View original entry

Whenever you write a SQL query, the database query optimizer tries to find the best possible plan to retrieve your data. The question is, what can it do and how does it do it? In this talk we will look at recent developments in query optimizers from the major database providers. With a focus on the MySQL world, we will be looking at ways to "help" the query optimizer to come up with better plans.
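One concrete way to "help" the optimizer in the MySQL world is to inspect the chosen plan and nudge it with optimizer hints; a sketch with invented tables, using MySQL 8.0 hint syntax:

```sql
-- Inspect which plan the optimizer chose.
EXPLAIN FORMAT=JSON
SELECT o.id
FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE c.country = 'BE';

-- Nudge it: force the join order with an inline optimizer hint.
SELECT /*+ JOIN_ORDER(c, o) */ o.id
FROM orders o JOIN customers c ON o.customer_id = c.id
WHERE c.country = 'BE';
```

Hints are scoped to a single statement, so they correct one bad plan without globally disabling an optimization.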

Back

TLS for MySQL at large scale

Home

Speaker Jaime Crespo
Room H.1308 (Rolin)
Track MySQL and Friends
Time 12:10 - 12:35
Event link: View original entry

At the Wikimedia Foundation we aim for perfect privacy of our users. That means not only enforcing TLS (https) between our users and the datacenters but all intermediate steps, including database access.



When you are a top-5 website with hundreds of thousands of queries per second and billions of users, but a very limited budget, that is not easy, especially for MySQL. This is a description of our experience of rolling out encryption, including operational and performance pain points.
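On the server side, enforcing encrypted client connections comes down to a few MySQL options; a minimal sketch (certificate paths are placeholders):

```ini
[mysqld]
# Server certificate chain and key used for TLS connections.
ssl_ca   = /etc/mysql/ssl/ca.pem
ssl_cert = /etc/mysql/ssl/server-cert.pem
ssl_key  = /etc/mysql/ssl/server-key.pem
# Reject any client that does not connect over TLS (MySQL 5.7+).
require_secure_transport = ON
```

The hard part at scale is not this configuration but everything around it: certificate distribution and rotation across thousands of hosts, replication links, and the CPU cost of TLS on hot query paths.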

Back

MySQL InnoDB Cluster

Home

Speaker Miguel Araújo
Room H.1308 (Rolin)
Track MySQL and Friends
Time 12:40 - 13:05
Event link: View original entry

MySQL InnoDB Cluster provides a built-in High Availability solution for MySQL. It tightly integrates MySQL Server, Group Replication, MySQL Router and MySQL Shell providing an easy-to-use full stack solution for HA.



MySQL Shell's main goal is to provide a natural interface for all 'DevOps' tasks related to MySQL, by supporting scripting with development and administration APIs. To allow easy and straightforward configuration and administration of InnoDB Clusters, the Shell provides a scriptable API - the AdminAPI. This API hides the complexity associated with configuring, provisioning and managing everything, without sacrificing power, flexibility or security.



Join this session to understand the key points of MySQL InnoDB Cluster and to learn how to use the Shell and the AdminAPI to configure and manage InnoDB Clusters.

Back

AMENDMENT Why We’re excited about MySQL 8

Home

Speaker Peter Zaitsev
Room H.1308 (Rolin)
Track MySQL and Friends
Time 13:10 - 13:30
Event link: View original entry

There are many great new features in MySQL 8, but how exactly can they help your application? This session takes a practical look at MySQL 8 features and discusses which limitations of previous MySQL versions are overcome by MySQL 8 and what you can do with MySQL 8 that could not have been done before.



Please note that this talk replaces one entitled "Experiences with testing dev MySQL versions and why it's good for you" that was due to have been given by Simon Mudd, who has sent his apologies but wasn't able to deliver it.

Back

MySQL Test Framework for Support and Bugs Work

Home

Speaker Sveta Smirnova
Room H.1308 (Rolin)
Track MySQL and Friends
Time 13:40 - 14:05
Event link: View original entry

MySQL Test Framework (MTR) provides a unit test suite for MySQL. Tests in the framework are written by MySQL Server developers and contributors, and are run to ensure the build is working correctly.



I have found this is not the only thing which can be done with MTR. I regularly use it in my Support job to help customers and verify bug reports.



With MySQL Test Framework I can:






Everything with a single script which can be reused on any machine, any time, with any MySQL/Percona/MariaDB Server version.



In this session I will show my way of working with the MySQL Test Framework, and I hope you will come to love it as I do!

Back

AMENDMENT ProxySQL - GTID Consistent Reads

Home

Speaker René Cannaò
Room H.1308 (Rolin)
Track MySQL and Friends
Time 14:10 - 14:35
Event link: View original entry

This talk is about the very latest development in ProxySQL, including GTID coordination. Breaking news!



Please note that this talk replaces one entitled "Instant ADD COLUMN for InnoDB in MariaDB 10.3+ " that was due to have been given by Valerii Kravchuk, who has sent his apologies but is now unable to attend.

Back

Turbocharging MySQL with Vitess

Home

Speaker Sugu Sougoumarane
Room H.1308 (Rolin)
Track MySQL and Friends
Time 14:40 - 15:05
Event link: View original entry

Vitess has been in development since 2010, and has recently started gaining traction in the community. In this session, we'll cover the three major problems it solves: protecting MySQL instances, moving to the Cloud, and scaling indefinitely.

Back

Orchestrator on Raft: internals, benefits and considerations

Home

Speaker Shlomi Noach
Room H.1308 (Rolin)
Track MySQL and Friends
Time 15:10 - 15:35
Event link: View original entry

Orchestrator uses Raft consensus as of version 3.x. This setup improves the high availability both of the orchestrator service itself and of the managed topologies, and allows for easier operations.



This session will briefly introduce Raft, and elaborate on orchestrator's use of Raft: from leader election, through high availability, cross DC deployments and DC fencing mitigation, and lightweight deployments with SQLite.



Of course, nothing comes for free, and we will discuss considerations to using Raft: expected impact, eventual consistency and time-based assumptions.



orchestrator/raft is running in production at GitHub, Wix and other large and busy deployments.

Back

MyRocks roadmaps and production deployment at Facebook

Home

Speaker Yoshinori Matsunobu
Room H.1308 (Rolin)
Track MySQL and Friends
Time 15:40 - 16:05
Event link: View original entry

We recently finished migrating from InnoDB to MyRocks in our user database (UDB) at Facebook. We have been running MyRocks in production for a while and we have learned several lessons. In this talk, I will share several interesting lessons learned from production deployment and operations, and will introduce future MyRocks development roadmaps.

Back

ProxySQL's internal: implementation details to handle millions of connections and thousands of servers

Home

Speaker René Cannaò
Room H.1308 (Rolin)
Track MySQL and Friends
Time 16:10 - 16:35
Event link: View original entry

ProxySQL is a protocol-aware reverse proxy for database servers speaking the MySQL protocol, ranging from standalone MySQL/MariaDB/Percona Server, to clustering solutions like Galera/PXC and Group Replication, to cloud platforms like RDS and Aurora. It is designed to handle millions of distinct users, millions of connections, and thousands of servers.
In this session we will cover the internals that allow it to efficiently handle the traffic of large scale-out MySQL deployments. Specifically we will cover:
- threading model and connections handling
- non-blocking, async network I/O
- state machine related to session tracking and management
- traffic routing
- backends monitoring

Back

MySQL Point-in-time recovery like a rockstar!

Home

Speaker Frédéric Descamps
Room H.1308 (Rolin)
Track MySQL and Friends
Time 16:35 - 17:00
Event link: View original entry

Point-in-time recovery can take a very long time when a large amount of binary logs must be replayed.
During this session I will show how this can be accelerated without using any special external tool, and how we can benefit from MySQL replication improvements even on a stand-alone server.
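For reference, the classic single-threaded approach replays the binary logs with mysqlbinlog after restoring the last full backup; a sketch (file names and the cut-off timestamp are placeholders) of the step the talk shows how to speed up:

```shell
# Replay binary logs from the backup position up to just before
# the incident, piping the decoded events into the server.
mysqlbinlog --start-position=4 \
            --stop-datetime="2018-02-03 11:59:59" \
            binlog.000042 binlog.000043 | mysql -u root -p
```

Because this pipe applies events serially in one session, replaying days of busy binlogs can take hours, which is exactly the bottleneck replication-based tricks can remove.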

Back

Build your own Skype... in the browser

Home

Speaker Steven Goodwin
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 09:00 - 09:20
Event link: View original entry

In this session, Steven Goodwin will practically demonstrate the various "moving parts" necessary to build a WebRTC application, by creating one live on stage. It will provide a better understanding of how WebRTC applications work under the hood and what the API provides, and serve as a guide to what is (and what is not) possible for developers to deliver using WebRTC, and what other technologies are needed for a full-blown solution.

The talk covers:






It also details the business-logic cases of how to take a basic WebRTC app and build it into a purely online VoIP product, such as a call centre, thereby demonstrating the extra work required to work with WebRTC, beyond the promotional blurb.

Back

Writing a Janus plugin in Lua

Home

Speaker Lorenzo Miniero
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 09:25 - 09:40
Event link: View original entry

Janus is written in C, and so are its plugins. That said, we recently implemented a Janus Lua plugin that allows developers to use Lua scripts to drive the media/application logic instead.

Janus is a general purpose and modular WebRTC server. It can be easily extended by writing new plugins, whether for new transports for the Janus API, or for handling the media related to a PeerConnection in a custom way. Out of the box, Janus ships several plugins implementing different features (e.g., an SFU, an audio MCU, a SIP gateway, etc.), but developers interested in adding custom functionality can write their own plugins to handle media their own way. That said, Janus is written in C, which means that plugins are typically expected to be written in C as well. This can be an impediment for those not proficient with C as a programming language. As such, we recently released a new Janus plugin that acts as a bridge between the C code and Lua scripts: developers can write their logic in Lua, while leaving the media manipulation and routing to C helper methods that are exposed as Lua functions scripts can take advantage of. This presentation will introduce this plugin, how it works, and how it can be used to write custom logic with different Lua scripts: a couple of real use cases will be presented as a proof of concept.

Back

XMPP as the road to innovation

Home

Speaker Bartłomiej Górny
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 09:45 - 10:05
Event link: View original entry

Contrary to basic logic and common misconceptions, the “X” in XMPP does not stand for “eXtra” or “eXtremely awesome” but for “eXtensible”. This actually tells you a lot about a protocol built to provide a reasonable, solid set of basic features, but also a platform for innovative solutions tailored to a project’s needs.
The extension mechanism is the heart of this philosophy and also the focus of this talk. We will explore real-life use cases and scenarios where extended XMPP served as a base for a dedicated solution - both for commercial and open-source projects. We will examine different functionalities and components (e.g. inbox, MUC Light, token-based reconnection), share experiences, discuss best practices and dissect it all in the MongooseIM platform.
This talk is not going to take long. As you know, extending XMPP is kinda easy.

Back

Kamailio - Pick Your SIP Routing Scripting Language

Home

Speaker Daniel-Constantin Mierla
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 10:10 - 10:30
Event link: View original entry

Kamailio is an open source SIP server that uses a scripting language for its configuration file to enable flexibility in deciding the routing of SIP messages. Starting with version 5.0, besides its native scripting language, Kamailio allows writing the routing logic in several other programming languages, such as Lua, JavaScript, Python and Squirrel. This presentation aims to reveal the benefits and drawbacks of using each of these scripting languages for building scalable real-time communication systems and services.

Real time communications have evolved at a high pace in the last decade, no longer being an isolated service used primarily for voice and text sessions. Connectivity with other social media channels, integration of artificial intelligence, and interaction with smart home or work environments require a lot more flexibility inside the SIP routing engine. The current version of Kamailio tries to cover everything from the fast-routing needs of a performant load balancer, with its self-designed scripting language, to the rapid prototyping and development of innovative features, with a new scripting framework that allows leveraging the large set of extensions and libraries offered by languages such as Lua, JavaScript or Python.



This presentation focuses on the Kamailio Embedded Interface (KEMI) framework, going through the supported scripting languages and their benefits and drawbacks. The door is now open for people familiar with these programming languages, no longer only C programmers, to write Kamailio extensions, diversifying the pool of developers that can join the project's community.

Back

Asterisk Project: Do I see video in the future?

Home

Speaker Matthew Fredrickson
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 10:35 - 10:50
Event link View original entry

Haven't heard what's new in the world of Asterisk, or missed AstriCon this last year? This is your opportunity to get filled in on what's happening and figure out how you can utilize new Asterisk features in your network. The plan is to cover what's happened since the last major release of Asterisk and what will be going into the next one.

This talk covers topics ranging from what's happened with the project in the last year to what is on the horizon for the next big release of Asterisk. It should cover what's been introduced into Asterisk since 14.0.0 was released, all the way up to what will (probably) be going into the 16 release. Prepare to be amazed!

Back

Speech-to-Text in Jitsi Meet

Home

Speaker Nik Vaessen
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 10:55 - 11:10
Event link View original entry

In this talk I want to present my work for GSoC 2016 and 2017, which involved integrating speech-to-text APIs into Jitsi Meet. The goal was to provide real-time subtitles for the hearing-impaired, as well as a way to deliver a transcript afterwards.

Back

webPh.one - connect community cellular networks using WebRTC and PWA

Home

Speaker Stefan Sayer
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 11:15 - 11:35
Event link View original entry

How do we, in 2017, create a mobile, cross-platform app that connects people on the internet to community GSM networks for calls and texts, with a small team and limited time? SayCel and Rhizomatica, together with Altermundi and other individual contributors, have created an open source WebRTC dialer as a Progressive Web App, plus a WebRTC gateway, to solve this not-so-trivial problem.

Community cellular (GSM) networks usually connect to the PSTN through SIP trunks carried over an Internet backhaul. For outgoing calls, PSTN termination costs have to be paid; for incoming calls, the caller usually has to tediously enter the extension after connecting to a dial-in number; and it is not possible to directly send text messages. This project uses WebRTC and Progressive Web App (PWA) technologies, implemented with the open source Kamailio+rtpengine WebRTC gateway, SEMS, and an Angular app, to connect people around the world on their smartphones and laptops directly to the users of the community cellular network for calls and texts. The open source webPh.one dialer is also an interesting technology base for other projects that need an app to connect to a SIP network. The talk gives a short intro to the two community cellular networks that have been connected (17 villages in Oaxaca, Mexico; PearlCel in Nicaragua), the problems that had to be solved and the technologies used in doing so, and the solution this project created. It also shares our experience of where the limits of a pure PWA lie today, and shows our efforts to create docker-compose images for the infrastructure part.



About Rhizomatica: Through efforts around the world, Rhizomatica uses new information and communication technologies, especially mobile telephony, to facilitate community organization and personal and collective autonomy. Rhizomatica's approach combines regulatory activism and reform, development of decentralized telecommunications infrastructure, direct community involvement and participation, and critical engagement with new technologies. Rhizomatica works with TIC AC in Mexico, operating community communications networks of 17 indigenous villages in Oaxaca.



About SayCel: SayCel is a research and development company dedicated to creating communications and infrastructure solutions for developing communities. SayCel currently works with local governments on Caribbean Coast of Nicaragua to increase the mobile communications services in the region. SayCel installs wifi data backhaul, rural fiber, and ethernet over coax. SayCel provides maintenance and training and in return, local governments are able to have a sustainable communications utility that they can use to lower the cost of communication for citizens, increase efficiency in their local government process, improve security through a local 911 service, and assist in growing their local economies. SayCel was supported by NYU's RiskEcon Lab and UNICEF Innovation Fund for the development of this project.

Back

Kids and Schools and Instant Messaging

Home

Speaker Dominik George
Niels Hradek
Philipp Stahl
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 11:40 - 12:00
Event link View original entry

We want to discuss experiences, issues, and ideas concerning the use of free/open IM services among children and in education.

Instant messaging is the primary means of communication children and adolescents use in their free time, and it is becoming more important in education as classmates (or even worse, teachers) create chatrooms for their classes on non-free, privacy-unaware services. Teckids e.V. is tackling this issue in Germany, and we want to share our experiences in doing so.



What makes children use IM?
What do they want or need?
What are the difficulties in switching to a free/open service?
What tools did we try, and how do they fit in?



We also want to listen to other people's experiences, and share thoughts on what developers of tools might need to take into account in order to make existing tools ready for use by children and in education.

Back

OpenDHT: make your project distributed

Home

Speaker Adrien Beraud
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 12:05 - 12:25
Event link View original entry

OpenDHT provides a simple way to build distributed software, offering an easy-to-use but powerful API in C++11 and Python 3. We will present OpenDHT and its possibilities for easily building fully or partially distributed software. We will also present new OpenDHT features and use cases, and discuss future developments.

OpenDHT is a Kademlia distributed hash table (DHT) library written in C++11, used by the Ring distributed communication platform. It is published under the GPL licence, version 3 or higher (GPLv3+).



We will present OpenDHT and its API, and show how you can use it to build distributed applications.



We will present the new OpenDHT features added since last year, such as the newly implemented DDoS protection strategies, the easy-to-use systemd service, and the REST API for using OpenDHT from web apps.



Finally, we will discuss the future of OpenDHT and upcoming features.
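As a hypothetical illustration of the Kademlia design underlying OpenDHT (not OpenDHT's actual API; the names below are made up for the sketch), the XOR metric that decides which nodes are responsible for a key can be expressed in a few lines of Python:

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 160-bit identifier, as Kademlia-style DHTs do for nodes and keys."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance metric: the bitwise XOR of two identifiers."""
    return a ^ b

def closest_nodes(key: str, nodes: list[str], k: int = 2) -> list[str]:
    """The k nodes 'closest' to a key's hash are responsible for storing its values."""
    target = node_id(key)
    return sorted(nodes, key=lambda n: xor_distance(node_id(n), target))[:k]

nodes = ["node-a", "node-b", "node-c", "node-d"]
print(closest_nodes("my-key", nodes))
```

In a real DHT each node only knows a small routing table, but the same metric lets any node iteratively walk toward the nodes responsible for a key.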

Back

Open communication in WebVR with Matrix!

Home

Speaker Matthew Hodgson
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 12:30 - 12:50
Event link View original entry

VR/AR has a huge problem: there isn't any standard way to communicate with other people. This talk will demo how Matrix can be used as an open signalling layer for establishing WebRTC voice, video and 3D video calls in WebVR on anything from a Cardboard to a Vive: providing a decentralised communication ecosystem to build an open metaverse!

One of the most fun areas Matrix.org has worked on this year has been adding interoperable WebRTC calling to WebVR using matrix-js-sdk, A-Frame and the latest WebVR browser support and hardware (Cardboard, Rift & Vive). Rather than communication in WebVR being fragmented into different silos and websites, we hope that Matrix will provide an open decentralised ecosystem to power communication within a decentralised metaverse of WebVR environments. We'd like to show off our demo of full-mesh end-to-end encrypted videoconferencing in WebVR, as well as interoperability with the rest of the Matrix ecosystem. Finally, we're hoping to demonstrate some of our work in transmitting 3D camera depth-buffer and mesh data over WebRTC media & datachannels in order to bring 3D video calling in VR to life!

Back

Scaling messaging systems

Home

Speaker Michał Piotrowski
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 12:55 - 13:15
Event link View original entry

When it comes to messaging servers, scalability is key, and knowing how to fit in more users while achieving better latency is the most important trick in your bag. In this talk we're going to explore different ways of scaling an open source XMPP server - MongooseIM.

XMPP servers can handle hundreds of users basically out of the box. Really good ones can easily go to a few thousand. The fun starts when your app gets traction and all of a sudden you have to handle tens of thousands, then hundreds of thousands, and in the blink of an eye - millions of online users.
MongooseIM is built with scalability in mind for exactly that case: your app starts small and then grows to match your imagination. Let's explore the many ways in which you can make it scale. We will cover the impact of using bigger machines, different clustering techniques, and the benefits and challenges of multi-cluster setups. All those options have their traps and limitations, which we will explore in detail. You will also learn what we do on a daily basis to monitor, maintain and improve MongooseIM's scalability.

Back

aiosip: the efficient swiss-army knife of SIP

Home

Speaker Ludovic Gasc
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 13:20 - 13:40
Event link View original entry

In the SIP world, you mainly have B2BUAs (Asterisk, FreeSWITCH...) and proxies (Kamailio, OpenSER...).
But contrary to the HTTP world, there are few implementations in a pure high-level language like Python or Ruby.



With several concrete examples in testing, benchmarking, and call control (uaCSTA), we hope to show the value of being able to reuse a programming language's ecosystem.

Our goal with aiosip isn't to re-implement a full-monty SIP proxy like Kamailio, nor a B2BUA like Asterisk, but to show that building custom SIP dialogs can be easy.



The main benefit of having a pure implementation in a specific programming language is that, if you are already developing in that language, it becomes much easier to manipulate and modify SIP packets.
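To illustrate that point, here is a toy sketch (not aiosip's API; the parser and message below are invented for illustration) of how naturally a high-level language handles a SIP message:

```python
def parse_sip(raw: str) -> dict:
    """Split a SIP message into start line, headers and body.
    Toy parser: no header folding, multi-value handling or error checking."""
    head, _, body = raw.partition("\r\n\r\n")
    lines = head.split("\r\n")
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {"start_line": lines[0], "headers": headers, "body": body}

msg = ("OPTIONS sip:bob@example.org SIP/2.0\r\n"
       "Via: SIP/2.0/UDP client.example.org:5060;branch=z9hG4bK776asdhds\r\n"
       "From: <sip:alice@example.org>;tag=1928301774\r\n"
       "To: <sip:bob@example.org>\r\n"
       "Call-ID: a84b4c76e66710\r\n"
       "CSeq: 63104 OPTIONS\r\n"
       "Content-Length: 0\r\n"
       "\r\n")

parsed = parse_sip(msg)
print(parsed["start_line"])          # OPTIONS sip:bob@example.org SIP/2.0
print(parsed["headers"]["call-id"])  # a84b4c76e66710
```

Once the message is an ordinary dictionary, rewriting a header or crafting a custom dialog is just everyday Python, which is exactly the ergonomic advantage the talk argues for.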

Back

Building a WebRTC gateway

Home

Speaker Julien Chavanton
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 13:45 - 14:05
Event link View original entry

Gatewaying calls using WebRTC on the server side, to bridge calls to traditional VoIP (including transcoding).

After experimenting and reaching the point where we can send and receive audio samples from a running native app, it seems feasible to use WebRTC to create a gateway that bridges calls.
Using the native API, this can be done without forking, by using a WebRTC audio device module. We can already demo the work in progress, using an AudioDeviceModule to play and record.



The second leg of the call bridge could potentially be implemented using WebRTC as well, by using the API to disable RTCP multiplexing and DTLS and to control ICE.
Alternatively, MediaStreamer2/oRTP could be used.



The signaling gateway part will be rudimentary, and an example using a provided rudimentary Kamailio module is included.

Back

Whisper and Swarm Protocol for RTC

Home

Speaker Nick (ethereumnick)
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 14:10 - 14:30
Event link View original entry

Whisper is the "plausibly deniable routing" protocol within Ethereum. We will outline its intended use cases, its advantages, its topology, and the uses to which it is being put today.



When it comes to peer-to-peer and person-to-person communication in an untrusted network, cryptoeconomically incentivised and decentralised systems are a viable alternative to altruistic services like Tor or proprietary systems like WhatsApp.



The Swarm protocol (which is largely designed for the storage and dissemination of larger amounts of data) contains PSS (Postal Service over Swarm), with the following goals:



Enable messaging between nodes that are not directly connected through IP.
Allow full, partial and no disclosure of addresses of communicating nodes. ("luminosity")
Asymmetric and symmetric encryption using ephemeral keys.
Transparent implementation of devp2p protocols over pss.
Decentralized storage of undelivered messages. (mailserver)
Create a fully decentralized end-user messaging platform that's end-to-end encrypted.


Status is an Ethereum light client for Android and iOS (and soon for desktop) which uses Whisper for text-based chat applications. Conceived as a mobile OS for Ethereum, Status now combines a messenger, a browser and a chatbot interface that can act as a chat-like command-line tool. Status is a user-friendly, privacy-respecting gateway for everyday smartphone users to begin consuming, routing and serving Ðistributed Applications on Ethereum.

In the serverless model that Ethereum uses, one method that distributed applications can use for machine-to-machine communication is Ethereum's Whisper protocol.



Whisper may be the appropriate protocol if your use case:
Needs to preserve a level of anonymity or plausible deniability for message originators, recipients, or both.
Involves ÐApps that need to coordinate before sending a final transaction; for example, a decentralised market or token exchange where two clients may need to settle a deal before actually initiating a transaction, or sensors of some kind aggregating data or updating one another.
Publishes small amounts of information that don't need to persist but instead live for a limited amount of time, from minutes to several days. If your data needs to live longer than that, nodes can be incentivised (from outside the protocol), for example using a token model, to store and forward messages.
Propagates time-bound but non-time-critical updates to n recipients; for instance, tweets, weather updates, traffic reports, IoT metrics, etc.



Some specifics of the Whisper protocol are:



The API is only exposed to contracts, never to user accounts.
Low-bandwidth: Only designed for smaller data transfers.
Unpredictable latency: Not designed for real-time communication of data.
1-1 or 1-N communication.
PoW (spam throttling) in Whisper



  Whisper messages (envelopes) undergo a process called "sealing", which essentially requires users to spend computational resources prior to sending a message. Internally, sealing involves repeatedly hashing the contents of the message to find the smallest possible hash value.
What this means for you as a developer is that if you want your messages to have priority over others on the network, you'll need to spend more computational resources "sealing" them. You can specify this through the `work` parameter when posting a new message. The more work you perform locally, the faster your message will propagate through the network.
Note that there is no way to specify exact latency times; you can only estimate based on how much work you perform during the "sealing" process.
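As a rough sketch of the general idea, not Whisper's exact sealing algorithm (the 8-byte nonce, SHA-256 and leading-zero-bit target below are assumptions made for illustration), sealing amounts to a small proof-of-work search:

```python
import hashlib

def seal(payload: bytes, work: int) -> tuple[int, bytes]:
    """Search nonces until the envelope hash has `work` leading zero bits.
    More work -> smaller hash -> higher priority on the network."""
    target = 1 << (256 - work)  # the hash must fall below this bound
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
        nonce += 1

nonce, digest = seal(b"hello whisper", work=12)
print(nonce, digest.hex()[:16])
```

Each extra bit of work doubles the expected number of hash attempts, which is why spending more local CPU time is what buys a message priority.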


TTL



TTL is the amount of time your messages live on the network, specified in seconds.
Filtering
Whisper is low-level, meaning that it is identity-based instead of application-based. Messages can therefore be sent directly to another participant using the recipient's public key. Listeners, on the other hand, can filter messages by specific senders or specific topics.
With web3, you can register a filter to listen for Whisper messages by topic or sender.



Swarm



Whisper in action for RTC.



Case study - Status.im
There is a clear trend towards mobile first computing and chat applications are the dominant medium for personal communication on mobile. To provide a decentralised and distributed infrastructure that is private, robust and sustainable (even in a hostile network) requires the balancing of incentives between ‘providers’ and ‘consumers’.



Status uses a token model to incentivise the existence of nodes that store and forward Whisper messages for subscribers, who fund their data acquisition through micropayments of ERC20 tokens.



Status Road Map

Back

The RTP bleed and what can we do?

Home

Speaker Peter Lemenkov
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 14:35 - 14:55
Event link View original entry

Back

Real Time Clustering with OpenSIPS

Home

Speaker Razvan Crainea
Liviu Chircu
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 15:00 - 15:15
Event link View original entry

An in-depth view of the feature set of the upcoming OpenSIPS 2.4 clustering support (release due March 2018!). Built with targets in mind such as flexibility, robustness, stability and ease-of-use while also being easy to provision and reuse by higher-level modules, the latest OpenSIPS cluster is better than ever!

Join this talk if you want to get a good understanding of what the OpenSIPS clustering support is and how it works. Starting with the architecture and requirements of the communications system as a whole, the presentation will continue with the runtime behavior of the cluster. This will teach you what to expect from an OpenSIPS cluster, how to dynamically resize it, and how it behaves in the eventuality of cross-site network link failures.



In the second half, we will dissect a natural use case for the clustering support: a distributed SIP user location service. There are a couple of interesting problems associated with this type of service (NAT traversal, SIP contact pinging, etc.), all of which become a lot more straightforward to solve using an underlying clustering engine.

Back

HOMER 7

Home

Speaker Lorenzo Mangani
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 15:20 - 15:35
Event link View original entry

HOMER 7 is the latest generation of our FOSS RTC and VoIP capture framework, focused on integration, modularity and multi-protocol support.

HOMER 7 is the latest generation of our FOSS RTC and VoIP capture framework, focused on integration, modularity and custom protocol support.
It ships statistics and custom detections via direct integration with Elasticsearch, InfluxDB, Prometheus, Graylog and many more.
It can handle multiple protocols, including custom ones, using the new dynamic JSON HEP types and the auto-mapping and indexing features of the new Node.js backend.
Modularize your setup to provide exactly the functionality you need and distribute the capture overhead.

Back

Using CGRateS as online Diameter/Radius AAA Server

Home

Speaker Dan Christian Bogos
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 15:40 - 15:55
Event link View original entry

Diameter and RADIUS are protocols heavily used by operators in today's communication networks (LTE, WiFi, etc.).
In this talk, Dan will review the CGRateS architectural components needed to create a complete and generic Diameter/RADIUS authorization and accounting server solution.
CGRateS is a battle-tested enterprise billing suite with support for various prepaid and postpaid billing modes.

Back

SIP based group chat with Linphone

Home

Speaker Simon Morlat
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 16:00 - 16:20
Event link View original entry

For many years, Linphone has been one of the most active free communication software projects. Originally focused on voice, additional functionalities such as video, instant messaging and presence were rapidly added. From the beginning, Linphone has followed IETF standards for both media and signaling. On the signaling side, Linphone implements many SIP-based RFCs for call establishment, presence and instant messaging. Today, group chat is widely available in most popular communication applications, especially in the closed-source world. As a free SIP communication application, Linphone aims to provide a free alternative for group communication.



In the SIP world, group chat is handled as a particular case of real-time group communication. The basic RFC is RFC 4353 (https://tools.ietf.org/html/rfc4353); Rich Communication Services (RCS) endorses many conferencing RFCs to specify how group functions shall be implemented. For Linphone, we decided to follow the same path, while always keeping in mind to avoid complex development that does not bring essential functionality. The resulting implementation can be described as "ad-hoc pager-mode conferencing", with the idea of a long-term conference leveraging SIMPLE IM for message transport instead of MSRP.



This discussion will focus on the interpretation we made of existing SIP standards, the implementation challenges, and future extensions.

Back

Fundraising and Crowdfunding for FreeRTC

Home

Speaker Daniel Pocock
Room H.1309 (Van Rijn)
Track Real Time Communications
Time 16:25 - 16:45
Event link View original entry

Which projects would like to raise funds? What are the most important things you could achieve if people donate? Could we share the effort of running a crowdfunding campaign?

Back

Introduction to Swift Object Storage

Home

Speaker Thiago da Silva
Room H.2213
Track Software Defined Storage
Time 09:00 - 09:40
Event link View original entry

Object storage is a "relatively" new storage architecture widely used in cloud computing. This talk will introduce object storage concepts and use cases, and how they differ from block and file storage. As an example, we will provide an overview of OpenStack Swift, its main features, and what's in the works for future releases. Swift is a highly available, distributed, eventually consistent object store. It is used at organizations like Wikipedia, OVH, eBay and many more across the globe to store lots of data efficiently at massive scale. It provides a RESTful API for data access, making it ideal for use with web applications. This talk will also include a demo of Swift in use.
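As a small sketch of that RESTful data path, every Swift object is addressed as /v1/&lt;account&gt;/&lt;container&gt;/&lt;object&gt; and requests carry an X-Auth-Token header. The endpoint, account and token below are hypothetical values, and the helper itself is illustrative, not part of any Swift client library:

```python
def swift_object_request(endpoint: str, account: str, container: str,
                         obj: str, token: str) -> tuple[str, dict]:
    """Build the URL and headers for a Swift object GET/PUT.
    Swift addresses every object as /v1/<account>/<container>/<object>
    and authenticates via the X-Auth-Token header."""
    url = f"{endpoint}/v1/{account}/{container}/{obj}"
    headers = {"X-Auth-Token": token}
    return url, headers

# Hypothetical endpoint, account and token, for illustration only.
url, headers = swift_object_request(
    "https://swift.example.com", "AUTH_demo", "photos", "cat.jpg", "secrettoken")
print(url)  # https://swift.example.com/v1/AUTH_demo/photos/cat.jpg
```

With any ordinary HTTP client, a GET of that URL with those headers downloads the object and a PUT with a request body uploads it, which is what makes Swift easy to embed in web applications.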


Back

Gluster-4.0 and GD2

Home

Speaker Kaushal M
Room H.2213
Track Software Defined Storage
Time 09:45 - 10:25
Event link View original entry

The new Gluster-4.0 release with GlusterD-2.0 makes Gluster easier to scale, manage, integrate and develop for. Learn how this benefits users and developers.

Gluster is a free and open source scalable network filesystem. Gluster can be installed and easily configured on commodity hardware to provide a general-purpose, POSIX-compliant storage system. Gluster can be accessed from anywhere using the traditional NFS and SMB protocols, or with the native Gluster client. A more complete overview of Gluster and its capabilities can be found here.



Gluster-4.0 is the new upcoming major release of GlusterFS, planned for very early 2018. Gluster-4.0 brings in many new changes and features, the biggest being GlusterD-2.0.



GlusterD is the native distributed management framework of Gluster and a core part of the Gluster experience. GlusterD-2.0 is a ground-up reimplementation of the management framework for Gluster-4.0 that is more scalable, easier to use and easier to integrate with than before.



This presentation will provide attendees with an overview of Gluster, Gluster-4.0 and GlusterD-2.0.
New users will get to know what Gluster is and how it can help with their storage needs.
Existing users will get to know what to expect when upgrading to Gluster-4.0, and how to prepare for it.
Developers will get to know how Gluster-4.0 and GD2 make it easier to develop for and integrate with Gluster.

Back

LizardFS - a year in development

Home

Speaker Michal Bielicki
Room H.2213
Track Software Defined Storage
Time 10:30 - 11:10
Event link View original entry

Community update and roadmap for the next 12 months, including an interactive presentation of the most important new features.

LizardFS is a fault-tolerant, distributed, parallel and easy-to-use POSIX file system.



LizardFS has had a massive push in the last 12 months.






The presentation covers what was used to implement those new features and how you can make use of them.
There will be time for questions, and we will try to answer them all.



At the end we will present the plans for the next 12 months of development.

Back

Geographically distributed Swift clusters

Home

Speaker Alistair Coles
Room H.2213
Track Software Defined Storage
Time 11:15 - 11:55
Event link View original entry

The OpenStack Swift object storage service achieves high availability and durability by replicating object data across multiple object servers. If a disk or object server fails, its data is still available on other object servers. Swift applies this principle not just to disks and object servers but also to independent availability zones and regions. This enables the deployment of so-called Global Clusters, which provide a single object namespace spanning multiple geographically dispersed data-centres, each offering independent local access to object data.



This talk will describe some of the mechanisms that Swift provides for configuring and optimising Global Clusters. We will briefly describe how Swift's consistent hashing Ring maps objects to object servers and how that mapping algorithm can be configured to distribute copies of each object across data-centres. We will show how Swift's read and write affinity settings can be used to optimise WAN traffic in a Global Cluster. Finally we will discuss some of the challenges we faced when implementing Global Cluster support for erasure coded objects, and how those were overcome by enhancements to the Ring and the erasure coding write path.
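The consistent-hashing idea behind Swift's Ring can be sketched in a deliberately simplified way (the real Ring also uses a per-cluster hash suffix, weighted devices, and replica placement that is zone- and region-aware; the round-robin assignment below is an illustrative assumption, not Swift's algorithm):

```python
import hashlib

def ring_partition(object_path: str, part_power: int = 8) -> int:
    """Map an object path to one of 2**part_power partitions by hashing it,
    as Swift's Ring does (simplified)."""
    digest = hashlib.md5(object_path.encode()).digest()
    return int.from_bytes(digest[:4], "big") >> (32 - part_power)

def primary_nodes(partition: int, nodes: list[str], replicas: int = 3) -> list[str]:
    """Toy placement: assign each partition `replicas` distinct nodes.
    The real Ring spreads replicas across failure domains (zones/regions)."""
    return [nodes[(partition + i) % len(nodes)] for i in range(replicas)]

nodes = ["dc1-obj1", "dc1-obj2", "dc2-obj1", "dc2-obj2"]
part = ring_partition("/AUTH_demo/photos/cat.jpg")
print(part, primary_nodes(part, nodes))
```

Because the object-to-partition mapping is fixed by the hash, only the partition-to-device assignment needs to change when hardware is added, which is what lets a Global Cluster place one replica of each partition in each data-centre.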

Back

Container Attached Storage (CAS) with OpenEBS

Home

Speaker Jeffry Molanus
Room H.2213
Track Software Defined Storage
Time 12:00 - 12:40
Event link View original entry

The OpenEBS project has taken a different approach to storage when it comes to containers. Instead of taking existing storage systems and making them work with containers, what if you were to redesign storage from scratch using the same paradigms used in the container world? This resulted in the effort to containerize the storage controller. Also, as the applications that consume storage are changing, do we still need scale-out distributed storage systems?




Also, as we in effect have a storage instance per containerized application, what other benefits could we get? Is there something we can do now that we have a one-to-one mapping between application and controller? We like to think so, and will go over a couple of examples.



Finally, how can we make enterprise-class storage features (i.e., snapshots, clones, compression, replication, etc.) available in a typical container in such a way that we do not depend on OS or cloud-provider specifics?

Back

Debugging A Live Gluster File System Using .meta Directory

Home

Speaker Rafi KC
Room H.2213
Track Software Defined Storage
Time 12:45 - 13:10
Event link View original entry

Meta is a client-side xlator that provides an interface similar to the Linux procfs for GlusterFS runtime and configuration information. The contents are provided via a virtual hidden directory called .meta, located in the root of the GlusterFS mount.

.meta is an efficient way to extract information from a live running mount process without using gdb, SystemTap or similar tools. It requires no prior knowledge of any tooling; the information is presented as a directory/file tree structure. For users, this is the simplest way to start looking into complicated problems, and developers can get information without disturbing the cluster.



I will be covering the following topics:
* What the .meta layer is and its current state
* Information that can be fetched through the .meta directory
* Debugging with the .meta directory (for both developers and users)
* How to debug graph-related issues
* Latency and mallinfo
* FOP history and FUSE history using meta
* Currently running frames and pending frames
* Enhancements planned for the meta layer
* Other troubleshooting options like statedump, io-stats, etc.

Back

Ceph management with openATTIC

Home

Speaker Kai Wagner
Room H.2213
Track Software Defined Storage
Time 13:15 - 13:55
Event link View original entry

openATTIC is an Open Source Management and Monitoring System for the Ceph distributed storage system.



Various resources of a Ceph cluster can be managed and monitored via a web-based management interface. It is no longer necessary to be intimately familiar with the inner workings of the individual Ceph components.



Any task can be carried out by either using openATTIC’s clean and intuitive web interface or via the openATTIC REST API.



openATTIC itself is stateless - it remains in a consistent state even if you make changes to the Ceph cluster's resources using external command-line tools.



If you're interested in the dramatic changes and improvements we made since last year and you want to take a look at the newest version of openATTIC, this is the right talk for you.


Back

Developing applications with Swift as Storage System

Home

Speaker Christian Schwede
Room H.2213
Track Software Defined Storage
Time 14:00 - 14:40
Event link View original entry

Swift comes with a wide variety of API features for building extremely scalable and durable applications. It uses a RESTful API, making it easy to use and to embed in your own applications.
One of the great characteristics of (web) applications using Swift as a backend is the separation between application logic and data path. This helps a lot in writing lightweight apps in a scalable way.



During this talk we'll give you an overview of the included features, how to use them, and examples of how implementations might look. An example web application walkthrough and demo will showcase how all of these features help you build your own application.


Back

Ceph & ELK

Home

Speaker Abhishek Lekshmanan
Denis Kondratenko
Room H.2213
Track Software Defined Storage
Time 14:45 - 15:10
Event link View original entry

Ceph is a distributed storage platform that is a contender to become the future of software-defined storage, providing unified access to block, object and file interfaces.
However, like any complex system, it has various subsystems that may fail, and analyzing logs is generally the first line of action.



This is where the ELK stack (Elasticsearch, Logstash and Kibana, often referred to as the "ELK stack" or "elastic stack") comes in: to search, analyze and process logs and metadata.
Since the Kraken release of Ceph, Ceph's Object Storage (Radosgw/RGW) has integrated metadata search using Elasticsearch, making it very easy for operators and users alike to get much-needed insight into how the object storage is being used.



We will cover topics such as:
* The current status of ELK and Ceph
* Ceph logging and cluster log parsing with Logstash
* The future of ELK for analyzing and alerting on Ceph
* RGW metadata export to Elasticsearch: RGW metadata search
* A few interesting Elasticsearch queries on object storage

Back

CephFS Gateways

Home

Speaker David Disseldorp
Supriti Singh
Room H.2213
Track Software Defined Storage
Time 15:15 - 15:55
Event link View original entry

This talk will cover how NFS-Ganesha and Samba can be used to export CephFS to NFS and SMB clients such as Windows, macOS, etc. Challenges, including active-active clustering, failover, cross-protocol caching and access control lists, will be discussed, with a look at current and future solutions.

Back

How to backup Ceph at scale

Home

Speaker Bartłomiej Święcki
RoomH.2213
TrackSoftware Defined Storage
Time16:00 - 16:40
Event linkView original entry

In this talk I would like to share my experience with large-scale backups of RBD images in Ceph clusters at OVH.
I will talk about the challenges we have faced when developing such a solution and provide some practical guidelines for everyone willing to implement something similar.



This talk is intended for everyone who would like to be extra safe with their software defined storage.


Back

Reasons to migrate from NFSv3 to NFSv4/4.1

Home

Speaker Manisha Saini
RoomH.2213
TrackSoftware Defined Storage
Time16:45 - 17:00
Event linkView original entry

This talk will cover the differences between NFSv3 and NFSv4: what the problems with NFSv3 are, and how NFSv4 is better suited than its predecessor to a wide range of datacenter and high-performance compute workloads. It will also cover the advantages of the extended capabilities of NFSv4, i.e. NFSv4.1 and NFSv4.2, and their support in Gluster NFS-Ganesha.

NFS is a well-known and venerable network protocol: a network abstraction over a file system that allows a remote client to access it over a network in a similar way to a local file system.
NFSv4 has been a standard file-sharing protocol since 2003, when it superseded NFSv3, yet it has still not been widely adopted. While there have been many advances and improvements to NFS, some IT organizations have chosen to continue with NFSv3.




Back

Why you should take a look at Rust?

Home

Speaker Antonin Carette
RoomH.2214
TrackRust
Time09:00 - 09:25
Event linkView original entry

Today, many new programming languages have been created to radically simplify the art of programming (Go), to boost the performance of human-readable applications (Nim), to make building and running programs safer through zero-cost abstractions (Rust), or simply to propose an exciting new way to write programs (Scala, Swift).
Each of these new programming languages has its own features, goals, audience, and community.



Today, it can be risky to adopt a new programming language, especially for a company or for a full-time personal project, due to a lack of stability, libraries, or IDE support for that language, a community that is too small or too strict, or many other reasons.



The goal of this presentation is to introduce the concepts of Rust, its pros and cons, and how Rust differs from other programming languages.
The target audience is mainly developers who are reluctant to dive into Rust, and project managers and CTOs who are looking for stable and exciting new technologies.



This talk is intended to emphasize the rationale behind Rust, and will explain clearly and precisely why Rust aims to be the programming language of the next ten years.

Back

Idiomatic Rust

Home

Speaker Matthias Endler
RoomH.2214
TrackRust
Time09:30 - 09:55
Event linkView original entry

Rust is a big language and it gets bigger every day. Many beginners ask: "What is idiomatic Rust?".
This talk will highlight simple tips to make your Rust code more elegant and concise, and introduce you to my peer-reviewed collection of articles/talks/repos for writing idiomatic Rust code.

Coming from dynamic languages like Python, JavaScript or Ruby, many Rust beginners are missing some guidelines on how to write elegant and concise Rust code. For this purpose, I started a project called "Idiomatic Rust", which is a peer-reviewed collection of articles/talks/repos which teach the essence of good Rust.



In this talk I will introduce the project and show you some quick tips on how to make your Rust code more idiomatic. I will cover error handling (e.g. Option to Result conversions, the failure crate), efficiently working with (built-in) traits, and some more.
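To give a flavor of such tips, here is a small illustrative sketch (my own example, not taken from the talk) of an Option-to-Result conversion using `ok_or_else` and the `?` operator; `lookup` and `port_from_config` are hypothetical names:

```rust
// Hypothetical config lookup: a missing key yields None.
fn lookup(key: &str) -> Option<&'static str> {
    match key {
        "port" => Some("8080"),
        _ => None,
    }
}

// Idiomatic error handling: turn the Option into a Result with
// `ok_or_else`, then chain fallible steps with the `?` operator.
fn port_from_config() -> Result<u16, String> {
    let raw = lookup("port").ok_or_else(|| "missing key: port".to_string())?;
    raw.parse::<u16>().map_err(|e| e.to_string())
}

fn main() {
    assert_eq!(port_from_config(), Ok(8080));
}
```

Compared to nested `match` blocks, the `?`-based version keeps the happy path linear.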

Back

Rust memory management

Home

Speaker Zeeshan Ali
RoomH.2214
TrackRust
Time10:00 - 10:25
Event linkView original entry

A quick introduction to the unique memory management concepts of Rust.

Rust is a systems programming language that focuses on safety and performance at the same time. Most people new to Rust often struggle with memory management. The goal of this talk is to give a very quick overview of Rust's memory management.
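As a taste of the concepts covered, a minimal illustration (not from the talk) of ownership and borrowing:

```rust
fn strlen(s: &str) -> usize {
    s.len() // reads through a shared borrow; does not take ownership
}

fn push_twice(v: &mut Vec<i32>) {
    v.push(4); // a mutable borrow is exclusive for its duration
    v.push(4);
}

fn main() {
    // Ownership: each value has exactly one owner; assignment moves it.
    let s = String::from("hello");
    let t = s; // `s` is moved into `t`; using `s` here would not compile

    // Shared borrow: `t` remains usable afterwards.
    assert_eq!(strlen(&t), 5);

    let mut v = vec![1, 2, 3];
    push_twice(&mut v);
    assert_eq!(v, [1, 2, 3, 4, 4]);
    // When `t` and `v` go out of scope, their memory is freed
    // deterministically -- no garbage collector involved.
}
```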

Back

Introducing gtk-rs

Home

Speaker Guillaume Gomez
RoomH.2214
TrackRust
Time10:35 - 10:55
Event linkView original entry

The goal of this talk is to provide an introduction to the gtk bindings in Rust through the gtk-rs organization. It'll be mainly about how we made it and how we keep making it better.




Back

GStreamer & Rust

Home

Speaker Sebastian Dröge
RoomH.2214
TrackRust
Time11:00 - 11:25
Event linkView original entry

GStreamer is a highly versatile, cross-platform, plugin-based multimedia
framework that caters to the whole range of multimedia needs. It can be used
basically everywhere, from embedded devices like phones, TVs or drones to
desktop applications or on huge server farms.



This talk will focus on how and why Rust looks like the perfect programming
language for evolving GStreamer and provide a safer but still performant and
even more productive development environment than C.
Both GStreamer application development in Rust and GStreamer plugin
development will be covered. What is already possible today, for which
applications is Rust already a perfect fit, and which parts are still
missing? What does it feel like to write an application in Rust compared to
doing it in C? And how and why would one write GStreamer plugins in Rust to
extend the framework and all applications with support for new codecs, filters
or anything else?



Afterwards there will be a short outlook into the future of Rust in the
GStreamer project itself and for GStreamer application and plugin development.

Back

Introducing rust-av

Home

Speaker Luca Barbato
RoomH.2214
TrackRust
Time11:30 - 11:55
Event linkView original entry

Multimedia development is mainly done in C plus assembly, since speed is important and that combination of languages traditionally gives the best control over the hardware.



Rust is considered a mature systems language that provides strong guarantees about memory access (and more) without sacrificing runtime speed.



Multimedia libraries are plagued by classes of bugs that Rust actively prevents at compile time; this talk is about leveraging Rust to build a multimedia framework that is nice to use and at the same time more trustworthy.
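One such bug class, illustrated with a toy sketch of mine (not rust-av code): an out-of-bounds read is undefined behaviour in C, while Rust bounds-checks indexing and offers `get` for fallible access:

```rust
// Reading a sample from a packet buffer. In C, `buf[idx]` past the end is
// undefined behaviour; in Rust, `get` returns None instead of invalid memory,
// and plain indexing would panic rather than silently corrupt state.
fn sample_at(buf: &[u8], idx: usize) -> Option<u8> {
    buf.get(idx).copied()
}

fn main() {
    let packet = [0x47u8, 0x1f, 0xff];
    assert_eq!(sample_at(&packet, 1), Some(0x1f));
    assert_eq!(sample_at(&packet, 10), None); // past the end: no UB, just None
}
```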

The target audience is people with some Rust knowledge and some experience with multimedia libraries and concepts.

Back

Portable graphics abstraction in Rust

Home

Speaker Dzmitry Malyshau
Markus Siglreithmaier
RoomH.2214
TrackRust
Time12:00 - 12:25
Event linkView original entry

Graphics abstraction is an important part of the maturing Rust ecosystem. gfx-rs has been the basis of many graphics applications since 2013, but this year it is undergoing a total rewrite with a new vision, a new set of goals, and talented contributors. In this talk, I want to explain what this means to existing users, Mozilla, and the world.



Intended audience: people interested in the Rust ecosystem's foundational libraries, graphics and game development, and Vulkan.

Back

Rusty robots

Home

Speaker Jorge Aparicio Rivera
RoomH.2214
TrackRust
Time12:30 - 12:55
Event linkView original entry

Are we embedded yet? I'd say yes! In this talk I'll show you how I programmed a self-balancing robot from scratch. I'll cover IO abstractions, motion sensors, motor drivers, filters, control stuff, bare-metal multitasking, logging, etc. And I'll explain how some of Rust's features made development easier and the program more correct.

The talk will cover the following topics (as time allows):






I should note that I won't go in too much detail about the control engineering topics; just enough to motivate the design of the program.



The main takeaways of the talk will be:




Back

TiKV - building a distributed key-value store with Rust

Home

Speaker Siddon Tang
RoomH.2214
TrackRust
Time13:00 - 13:25
Event linkView original entry

It’s not an easy thing to build a modern key-value database that supports distributed transactions, horizontal scalability, etc. But this is exactly what we are doing, and we have built such a database from scratch using Rust. The database is named TiKV. In this talk, I will share how we use Rust to build the storage, to support replication across geographically distributed data networks, to implement an RPC framework, to inject failures for tests, and to monitor the key metrics of the whole cluster.

To build a distributed key-value store from scratch, we need to consider many things. In this talk, I will share the following experiences from building TiKV.




  1. Why another database? The key features of a modern distributed Key-Value store: horizontal scalability, auto failover, transactional API, etc.

  2. How we build the TiKV core system, including the backend storage engine, the gRPC framework, the consensus replication mechanism, etc.

  3. How we use the failure injection test to guarantee data safety.

  4. How we monitor the cluster and diagnose the problems.

  5. The future plan.
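To make the API shape concrete, here is a toy single-node sketch (all names are hypothetical; TiKV's real engine uses RocksDB for storage and Raft for replication, not an in-memory map):

```rust
use std::collections::BTreeMap;

// A toy key-value store with the get/put/scan shape discussed in the talk.
#[derive(Default)]
struct ToyKv {
    map: BTreeMap<Vec<u8>, Vec<u8>>,
}

impl ToyKv {
    fn put(&mut self, key: &[u8], value: &[u8]) {
        self.map.insert(key.to_vec(), value.to_vec());
    }

    fn get(&self, key: &[u8]) -> Option<&[u8]> {
        self.map.get(key).map(|v| v.as_slice())
    }

    // Range scan over an ordered key space: this ordering is what lets a
    // distributed store split data into contiguous regions and scale out.
    fn scan(&self, from: &[u8], to: &[u8]) -> Vec<(&[u8], &[u8])> {
        self.map
            .range(from.to_vec()..to.to_vec())
            .map(|(k, v)| (k.as_slice(), v.as_slice()))
            .collect()
    }
}

fn main() {
    let mut kv = ToyKv::default();
    kv.put(b"a", b"1");
    kv.put(b"b", b"2");
    kv.put(b"z", b"26");
    assert_eq!(kv.get(b"b"), Some(&b"2"[..]));
    assert_eq!(kv.scan(b"a", b"c").len(), 2);
}
```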


Back

Qt GUIs with Rust

Home

Speaker Jos van den Oever
RoomH.2214
TrackRust
Time13:30 - 13:55
Event linkView original entry

Build a graphical application with Qt and Rust. Qt is a mature GUI library. Rust is a new, exciting and strict programming language. You can build most of your application logic in Rust and write the GUI in QML or Qt Widgets.



This talk will walk through how to do this with the Rust Qt Binding Generator.

Back

Writing Node.js Modules in Rust

Home

Speaker Farid Nouri Neshat
RoomH.2214
TrackRust
Time14:00 - 14:25
Event linkView original entry

There are always use cases where you need a systems language inside your Node.js application. Neon is a set of Rust bindings for writing safe and fast native Node.js modules. This talk is mainly about Neon: I'll go through the current state of the project, a few examples, problems, and also the future of the project.

Back

Demystifying Rust parsing

Home

Speaker Nikita Baksalyar
RoomH.2214
TrackRust
Time14:30 - 14:55
Event linkView original entry

Usually, the topic of parsing Rust source code is associated with the Rust compiler itself, which for many is uncharted territory. However, parsing by itself can (and should) be used outside the context of the Rust compiler: given the wealth of information that we can extract from the code, we can do a lot of things with it.



In this talk, we'll discuss several interesting applications for the Rust parser and the abstract syntax trees it produces, with practical examples of Mozilla bindgen (automatic generation of Rust library bindings based on C source code) and a Java binding generator written by the author for a large-scale open source library.

Some prior basic knowledge of compilers and parsing is expected.



The intended audience is Rust developers who want to learn more about the internal implementation of the Rust compiler and to practically apply this knowledge in their projects.

Back

rustfix

Home

Speaker Pascal Hertleif
RoomH.2214
TrackRust
Time15:00 - 15:25
Event linkView original entry

Rust programmers often seem happy to get compiler errors. Understandably so: the compiler is known not only to catch what would in other languages become a runtime bug, but also to be quite helpful. On top of that, the clippy project adds more than 200 additional lints to catch even more errors and to help guide users towards writing more idiomatic code. This talk is about the dream of automatically fixing many of these errors (based on compiler-provided suggestions) with rustfix.
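A tiny example (mine, not from the talk) of the kind of mechanical rewrite this enables; clippy flags the verbose form, and the attached suggestion collapses it to the plain expression:

```rust
// Before: clippy's needless_bool-style lints flag this pattern,
// and the compiler-provided suggestion is machine-applicable.
fn is_even_verbose(n: i32) -> bool {
    if n % 2 == 0 {
        return true;
    }
    false
}

// After: the suggested, idiomatic replacement.
fn is_even(n: i32) -> bool {
    n % 2 == 0
}

fn main() {
    for n in 0..10 {
        assert_eq!(is_even_verbose(n), is_even(n));
    }
}
```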

Back

Reaching const evaluation singularity

Home

Speaker Oliver Schneider
RoomH.2214
TrackRust
Time15:30 - 15:55
Event linkView original entry

The Rust interpreter miri has been merged into rustc to be its new const evaluator. This merge not only fixed various bugs in the old const evaluator, it opened up the avenue for many new features. Ever wanted to do a for loop in a constant? Want to parse a TOML file into a static Config struct and report parsing errors as compile-time errors? Well, now you can do all that (pending RFCs for the details). In this talk I will present miri's design, its usage in the compile-time evaluator, as well as future features that are enabled by it.
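A small illustration (not from the talk) of what evaluating real code at compile time looks like; loops in `const fn` have since been stabilized, but were still pending RFCs when this talk was given:

```rust
// A const fn is evaluated by the compile-time interpreter when used in a
// const context; arithmetic overflow here would surface as a compile-time
// error rather than a runtime panic.
const fn fib(n: u64) -> u64 {
    let mut a = 0u64;
    let mut b = 1u64;
    let mut i = 0;
    while i < n {
        let next = a + b;
        a = b;
        b = next;
        i += 1;
    }
    a
}

// Computed entirely at compile time.
const FIB_10: u64 = fib(10);

fn main() {
    assert_eq!(FIB_10, 55);
}
```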

Back

Rust - embedding WebAssembly for scripting

Home

Speaker Frank Rehberger
RoomH.2214
TrackRust
Time16:00 - 16:25
Event linkView original entry

Rust is associated with performance, memory safety and control of memory usage. Embedding dynamic runtimes such as those for Lua or JavaScript for scripting within a Rust app would introduce a huge overhead. A WebAssembly engine seems to be a good choice as a compact and portable runtime environment. A JIT compiler may be used in the future to transform WASM files into native code.



The talk will present WebAssembly technology and the benefits and pitfalls of integrating it into a Rust app. Small routines are implemented in C++/Rust and compiled to Wasm. The Rust app loads the Wasm code as a plugin at runtime to execute dynamic tasks.

Back

Testing in Rust

Home

Speaker Donald Whyte
RoomH.2214
TrackRust
Time16:30 - 16:55
Event linkView original entry

ABSTRACT



Rust is designed for building low-level systems processes that are reliable and safe. Nevertheless, it is still important for developers to ensure their code is doing the right thing. To achieve this, Rust has a rich set of built-in testing tools for writing unit tests.



In this talk we cover general unit testing techniques for Rust. We will also demonstrate how to mock out complex dependencies using the double crate. Examples will range from simple cases to complex cases that you'll often see when testing real world systems.



The talk is suitable for both novice and experienced Rust developers, as well as non-Rust developers who are interested in learning more about the language.

DETAILED OVERVIEW



Rust is designed for building low-level systems processes that are reliable and safe. Nevertheless, it is still important for developers to ensure their code is doing the right thing. To achieve this, Rust has a rich set of built-in testing tools for writing unit tests.



When writing unit tests, we often need to mock dependencies that are complex to set up or access external resources (e.g. databases and APIs). Rust's heavy emphasis on generic metaprogramming and borrow semantics makes mocking dependencies non-trivial. Thankfully, the double crate (https://github.com/DonaldWhyte/double) abstracts the complexities of mocking in Rust away from you.



In this talk we cover general unit testing techniques for Rust. We will also demonstrate how to mock out dependencies using double. Examples will range from simple cases to complex cases that you'll often see when testing real world systems. By the end of the talk, viewers will be able to:






This talk is suitable for both novice and experienced Rust developers, as well as non-Rust developers who are interested in learning more about the language.
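To sketch the underlying idea of mocking via traits (hand-rolled here for illustration; the double crate automates this bookkeeping, and all names below are hypothetical):

```rust
// The dependency is described by a trait...
trait TemperatureSensor {
    fn read_celsius(&self) -> f64;
}

// ...so a test can substitute a canned implementation for real hardware.
struct MockSensor {
    fixed: f64,
}

impl TemperatureSensor for MockSensor {
    fn read_celsius(&self) -> f64 {
        self.fixed
    }
}

// Code under test accepts any implementation of the trait.
fn needs_cooling(sensor: &dyn TemperatureSensor) -> bool {
    sensor.read_celsius() > 80.0
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn cooling_threshold() {
        assert!(needs_cooling(&MockSensor { fixed: 90.0 }));
        assert!(!needs_cooling(&MockSensor { fixed: 20.0 }));
    }
}

fn main() {
    assert!(needs_cooling(&MockSensor { fixed: 90.0 }));
}
```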

Back

Upipe developers meeting

Home

Speaker Christophe Massiot
RoomH.3227
TrackBOFs (Track A - in H.3227)
Time12:00 - 13:00
Event linkView original entry

Upipe developers annual meeting.

Back

Blockchain developers meet&greet

Home

Speaker Martin Bähr
RoomH.3227
TrackBOFs (Track A - in H.3227)
Time14:00 - 15:00
Event linkView original entry

Meet and greet for blockchain developers. (Follows the talk on Monero in Janson.)

Back

Intro to the SDR Devroom

Home

Speaker Philip Balister
Martin Braun
Sylvain Munaut
RoomAW1.120
TrackSoftware Defined Radio
Time09:00 - 09:15
Event linkView original entry

Once again, we meet in Brussels, at FOSDEM, to talk about the state of free software SDR. The intro will recap the year, and give some updates on what's new and great in the SDR world.

Back

Recapping DARPA's First Big Hackfest

Home

Speaker Tom Rondeau
RoomAW1.120
TrackSoftware Defined Radio
Time09:15 - 09:45
Event linkView original entry

An overview of the DARPA Bay Area SDR Hackfest (darpahackfest.com).

In November of 2017, the United States Defense Advanced Research Projects Agency (DARPA) held its first large-scale Hackfest. The DARPA Bay Area SDR Hackfest focused on using software defined radio (SDR) to control unmanned aerial vehicles (UAV), otherwise known as drones. The week-long Hackfest involved a set of Missions tackled by eight teams selected earlier in the year, a series of speakers giving two talks per day, and an open Hacker Space to give anyone who wanted it room to hack, explore, and work on projects.



DARPA created this event to explore a number of questions. First, who else is working in the field of SDR and UAVs that might be able to contribute to the DARPA mission? What new ideas and innovations might prove significant to the future of SDR and UAV technology? And how does free and open source software (FOSS) enable talented engineers and scientists to answer hard technical challenges quickly? The event produced interesting results, and in this talk, we will examine what came out of the Hackfest and think about the future purposes of these kinds of events and ways of engaging the developer communities.

Back

(Yet another) passive RADAR using DVB-T receiver and SDR.

Home

Speaker Jean-Michel Friedt
RoomAW1.120
TrackSoftware Defined Radio
Time09:45 - 10:15
Event linkView original entry

We demonstrate the use of affordable DVB-T receivers used as general purpose software defined radio interfaces for collecting signals from a non-cooperative reference emitter on the one hand, and signals reflected from non-cooperative targets on the other hand, to map the range and velocity in a passive radar application. Issues include frequency and time synchronization of the DVB-T receivers, mitigated by appropriate digital signal processing relying heavily on cross-correlations.

Passive radar uses existing non-cooperative emitters as signal sources for mapping non-cooperative target range and possibly velocity. The attractive features of this strategy are the lack of a dedicated broadband source for the radar application, low cost from the use of existing emitters, and stealth, since the operator is undetectable. This measurement technique has become accessible to the amateur with the availability of low-cost receivers ideally suited for software defined radio processing. In the framework of passive radar applications, two receivers must be synchronized to record the reference channel and the signal reflected by the targets simultaneously: cross-correlation will then finely identify the reference signal delay in the measurement signal and allow for target identification. In the case of moving targets, a brute-force approach similar to Doppler compensation in GPS acquisition is applied for the cross-correlation to coherently accumulate energy: the range-Doppler maps hint at the distance to the target and its velocity. Most interestingly, in the latter context, clutter (signals reflected from static targets) is separated from the moving target, which becomes well visible even in a complex environment.
In this presentation, we discuss the details of real-time acquisition and signal post-processing for passive radar applications, while addressing some of the challenges of diverting DVB-T receivers from their original application. While passive radar has been demonstrated with FM broadcast emitters, analog television emitters, or WiFi, we shall here consider the broadband signal provided by digital terrestrial television broadcast.
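The central cross-correlation step can be sketched as follows (an illustrative toy with real-valued samples and my own function names; practical implementations work on complex IQ data and use FFT-based correlation):

```rust
// Cross-correlate the reference channel against the surveillance channel
// and return the lag with maximum correlation: that lag is the extra
// path delay of the reflected signal, in samples.
fn best_lag(reference: &[f64], surveillance: &[f64], max_lag: usize) -> usize {
    let mut best = (0usize, f64::MIN);
    for lag in 0..=max_lag {
        let corr: f64 = reference
            .iter()
            .zip(surveillance[lag..].iter())
            .map(|(r, s)| r * s)
            .sum();
        if corr > best.1 {
            best = (lag, corr);
        }
    }
    best.0
}

fn main() {
    // Surveillance channel = reference delayed by 3 samples.
    let reference = [1.0, -2.0, 3.0, 1.0, -1.0, 0.5];
    let mut surveillance = [0.0f64; 12];
    for (i, x) in reference.iter().enumerate() {
        surveillance[i + 3] = *x;
    }
    assert_eq!(best_lag(&reference, &surveillance, 5), 3);
}
```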

Back

In the SpOOTlight: gr-radar

Home

Speaker Martin Braun
RoomAW1.120
TrackSoftware Defined Radio
Time10:15 - 10:45
Event linkView original entry

gr-radar is an out-of-tree module (OOT) for GNU Radio that came about during a Google Summer of Code. Let's take a look at this OOT together, see what it can do, and see what it could become!

In 2014, during the Google Summer of Code (GSoC), Stefan Wunsch took existing radar code and freed it by publishing it as an OOT. The module was extended by several others, mostly students from the Communications Engineering Lab at KIT, Karlsruhe, Germany. This presentation will take the toolbox and check out its current capabilities, giving a short introduction to radar, the toolbox itself, and what it could become.

Back

Efficient implementation of a spectrum scanner on a software-defined radio platform

Home

Speaker Francois Quitin
RoomAW1.120
TrackSoftware Defined Radio
Time10:45 - 11:15
Event linkView original entry

One of the important tasks of national regulators is to detect abusive usage of the radio-frequency (RF) spectrum. This talk presents the implementation of an opportunistic spectrum scanner on a software-defined radio platform, whose aim is to continuously scan the RF spectrum and to detect whether any signals are present. The implementation is done on a USRP-N210 software-defined radio, a popular and cheap model. One major bottleneck of USRP-based implementations is the limited CPU computation power, which does not allow processing RF signals at high sample rates continuously. In the proposed implementation, most computation is done on the USRP FPGA, so that the host CPU is relieved and is only used to store data and coordinate the scanning. We present the details of the FPGA and software architecture, as well as some experimental results that show the efficiency of the proposed spectrum scanner.

Back

An optimized GFDM software implementation for low-latency

Home

Speaker Johannes Demel
RoomAW1.120
TrackSoftware Defined Radio
Time11:15 - 11:45
Event linkView original entry

We present an open source GFDM implementation in GNU Radio. It is optimized for high throughput and low-latency. Future mobile communication standards in the realm of Industry 4.0 will require low latency waveforms in conjunction with high spectral efficiency. GFDM is a promising candidate to meet these requirements. Having an open source implementation helps researchers to experiment with this waveform early on in the process.

Back

DLR-CAFE: CUDA Filterbank Updates

Home

Speaker Jan Krämer
RoomAW1.120
TrackSoftware Defined Radio
Time11:45 - 12:10
Event linkView original entry

This talk is aimed at giving an update on my efforts to open source my CUDA Polyphase Filterbank library DLR-CAFE, the Satellite Networks Department Coprocessor Accelerated Filterbank Extension Library, developed at the German Aerospace Centre (DLR). It will contain a short introduction to the newest addition to the library, a Polyphase Filterbank Arbitrary Resampler. Following this, I will give a brief update on my progress to release the library into the Open Source wilderness.

Back

Physics, Math, and SDR

Home

Speaker Derek Kozel
RoomAW1.120
TrackSoftware Defined Radio
Time12:15 - 12:45
Event linkView original entry

This talk is about IQ imbalance, DC offsets, Noise Figure, and Intermodulation Distortion and how to simulate, measure, and, where possible, correct each in GNU Radio.

Back

Stupid Pluto Tricks

Home

Speaker Robin Getz
RoomAW1.120
TrackSoftware Defined Radio
Time12:45 - 13:05
Event linkView original entry

The ADALM-PLUTO SDR can be used both as a streaming device (to GNU Radio) and as a standalone SDR, since it includes a single-core ARM A9 and runs Linux inside. Although it requires closed-source tools (Xilinx), they are zero cost, and with a little time, custom firmware images can be created, allowing the device to be a standalone radio platform.



This talk will show a few different things that can be done, including an ADS-B receiver, an instrument to measure noise pollution, a stealth RF capture device (to investigate RF), and a cell phone jammer detector.

The ADALM-PLUTO SDR can be used as an RF streaming device over USB 2 to Linux's Industrial I/O (IIO) clients like GNU Radio, or to custom applications in C, C++, or Python. These applications can also run directly on the device, and the userspace library (libiio) abstracts the transport (USB, network, serial, or local) away from the application. With a single-line code change and a cross compiler, applications can move from running on a host to running locally on the device. It's even easier if you are using devices like the Raspberry Pi, since you don't need a cross compiler.



This talk will go through a few quick demos, showing (1) how to build custom firmware images; (2) how to track airplanes overhead; (3) How and why RF noise pollution is bad, and how it can be tracked/monitored; and (4) investigate portable cell phone jammers, that are being sold on the internet (for research purposes only). Five minutes will be spent on each topic, and code on github will be pointed to.

Back

The GNU Radio runtime

Home

Speaker Andrej Rode
RoomAW1.120
TrackSoftware Defined Radio
Time13:15 - 13:45
Event linkView original entry

In the last decade the GNU Radio project and its ecosystem have grown a lot. Unfortunately, not all parts of the project have received equal attention. A lot of new and missing functionality can be found in so-called out-of-tree modules (OOTs).
The struggle to add changes to OOTs that are incompatible with the GNU Radio core is real. The GNU Radio runtime needs some fresh ideas so that new development with GNU Radio remains possible and rewarding. This presentation will try to give a quick overview of the current structure and inner workings of the GNU Radio runtime and take a dive into how a new runtime could be implemented to make GNU Radio future-proof.

Back

C++ Code Generation with GRC

Home

Speaker Håkon Vågsether
RoomAW1.120
TrackSoftware Defined Radio
Time13:45 - 14:10
Event linkView original entry

This talk introduces C++ output functionality for the GNU Radio Companion, which was my ESA Summer of Code in Space project this summer.

Back

LoRa Reverse Engineering and AES EM Side-Channel Attacks using SDR

Home

Speaker Pieter Robyns
RoomAW1.120
TrackSoftware Defined Radio
Time14:15 - 15:00
Event linkView original entry

LoRa is a novel wireless modulation scheme designed for low data rate, low-power and long-range communications. In this presentation, we will discuss the various processing stages taking place on the LoRa PHY layer, including coding, whitening, interleaving, modulation and preamble detection. We will subsequently learn how hardware LoRa radios can be reverse engineered in order to build our own LoRa decoder with GNU Radio and software defined radios. The concept of PHY-layer fingerprinting will also be briefly explained, showing how we can identify individual LoRa radios using only their raw radio signals and a neural network. Finally, we will see how software defined radios can be leveraged to perform electromagnetic side-channel attacks on the AES encryption scheme, which is used by LoRa and various other wireless protocols. Such attacks enable the recovery of an unknown secret key given a set of known plaintexts and proximal measurements of the electromagnetic spectrum taken during the encryption process.

Back

Intro to Open Source Radio Telescopes

Home

Speaker Martin Braun
Sue Ann Heatherly
RoomAW1.120
TrackSoftware Defined Radio
Time15:00 - 15:30
Event linkView original entry

The Open Source Radio Telescope initiative is a group of educators and radio astronomy enthusiasts, trying to bring the world of radio astronomy closer to the people. We provide projects and education to anyone who wants to join.
Find more about the project at: http://opensourceradiotelescopes.org/

Back

Free your Weather Station!

Home

Speaker Ray Kinsella
RoomAW1.120
TrackSoftware Defined Radio
Time15:30 - 15:50
Event linkView original entry

Short presentation on hacking the Oregon Scientific WeatherStation with a proprietary interface.
To free the data within.

Short presentation on hacking an Oregon Scientific WeatherStation with a proprietary interface.
The Weather Station itself was provided with a USB port, with some clunky Microsoft Windows Software to dump the data to CSV.
Pulling the data directly out of the Weather Station through the port provided was not going to be an option.



Instead, Open Source software came to the rescue, in the form of an Arduino sketch implementing Manchester encoding with a €2 RXB6 module.
With this, I was able to intercept the 433 MHz sensor data and upload it to the internet with WeeWX.
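The decoding idea can be sketched as follows (an illustrative toy, not the actual Arduino sketch; which transition encodes which bit depends on the convention in use):

```rust
// Manchester coding sends each data bit as a pair of half-bits with a
// mandatory mid-bit transition: here (1,0) decodes to 1 and (0,1) to 0.
// A pair without a transition means the signal is not valid Manchester.
fn manchester_decode(halfbits: &[u8]) -> Option<Vec<u8>> {
    if halfbits.len() % 2 != 0 {
        return None; // need an even number of half-bits
    }
    halfbits
        .chunks(2)
        .map(|pair| match pair {
            [1, 0] => Some(1), // high-to-low transition
            [0, 1] => Some(0), // low-to-high transition
            _ => None,         // no mid-bit transition: invalid
        })
        .collect()
}

fn main() {
    // Half-bit pairs (1,0) (0,1) (1,0) (1,0) decode to bits 1 0 1 1.
    assert_eq!(
        manchester_decode(&[1, 0, 0, 1, 1, 0, 1, 0]),
        Some(vec![1, 0, 1, 1])
    );
    assert_eq!(manchester_decode(&[1, 1]), None);
}
```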



This is a story of a €12 investment and Open Source, opening a World of Weather Data.

Back

Claim Space, the Libre Way, using SDRs

Home

Speaker Manolis Surligas
RoomAW1.120
TrackSoftware Defined Radio
Time16:00 - 16:30
Event linkView original entry

So many things have happened in the SatNOGS project since FOSDEM 2017. The successful deployment and data reception of the UPSat satellite, high-power amateur rocket launches with realtime telemetry decoding, and automatic decoding of LEO satellite signals from our crowd-sourced network of ground stations are some of the milestones achieved. And all this with the help of Software Defined Radios!



In this talk, we briefly present the UPSat and High Power Rocketry projects and then version 3 of the SatNOGS project. In the new version we have developed several decoders that operate in realtime during a LEO satellite pass. Combined with the SatNOGS rotator, which tries to point as accurately as possible along the satellite trajectory, the decoder searches for transmitted frames. The successfully decoded frames are then uploaded automatically to the SatNOGS database for visualization and further analysis.



Currently, the SatNOGS project incorporates decoders for many transmission schemes popular in LEO satellite missions, such as AFSK1200, APRS1200, (G)FSK9600, APRS9600, APT, DUV, CW and LRPT, using the GNU Radio toolkit. In addition, the decoders for APT and LRPT produce on the fly the weather images transmitted by the corresponding meteorological satellites. For the most interesting of the available decoders, we present the design and implementation details, along with a live decoding demo from a captured IQ observation.



All of the available decoders run in realtime on the RPi3 platform. To achieve this, several design decisions were made (RF performance vs. realtime operation), and we discuss some of them.

Back

BYOR: Bring-your-own-radio hacking session

Home

Speaker Martin Braun
RoomAW1.120
TrackSoftware Defined Radio
Time16:30 - 17:00
Event linkView original entry

This will be a FOSDEM first: Instead of a panel, this year's interactive time slot will be a bring-your-own-radio hack session. Bring your stuff, show us what it can do! We will have the mic open for demonstrating, but you can also huddle (as far as the room permits) around your devices and show them off.

Back

Zonemaster

Home

Speaker Sandoche Balakrichenan
RoomAW1.121
TrackDNS
Time09:00 - 09:20
Event linkView original entry

DNS is the backbone of the Internet. When one is not able to access content (such as a website), the first thing to do is to verify DNS connectivity. This talk will provide an overview of an open source DNS checking tool called "Zonemaster" (https://github.com/dotse/zonemaster), developed and maintained by Afnic (www.afnic.fr) and IIS (www.iis.se).
The talk will further delve into the architecture, the different components (Engine, API, GUI, CLI) and their usage by different end-users, such as general users, companies operating DNS, companies with a DNS portfolio of domain names, and DNS geeks.

Back

Repairing DNS at TLD scale

Home

Speaker Petr Černohouz
RoomAW1.121
TrackDNS
Time09:25 - 09:45
Event linkView original entry

For DNS stability it is important to delegate domains to correctly configured servers, but conditions can change over time. We regularly check all 1.3 million domains and try to point out common mistakes. The presentation also shows how we deal with the term "correct configuration" for authoritative DNS servers.

Back

BIND 9 Past, Present, and Future

Home

Speaker Ondřej Surý
RoomAW1.121
TrackDNS
Time09:50 - 10:10
Event linkView original entry

BIND 9 is now 17 years old, the latest stable version 9.12 was released in December, and the BIND 9 team has adopted changes to adapt to the ever-changing Internet landscape.

In this talk, I will present BIND 9's colorful past, the current state of development, and the changes the BIND 9 team has adopted to cope with modern DNS. I will talk about the changes in the development model and release cycles, and also about the planned features that will help the BIND 9 team be more nimble in adding new features, fixing old issues and spending less time on maintenance, while ensuring stability for existing users. I believe that existing BIND 9 users will be thrilled.

Back

Blame (and) DNS: debugging tutorial

Home

Speaker Petr Špaček
RoomAW1.121
TrackDNS
Time10:15 - 10:45
Event linkView original entry

How do you find out who broke your DNS resolution, where, and how? Which support line should you call?



These are hard questions because today's DNS is very complex, and even the simplest query for an IP address might involve a dozen different parties. In this tutorial we will walk through typical scenarios and use common tools to find out why things do not work as we expect and whom to contact.

The format of the talk is a presentation with screencasts from live debugging sessions.
Attendees are encouraged to bring their own laptops with working network access, the pre-installed command-line tools dig, delv, drill and ping, and a web browser. This might be handy if you want to play with the tools during the talk.
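The point that even the simplest query involves many separate parties can be sketched in a few lines (an illustrative toy, not code from the talk; the tutorial itself uses dig, delv and drill):

```python
def delegation_chain(name):
    """Return the chain of zones involved in resolving `name`, from the
    root down. Each zone is operated by a different party (root servers,
    TLD registry, the domain's own NS), and any of them can be the one
    that "broke" your resolution."""
    labels = name.rstrip(".").split(".")
    chain = ["."]  # every lookup starts at the root servers
    for i in range(len(labels) - 1, -1, -1):
        chain.append(".".join(labels[i:]) + ".")
    return chain

for zone in delegation_chain("www.example.com"):
    print(zone)
```

Each printed zone is a separate place to check with dig when debugging, before you even count forwarders, resolvers and middleboxes on the path.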

Back

Living on the Edge

Home

Speaker Willem Toorop
RoomAW1.121
TrackDNS
Time10:50 - 11:10
Event linkView original entry

To take system security to the next level, the DNS services used by applications at the end-point need recent standards implemented. Currently, typical use of DNS by applications is limited to forwarding requests to the system's stub resolver, which in turn simply forwards them to the recursive resolver in the network, which does all the heavy lifting: iterating over the authoritative servers, securing the lookups with DNSSEC, etc. This first mile of the DNS ecosystem (from application, via stub, to the recursive resolver) is completely insecure and exposed. This makes the use of DANE (a secure alternative to the flawed CA-based PKIX) impossible, and also leaves end-users unprotected against DNS-based connection hijacking and very exposed to eavesdropping attacks!



To deliver DANE and Privacy to the end-user, we need to address these issues as close to the end-user as possible. The getdns library is a resolver library for applications, providing a versatile stub resolver that takes care of all the difficulties and complexities that arise when these higher security and privacy demands need to be practiced for actual users instead of just networks.

For both DANE and DNS Privacy, stub resolvers need to be able to reliably establish the authenticity of data at the remote end. This alone already involves DNSSEC and/or PKIX validation, and might also involve DNSSEC roadblock avoidance, discovery of and anticipation of IPv6-only networks (DNS64/NAT64), and reliable trust anchor maintenance (i.e. the KSK rollover). Furthermore, to correctly perform DANE, applications need to learn the status of the authentication result from the stub resolver.



In the course of the presentation the following topics will be covered:



  * the current techniques stub resolvers utilise to reliably do DNSSEC,
  * the changing architectural role of the stub-resolver system-component (Stub as daemon, as library, as both, dbus interface, nsswitch module),
  * the stub resolver specific challenges with the root KSK rollover, and
  * the implementation status of different stub software with respect to the bullet points above.

Back

DNSSEC for higher performance

Home

Speaker Petr Špaček
RoomAW1.121
TrackDNS
Time11:15 - 11:30
Event linkView original entry

"Security slows down everything." Or not? This talk will explain how aggressive use of DNSSEC-validated cache (aka RFC 8198) boosts DNS performance, and why signing your own domain can provide higher security and performance at the same time.
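The idea behind RFC 8198 can be sketched in a few lines (a toy illustration, not code from the talk; plain string comparison stands in for DNSSEC's canonical name ordering):

```python
def covered_by_nsec(qname, nsec_owner, nsec_next):
    """RFC 8198 sketch: a cached, validated NSEC record proves that no
    name exists between its owner name and its "next" name. If qname
    sorts between the two, a resolver can synthesise NXDOMAIN from
    cache without asking the authoritative server again.
    (Simplified: real DNSSEC uses canonical ordering, not plain
    string comparison.)"""
    return nsec_owner < qname < nsec_next

# A cached NSEC for example.com saying "nothing exists between
# a.example.com and z.example.com":
cache = [("a.example.com", "z.example.com")]

def lookup(qname):
    for owner, nxt in cache:
        if covered_by_nsec(qname, owner, nxt):
            return "NXDOMAIN (answered from NSEC cache, no upstream query)"
    return "cache miss -> query the authoritative server"

print(lookup("b.example.com"))   # falls inside the NSEC gap
print(lookup("zz.example.com"))  # outside the gap
```

This is why signing your zone helps performance: the signed denial records let resolvers absorb whole ranges of junk queries (including random-subdomain attacks) from cache.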

Back

Melting the Snow

Home

Speaker Olivier van der Toorn
RoomAW1.121
TrackDNS
Time11:35 - 12:00
Event linkView original entry

Snowshoe spam is a type of spam which is hard to detect, because the spammer spreads the sending load over many hosts in order to evade detection.



Our method combines active DNS measurements with Machine Learning to detect snowshoe spam domains with a time advantage over regular methods.

Snowshoe spam is a type of spam that is notoriously hard to detect. Anti-abuse vendors estimate that 15% of spam can be classified as snowshoe spam. Unlike regular spammers, snowshoe spammers distribute the sending of spam over many hosts in order to evade detection by spam reputation systems (blacklists). To be successful, spammers need to appear as legitimate as possible, for example by adopting email best practices such as the Sender Policy Framework (SPF). This requires spammers to register and configure legitimate DNS domains.



Many previous studies have relied on DNS data to detect spam. However, this often happens based on passive DNS data. This limits detection to domains that have actually been used and have been observed on passive DNS sensors.



To overcome this limitation, we take a different approach. We make use of active DNS measurements, covering more than 60% of the global DNS namespace, in combination with machine learning to identify malicious domains crafted for snowshoe spam. Our results show that we are able to detect snowshoe spam domains with a precision of over 93%.
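As a rough illustration of how DNS-derived features could feed such a classifier (the feature names, weights and thresholds below are invented for this example and are not the authors' actual model):

```python
def snowshoe_features(domain_record):
    """Toy feature vector inspired by the snowshoe pattern (hypothetical
    features, not the paper's): snowshoe domains tend to spread their A
    records over many distinct /24 networks, publish SPF to look
    legitimate, and be freshly registered."""
    ips = domain_record["a_records"]
    prefixes = {ip.rsplit(".", 1)[0] for ip in ips}  # distinct /24s
    return [
        len(prefixes) / max(len(ips), 1),            # IP spread ratio
        1.0 if domain_record["has_spf"] else 0.0,    # SPF published
        1.0 if domain_record["age_days"] < 30 else 0.0,  # fresh domain
    ]

def score(features, weights=(2.0, 1.0, 1.5)):
    # Stand-in for a trained classifier: a simple weighted sum.
    return sum(f * w for f, w in zip(features, weights))

suspect = {"a_records": ["1.2.3.4", "5.6.7.8", "9.10.11.12"],
           "has_spf": True, "age_days": 7}
legit = {"a_records": ["1.2.3.4", "1.2.3.5"],
         "has_spf": True, "age_days": 2000}

print(score(snowshoe_features(suspect)) > score(snowshoe_features(legit)))
```

The real system replaces the weighted sum with a trained machine-learning model, and the toy features with measurements drawn from active DNS scans.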



More importantly, we are able to detect a significant fraction of the malicious domains up to 100 days earlier than existing blacklists, which suggests our method can give us a time advantage in the fight against spam. In addition to testing the efficacy of our approach in comparison to existing blacklists, we validated our approach over a 3-month period in an actual mail filter system at a major Dutch network operator. Not only did this demonstrate that our approach works in practice, the operator has actually decided to deploy our method in production, based on the results obtained.

Back

DNS privacy, where are we?

Home

Speaker Stéphane Bortzmeyer
RoomAW1.121
TrackDNS
Time12:05 - 12:35
Event linkView original entry

The DNS privacy project started in November 2013 at the IETF meeting
in Vancouver, following Snowden's revelations. Where are we today? We
have a problem statement (RFC 7626), standard solutions (QNAME
minimisation, DNS over TLS), running code (such as the getdns library)
and actual deployments (such as the Quad9 public resolver). The talk
will examine the current state of the project. It is intended for
people who have a general knowledge of DNS, but you don't need to be
an expert.

Unlike HTTP and Web privacy, the issue of privacy for DNS users was never a hot topic. There are no specific rules or regulations about it, and the typical Data Protection Agency, GDPR or not GDPR, is not too interested in the subject. But DNS traffic can be very revealing and has already been used to identify things such as malware communicating with a C&C. If DNS surveillance can be done for the good, it can certainly also be done for evil purposes.



This is what motivated the Internet Engineering Task Force to start work at the Vancouver meeting in November 2013, with a more official start at the London meeting in March 2014. The project followed the classical steps: describing the problem, the threat model and the actual risks (this is now documented in RFC 7626), then trying to find solutions. While many geeks, when asked about privacy, immediately scream "encryption", privacy actually requires TWO things: encryption to protect against third parties, AND data minimisation, to protect you against the servers you talk to. Hence the development of two solutions: encryption with TLS (RFC 7858) and QNAME minimisation (RFC 7816).
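QNAME minimisation, the data-minimisation half, can be sketched as follows (an illustrative toy, not the talk's code):

```python
def minimised_queries(qname):
    """RFC 7816 sketch: ask each zone in the delegation chain only for
    the next label down, so the root only ever sees the TLD, the TLD
    only sees the registered domain, and so on. Without minimisation,
    every server in the chain would receive the full name."""
    labels = qname.rstrip(".").split(".")
    queries, zone = [], "."
    for i in range(len(labels) - 1, -1, -1):
        name = ".".join(labels[i:]) + "."
        queries.append((zone, name))
        zone = name
    return queries

for zone, name in minimised_queries("www.example.com"):
    print("ask", zone, "for", name)
```

The privacy gain is in what each party does NOT learn: the root and TLD servers never see the full hostname you are looking up.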



There is also running code. The Unbound DNS resolver already knows how to encrypt (upstream and downstream) and can also perform QNAME minimisation, as can the Knot resolver. The excellent getdns library also speaks DNS-over-TLS, allowing things like a monitoring plugin to monitor your DNS-over-TLS servers. Android now has DNS-over-TLS in its code base.



And there are some public deployments with this technology. The OpenWrt router Turris Omnia already ships, by default, with QNAME minimisation enabled. The Quad9 public DNS resolver accepts DNS-over-TLS.



What can we expect in the future? There are some projects to allow encryption between the resolver and the authoritative server (RFC 7858 only covers the stub-to-resolver case), and to add padding to more TLS requests (getdns and Knot already do it), but most of the work will probably be on the code and deployment.

Back

DNS-based discovery for OpenID Connect

Home

Speaker Marcos Sanz Grossón
RoomAW1.121
TrackDNS
Time12:40 - 12:55
Event linkView original entry

OpenID Connect is a widely deployed standard to implement single sign-on on the web. While the existing protocol discovery mechanisms might be well-suited to the current social-media login deployment status (that is, a handful of islands of identity providers, with Facebook and Google covering 90%+ of the market share), a better mechanism would be needed for a truly federated, distributed environment.

This lightning talk tries to present the ideas outlined in
https://tools.ietf.org/html/draft-sanz-openid-dns-discovery-00
together with a working demo, looking for feedback from the DNS developer community.

Back

Welcome to the Retrocomputing DevRoom

Home

Speaker Pau Garcia Quiles (pgquiles)
RoomAW1.121
TrackRetrocomputing
Time13:20 - 13:25
Event linkView original entry

Back

DOSEMU and FreeDOS: past, present and future

Home

Speaker Bart Oldeman
RoomAW1.121
TrackRetrocomputing
Time13:30 - 14:00
Event linkView original entry

DOSEMU (1992) and FreeDOS (1994) are long-running projects which allow you to run DOS applications on Linux and on bare hardware. I will describe their history and current developments.
Intended audience: anyone who is interested in DOS and emulation.

I will describe past and current developments relating to DOSEMU and FreeDOS. This includes:
* The Dosemu2 project, a continuation of DOSEMU coordinated by Stas Sergeev.
* Where does DOSEMU fit in among other solutions such as DOSBOX and QEMU?
* Use of KVM within Dosemu2 to allow running DOS applications at near-native hardware speed on CPUs that no longer support the vm86() syscall.
* FreeDOS 1.2 and an outlook to FreeDOS 2.0.
* The GCC IA-16 port with support for far pointers by Rask Ingemann Lambertsen, Andrew Jenner and TK Chia. This finally allows us to compile FreeDOS components using a Free compiler, instead of Open Watcom which has more restrictive licensing terms.

Back

Developing software on ORIC microcomputers

Home

Speaker François Revol
RoomAW1.121
TrackRetrocomputing
Time14:05 - 14:35
Event linkView original entry

The ORIC SDK by Defence Force is used to write demos and games for the ORIC range of microcomputers. Along with the Oricutron emulator, they make it possible to cross-develop C and asm programs for these machines.
We'll show you how.

We'll show installation and usage of the OSDK.

Back

Retro-uC

Home

Speaker Staf Verhaegen
RoomAW1.121
TrackRetrocomputing
Time14:40 - 15:10
Event linkView original entry

A presentation about a microcontroller with the proper instruction set for the retro enthusiast, and a pilot project testing the grounds for an open silicon movement.

The Retro-uC is a fully open source microcontroller. It uses CPU cores that are battle-tested in FPGA re-implementations of vintage computer systems: it implements the MOS 6502 instruction set of Commodore 64 fame, the Z80 instruction set of the ZX Spectrum, and the Motorola 68000 instruction set of Amiga and Atari ST fame. Implementations running on FPGA boards are available for interested retro people to use for their own development or as a microcontroller. The final goal of the Retro-uC project is to produce an ASIC through a crowdfunding campaign.



After giving an overview of the features and the choices made, such as the memory map, the author would like to get feedback from this talk on the compatibility of low-volume chip production and retrocomputing.



This presentation will only very briefly discuss the implementation of the chip itself; that will be discussed more in depth in a presentation in the CAD and Open Hardware devroom.

Back

NetBSD - A modern operating system for your retro battlestation

Home

Speaker Sevan Janiyan
RoomAW1.121
TrackRetrocomputing
Time15:15 - 15:45
Event linkView original entry

NetBSD's focus on portability means that over the years since its
inception, the operating system has amassed support for a large body of
platforms across many CPU architectures, with continued support to this day.



Through this effort to provide extensive support, many features and
sub-projects have been developed to accommodate the supported hardware:
everything from a Sun2 workstation to a Sega Dreamcast to an Amiga, and many
others. This talk will cover the details of the features which ease supporting
such systems.



No system newer than 10 years old will be covered!




Back

Game development for the ColecoVision and Sega 8-bit systems

Home

Speaker Philipp Klaus Krause
RoomAW1.121
TrackRetrocomputing
Time15:50 - 16:20
Event linkView original entry

The ColecoVision and Sega Master System are popular video game systems of the 1980s. The central part of the free toolchain is the Small Device C Compiler (SDCC), which features some optimizations particularly suited to these irregular architectures, allowing SDCC to generate efficient code for the Z80.
Hardware similarities between the ColecoVision and the Sega 8-bit systems make it possible to target both in game development, which is supported by cross-platform libraries.

The ColecoVision and Sega Master System are popular video game systems of the 1980s.
When they appeared, game development was limited to commercial developers and usually done in assembler only.
Today, free tools allow anyone to develop for them with relative ease, including in C. The central part of the free toolchain is the Small Device C Compiler (SDCC), which features some optimizations particularly suited to these irregular architectures, allowing SDCC to generate efficient code for the Z80 (and some other architectures relevant to retrogaming, such as the CPU used in the Game Boy). SDCC is also used as part of larger retrocomputing and retrogaming tools, such as z88dk and 8bit Workshop.
Besides all being Z80-based, the ColecoVision and the Sega 8-bit systems also use similar graphics and sound chips. This makes it possible to target them simultaneously in game development, which is supported by cross-platform libraries.
Some new games have been released on cartridges. Sometimes cartridges from old games are reused, but there are also new PCB designs and newly made game cartridges.

Back

ZX Spectrum in the New Millennium

Home

Speaker Rui Martins
RoomAW1.121
TrackRetrocomputing
Time16:25 - 16:55
Event linkView original entry

We present the challenges faced in working around the limitations of the existing ZX Spectrum cartridge interface (limited by definition to 16 KB), in order to have up to 512 KB using flash memory.

First, we present the existing limitations of the Sinclair Interface 2 cartridge hardware implementation.
Second, we document the problems and challenges it presents when trying to use it as something more than a 16 KB read-only memory (ROM).
Third, we document the side-band communication implemented in order to transfer information to a smarter cartridge that implements a paged memory interface.
Fourth, we demonstrate a working sample/prototype (either live, using a real Spectrum, or in video format). Check the video links.

Back

IoT DevRoom Opening

Home

Speaker
RoomAW1.125
TrackInternet of Things
Time09:30 - 09:45
Event linkView original entry

IoT DevRoom opening

Overview of the day

Back

Turning On the Lights with Home Assistant and MQTT

Home

Speaker Leon Anavi
RoomAW1.125
TrackInternet of Things
Time09:50 - 10:15
Event linkView original entry

In this presentation you will learn the exact steps for using the MQTT JSON Light component of the open source home automation platform Home Assistant to control lights through the machine-to-machine protocol MQTT. Practical examples for low-cost devices combining open source hardware with free and open source software will be revealed.
The presentation will provide a general overview of Home Assistant, and details about the software integration of new devices through the MQTT protocol and open source MQTT brokers such as Mosquitto. We will do a code review of an open source Linux daemon application for the Raspberry Pi, written in the C programming language and based on the Paho MQTT client library and the pigpio library used for pulse-width modulation (PWM) control of an RGB LED strip. We will compare it to an implementation of the same features for the ESP8266 WiFi microcontroller, written as a sketch for the Arduino environment. Furthermore, the presentation will include details about reading data from various sensors and their setup in Home Assistant.

Home Assistant is a popular open source home automation platform written in Python 3 and perfect to run on a Raspberry Pi. Out of the box it supports popular mass-market Internet of Things devices such as IKEA Trådfri, Philips Hue, Google Assistant, Alexa / Amazon Echo, Nest, KODI and many more.
Furthermore, Home Assistant provides components for easy integration of Internet of Things devices through the machine-to-machine protocol MQTT. This presentation will focus on practical examples of using the MQTT JSON Light component for integrating two types of devices controlling 12V RGB LED strips: a Raspberry Pi with the open source hardware add-on board ANAVI Light pHAT, and another open source hardware device based on the ESP8266, the cheap WiFi microcontroller compatible with the Arduino IDE. The printed circuit boards (PCB) of both hardware devices used in the examples are designed with KiCAD, free and open source software that runs on GNU/Linux distributions.
The focus of the presentation will be on the open source software that implements an MQTT client, connects to an open source MQTT broker such as Mosquitto, and controls the lights of an RGB LED strip through PWM. The exact steps for the integration of new devices in Home Assistant using MQTT will be revealed in detail.
The presentation is appropriate for open source enthusiasts interested in home automation, engineers, students and even beginners. No previous knowledge of Home Assistant or MQTT is required.
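As a minimal sketch of the kind of message involved, the snippet below builds the JSON state payload that Home Assistant's MQTT JSON Light schema expects (the topic name in the comment is made up for the example, and this is an illustration, not the talk's C or Arduino code):

```python
import json

def light_state_payload(on, brightness, rgb):
    """Build the JSON state message for an MQTT JSON light:
    state on/off, brightness 0-255, and an RGB color object."""
    r, g, b = rgb
    return json.dumps({
        "state": "ON" if on else "OFF",
        "brightness": brightness,
        "color": {"r": r, "g": g, "b": b},
    })

payload = light_state_payload(True, 128, (255, 0, 0))
print(payload)
# A real device would publish this over MQTT, e.g. with paho-mqtt:
#   client.publish("home/livingroom/light", payload)  # topic is hypothetical
```

The daemon on the Raspberry Pi and the ESP8266 sketch both speak this same JSON dialect, which is what lets one Home Assistant configuration drive either device.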

Back

Accessing your Mbed device from anywhere using Pagekite

Home

Speaker Bert Outtier
RoomAW1.125
TrackInternet of Things
Time10:25 - 10:50
Event linkView original entry

When looking at home automation solutions available in the market nowadays,
one of the most important and expected features is to be able to control your
home automation installation from anywhere in the world using a smartphone
app. A vendor of a low-cost home automation solution requested us to add such
a feature to their existing IP gateway product, which only allowed for users
to control their home automation system with their smartphone while they are
connected to their local network at home. We were asked to make it possible to
let the smartphone app connect to the IP gateway from anywhere in the world.
This vendor's IP gateway hard- and software was based on the Mbed platform, so
they needed a solution that could fit within Mbed.



First of all, there are a lot of ways to make a device in a local network
accessible from anywhere in the world: SSH tunneling, port forwarding with
dynamic DNS,... Since our client wanted an open-source, secure, low-cost and
easy to set up solution that he could host himself, we opted to go for
Pagekite. However, since Mbed does not support OpenSSL, Linux sockets or
libev, the existing libpagekite C library was not an option to start from. So
we started to implement an Mbed flavour of the library ourselves, and decided
to make it open source.

Back

Home automation - Not as simple as you think

Home

Speaker Steven Goodwin
RoomAW1.125
TrackInternet of Things
Time11:05 - 11:30
Event linkView original entry

Home automation, like most fields in IoT, appears simple. A few lines of code, a couple of data points, and the job appears done. However, integrating computer code into a human life is not so simple. This talk highlights the personal issues involved in this area of IoT software development, and why the code you develop in the lab is often unsuitable for the home. "Simple" functionality like picking a random song, decreasing the volume, and playing a movie takes on hidden complexities that will be uncovered.

Steven Goodwin has lived in the automated home he's been building for the last 15 years. The Minerva project contains much of that code, but that's only the start. On one hand it needs some very specific configurations (e.g. which train stations do you use, so it can guestimate your arrival time, and therefore know when to control the heating or put the kettle on), and on the other it needs custom code to reflect your lifestyle. This latter point extends into the conversation of self-awareness of your own living patterns, and how they can be quantified.



He'll also comment on AI and service platforms (like Nest) which contain lessons for all IoT products.

Back

Mirai and Computer Vision

Home

Speaker Michael Schloh von Bennewitz
RoomAW1.125
TrackInternet of Things
Time11:45 - 12:10
Event linkView original entry

In this demonstrative session we learn the ropes of IoT nodes with computer vision. We develop a web camera application, consider security, and store or forward data for image processing.

Combining a number of components to make a full IoT system solves end to end use cases like secure image processing via webcams. With the limited processing power and storage of embedded systems, we learn to move data across networks, routing via gateways, and messaging to cloud applications that provide back end processing. To this end, we'll put a number of pieces of hardware in operation and take a cookie cutter approach to conclude with a system consisting of:






Key learnings include:






Hardware will be demonstrated in this typically hands-on (but adapted to room conditions) session. Novice users are welcome, and we will cut and paste source code with step-by-step instructions, maintaining a level of learning and fun.

Back

The IoT botnet wars, Linux devices, and the absence of basic security hardening

Home

Speaker GregDiStefano
RoomAW1.125
TrackInternet of Things
Time12:25 - 12:50
Event linkView original entry

This talk will cover the ongoing battle being waged by malware leveraging insecure Linux-based Internet of Things (IoT) devices. BrickerBot is an example of a recent malware strain that attacks connected devices and causes them to "brick," rendering an electronic device completely useless, in a permanent denial-of-service (PDoS) attack.



Additionally, the Mirai botnet consisted of connected printers, IP cameras, residential gateways, and baby monitors that flooded DNS servers. Mirai was behind the largest DDoS attack of its kind, in October 2016, with an estimated throughput of 1.2 terabits per second. It leveraged these enslaved devices to bring down large portions of the internet, including services such as Netflix, GitHub, HBO, Amazon, Reddit, Twitter, and DIRECTV. BrickerBot's goal appears to be to counter Mirai's: bricking insecure Linux devices so that malware such as Mirai can't subjugate them in another DDoS attack. We will take an in-depth look at the anatomy of the attack.



We will then dive into some basic security hardening principles which would have helped protect against many of these attacks. Some of the fundamental security concepts we will cover include:



Closing unused open network ports
Intrusion detection systems
Enforcing password complexity and policies
Removing unnecessary services
Frequent software updates to fix bugs and patch security vulnerabilities
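The checklist above can be sketched as a small audit routine (the thresholds, field names and default-password list are invented for illustration, not taken from the talk):

```python
def audit(device):
    """Flag the weaknesses the Mirai-era botnets abused, following the
    hardening checklist: unused ports, weak passwords, unnecessary
    services, and stale firmware."""
    findings = []
    for port in device["open_ports"]:
        if port not in device["needed_ports"]:
            findings.append("close unused port %d" % port)
    if device["password"] in {"admin", "root", "12345", "password"}:
        findings.append("default/weak password -> enforce a password policy")
    for svc in device["services"]:
        if svc not in device["needed_services"]:
            findings.append("remove unnecessary service %s" % svc)
    if device["days_since_update"] > 90:
        findings.append("firmware out of date -> apply security patches")
    return findings

# A typical vulnerable IP camera: telnet open, factory password, no updates.
camera = {"open_ports": [23, 80], "needed_ports": [80],
          "password": "admin", "services": ["telnetd", "httpd"],
          "needed_services": ["httpd"], "days_since_update": 400}
for finding in audit(camera):
    print(finding)
```

Mirai's scanner did essentially the inverse of this audit: it probed for open telnet ports and tried a short list of factory-default credentials, so closing either gap defeats it.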

Speaker Bio:



Greg Di Stefano works as a Software Developer on the open-source Mender.io project, an embedded Linux OTA updater. He has a keen interest in security, and has previously given talks at Embedded World and the Embedded Linux Conference.

Back

A Guided Tour of Eclipse IoT: 3 Software Stacks for IoT

Home

Speaker Benjamin Cabé
RoomAW1.125
TrackInternet of Things
Time13:05 - 13:30
Event linkView original entry

Whether you’re looking at the constrained devices that make for the "things" of the IoT, gateways that connect them to the Internet, or backend servers, there’s a lot that one needs to build for creating end-to-end IoT solutions. In this session, we will look at the typical software features that are specific to IoT, and see what’s available in the open source ecosystem (and more specifically Eclipse IoT) to implement them.
A live demo of the Eclipse IoT Open Testbed for Asset Tracking will allow the audience to see some of the projects (such as Eclipse Kura, or Eclipse Kapua) in action.

Back

Tizen:RT

Home

Speaker Philippe Coval
RoomAW1.125
TrackInternet of Things
Time13:45 - 14:10
Event linkView original entry

An overview of the Tizen ecosystem, with a focus on constrained device support, how to get started with Tizen:RT, plus a demo on an actual device.

The Tizen software platform has been designed to target consumer electronics;
since 2013 the OS has been powering many products on the market
(from smart watches to TVs, cameras or even white goods).



Even if this Linux based platform is very flexible,
the Linux kernel has minimum size requirements,
so Tizen can't be deployed on constrained devices
(ubiquitous microcontrollers).



To also target low-end devices, part of Tizen's technology was rebased onto the NuttX RTOS.
Seamless connectivity is still provided by IoTivity,
while new IoT features are becoming available to application developers too;
this whole stack is Tizen:RT!



This presentation will give an overview of the Tizen ecosystem
and explain how to get started with Tizen:RT using QEMU and the SDK;
finally, an IoT scenario will be demonstrated using a trusted system-on-module,
the ARTIK 055s.



Reference: https://wiki.tizen.org/Category:RT

Back

FOSS Platform for Cloud Based IoT Solutions

Home

Speaker Steffen Evers
RoomAW1.125
TrackInternet of Things
Time14:25 - 14:50
Event linkView original entry

The Internet of Things (IoT) is expected to connect billions of devices. The demand for open IoT platform technology is increasing, to enable and accelerate the development of cross-domain/cross-vendor use cases and to face the accompanying challenges, like connectivity with a wide range of heterogeneous protocols and large-scale messaging. Eclipse IoT on top of today's cloud technology, like Kubernetes, is a promising software stack for this goal. However, it will only be successful if it is widely adopted. This talk describes the status quo and gives an outlook on future development.

It is expected that in the coming years billions of devices will be connected to the Internet of Things (IoT). Many of them will interact with cloud-based solutions to provide additional services on the devices or on the web. To bring IoT to the next level, technologies supporting cross-domain/cross-vendor solutions are needed. There is already a lot of FOSS available to provide a technological base for building IoT solutions (e.g. Kubernetes). However, on top of it, software is needed for the connectivity challenges: support for domain-specific protocols, large-scale messaging, device management, and integration with existing infrastructure. Eclipse IoT aims to address these needs and provide a FOSS IoT framework that makes IoT development fast and simple. In the last year Eclipse IoT has made a lot of progress, and the underlying cloud-technology environment has seen a lot of changes. In addition, upcoming challenges like automated driving and connected vehicles have resulted in new projects to better support the automotive domain. This talk gives you an overview of the major Eclipse IoT projects and illustrates their capabilities with a short demo.

Back

IoT.js - A JavaScript platform for the Internet of Things

Home

Speaker Ziran Sun
RoomAW1.125
TrackInternet of Things
Time15:05 - 15:30
Event linkView original entry


IoT.js is a JavaScript platform that aims to provide interoperable services for the IoT world. Powered by JerryScript, an ultra-lightweight modular JavaScript engine, the platform is designed to bring the success of Node.js to constrained IoT devices. To address interoperability, IoT.js provides a Node.js-friendly architecture and comes with a subset of the Node.js APIs. Since Samsung OSG first presented IoT.js at FOSDEM in 2016, the platform has undergone rapid growth over the last couple of years. With many active, high-quality contributions from the IoT.js and JerryScript open source communities, IoT.js released version 1.0 in July 2017, presenting a rich set of features and hardware and tool support for developers. In this talk, we look at recent developments in IoT.js and share our vision and future plans. The talk is supported by a demo of IoT.js running on a constrained device that seamlessly connects to Node.js for third-party cloud access.

Back

CANCELLED The dark side of Internet of things

Home

Speaker Dipesh Monga
RoomAW1.125
TrackInternet of Things
Time15:45 - 16:10
Event linkView original entry

With the advent of the Internet of Things, monitoring and controlling everything, such as the coffee maker, lights, TV and fridge, over the internet has become child's play. But are we really making our lives simpler, or diving into a vast ocean which is getting deeper and deeper? In today's world, where the security of our data is a major concern and numerous websites constantly track what we search for, what we watch and our location, adding another dimension, i.e. physical entities, to what was previously limited to data is really a big question.



From this talk the audience will take away an understanding of the privacy concerns related to IoT, and of how they may be putting their personal information at risk by connecting their physical entities to the internet. Is it really safe to connect things to the internet?

Talk Structure:



General Discussion on What Is IoT and its future 

The pros and cons of connecting things to the internet

How exploiters can breach the security and know our lifestyles

A smart home rigged with cameras can provide an intruder with various details about the victims, including their lifestyle and visible passwords. That might result in identity theft, with access to all the access codes, including biometric passwords.

We would suggest methods for handling the security threats, with channelled password inputs and multi-level user verification.

Implementing basic IoT attacks: accessing devices via physical, remote or local network access

During the session we will implement some basic IoT attacks (e.g. a quadcopter with a development board mounted, to demonstrate sniffing attacks)

Live demo & session on increasing security: participants will learn how to increase security and mitigate a few of these attacks.

Taking steps beyond: we will also discuss mechanisms that can be adopted to ensure internet privacy.
Back

OSS-7: an opensource DASH7 stack

Home

Speaker Glenn Ergeerts
RoomAW1.125
TrackInternet of Things
Time16:25 - 16:50
Event linkView original entry

In this talk we will introduce how the flexibility of the DASH7 Alliance Protocol can be used to solve many IoT use cases. Starting from a simple embedded application which measures sensor values, we will demonstrate how the full-stack approach of DASH7 enables you to (re-)configure the behavior of the network stack and application, over the air, without touching the application code. Different communication schemes will be discussed, together with the trade-offs in aspects like energy consumption and downlink latency. We show how the simple but powerful API (based on file operations) allows you to approach a network like a distributed database, which you can address based on data queries instead of node addresses. Finally, we will also discuss the implementation of OSS-7, related tools and the road forward.

Back

Intro Geospatial devroom

Home

Speaker Johan Van de Wauw
RoomAW1.126
TrackGeospatial
Time09:00 - 09:05
Event linkView original entry

Back

Join the FREEWAT family

Home

Speaker Pieter Jan Haest
RoomAW1.126
TrackGeospatial
Time09:05 - 09:30
Event linkView original entry

FREEWAT (FREE and open source software tools for WATer resource management) is a HORIZON 2020 project financed by the EU Commission under the call WATER INNOVATION: BOOSTING ITS VALUE FOR EUROPE. FREEWAT's main result is an open source and public domain GIS-integrated modelling environment (the FREEWAT platform) for the simulation of water quantity and quality in surface water and groundwater, with an integrated water management and planning module. The modelling environment is designed as a composite plugin for QGIS v2.X. It comprises tools for the analysis, interpretation and visualization of hydrogeological and hydrochemical data and quality issues, also focusing on advanced time series analysis. It interfaces models related to the hydrological cycle and water resources management: flow models, transport models, crop growth models, management and optimization models. It also contains tools to perform model calibration, sensitivity analysis and uncertainty quantification. Finally, some additional tools are included for general GIS operations to prepare input data, and for post-processing functionalities (the OAT module – Observation and Analysis Tool).

Back

Bicycle-sharing stations: profiling and availability prediction

Home

Speaker Raphaël Delhome
RoomAW1.126
TrackGeospatial
Time09:30 - 10:00
Event linkView original entry

This presentation will show an exploratory data analysis about bicycle-sharing stations in two French cities (Lyon and Bordeaux).



Keywords: Data Science, Prediction, Machine Learning, Python, Open Data, GIS

Thanks to Open Data portals, bicycle-sharing availability data are freely accessible.
The main issue linked to these data is to predict bicycle availability for each sharing station.



The talk will follow a classic data workflow:






After a short introduction to Luigi, a data pipeline Python library, the second
part will show how to cluster sharing stations starting from their hourly
availability profiles. The clustering will be done with KMeans, one of the
most popular unsupervised machine learning models. Then, some feature
engineering methods will be carried out in order to prepare the data for
availability prediction. Finally, a short-term (e.g. one hour) bicycle
availability prediction will be proposed.
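As a sketch of the clustering step, hourly profiles can be grouped with scikit-learn's KMeans. The data below is synthetic, and the shape (24 hourly values per station) is an assumption based on the description above, not the talk's actual dataset:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: one row per station, one column per hour of the day,
# values = mean share of available bikes for that hour (0.0 - 1.0).
rng = np.random.default_rng(42)
profiles = np.vstack([
    rng.normal(0.8, 0.05, (10, 24)),   # stations that tend to stay full
    rng.normal(0.2, 0.05, (10, 24)),   # stations that tend to stay empty
])

# Cluster stations by the shape of their hourly availability profile.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(profiles)
labels = model.labels_                 # one cluster label per station
```

The cluster labels can then serve as an engineered feature for the short-term availability prediction mentioned above.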



A word will be said about the set of Python libraries used in this project:
luigi, pandas, seaborn, scikit-learn, folium and xgboost.

Back

Pronto Raster: A C++ library for Map Algebra

Home

Speaker Alex Hagen-Zanker
RoomAW1.126
TrackGeospatial
Time10:00 - 10:30
Event linkView original entry

The Pronto Raster library is a C++ library for Map Algebra operations. Map Algebra is a long-established conceptual framework for geographical data analysis. It is a versatile and highly generic framework, classifying local, focal, and zonal operations. However, existing libraries and tools that implement Map Algebra operations are not as generic and instead offer a limited set of specific functions. The Pronto Raster library aims to overcome this and provides an efficient computational framework that allows efficient implementation of local, focal and zonal operations using user-specified functions.



The library uses GDAL to access and write raster data. A central concept of the library is the Raster, which is essentially a Range that iterates over the cells in a raster of values. The core focal and zonal operations produce Expression Templates that model the Raster concept. Therefore the outcome of zonal or focal operations on one or more Rasters is another Raster that does not hold data itself but refers to the data in the input rasters and combines the data lazily once the outcome Raster is iterated over. It thus becomes possible to combine and nest operations on rasters without creating temporary files. An additional benefit is that it is trivial to apply functions to only compute a subsection of the output Raster, which in turn makes the library very amenable to future parallelization.
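Pronto Raster itself is C++; the following pure-Python sketch only illustrates the lazy-combination idea described above (a result that refers to its inputs and computes cells on demand, avoiding temporary grids), not the library's actual API:

```python
# Conceptual sketch (not Pronto Raster's API): a "raster" is anything that
# can be iterated cell by cell; combining two rasters yields a new lazy
# raster that computes each cell on demand, so no temporary grid is stored.
class LazyRaster:
    def __init__(self, cells):
        self._cells = cells          # a callable returning a fresh cell iterator

    def __iter__(self):
        return self._cells()

    def combine(self, other, op):
        # The result only refers to its inputs; `op` is applied lazily,
        # cell by cell, once the result is iterated over.
        return LazyRaster(lambda: (op(a, b) for a, b in zip(self, other)))

a = LazyRaster(lambda: iter([1, 2, 3, 4]))
b = LazyRaster(lambda: iter([10, 20, 30, 40]))
total = a.combine(b, lambda x, y: x + y)   # nothing computed yet
result = list(total)                        # cells computed here
```

Because combined rasters are themselves rasters, operations can be nested freely, which is the property that makes sub-section evaluation and future parallelization straightforward.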

Back

GDAL Tips and Tricks

Home

Speaker Jeremy Mayeres
RoomAW1.126
TrackGeospatial
Time10:30 - 11:00
Event linkView original entry

I've learned a lot about using GDAL over the years at Planet, from how to manage the installation to using it in Python, ending with some modern use cases involving Cloud-Optimized GeoTIFFs.

Back

GRASS GIS in the sky

Home

Speaker Markus Neteler
Moritz Lennert
Markus Metz
RoomAW1.126
TrackGeospatial
Time11:00 - 11:30
Event linkView original entry

GRASS GIS has contained remote sensing tools for decades, catering to the needs of users since the first generations of Landsat satellites. In this talk we will present the current efforts of integrating modern data sources and modern approaches into GRASS GIS. Tools exist for pre-processing recent very high resolution images, object-based image analysis (OBIA), Lidar data handling, etc. At the same time, efforts have gone into ensuring the scalability of tools for huge data sets. The presentation will provide a brief update on the general state of GRASS GIS development, go on to an overview of the remote sensing tools available, and end with a use case on how to use GRASS GIS for time series processing in a high performance cluster/grid computing environment.

GRASS GIS has existed for over 30 years and provides a very large and diverse set of state-of-the-art tools for the analysis of spatial data. Less known to many, remote sensing tools have been part of it almost from the beginning. GRASS GIS provides a series of imagery analysis tools for pre-processing (radiometric correction, cloud detection, pansharpening, etc), creating derived indices (vegetation indices, texture analysis, principal components, Fourier transform, etc), classifying (management of training zones, different classifiers, validation tools), and producing other derived products such as evapotranspiration and energy balance models. Next to these tools for satellite images, other tools exist for the handling of aerial photography for the creation of orthophotos, and for the import and analysis of Lidar data.



In addition to these tools, efforts have gone into integrating current state-of-the-art methods such as object-based image analysis and machine learning. A complete toolchain exists to segment images using different algorithms, to create superpixels, to collect statistics characterizing the resulting objects, and to apply machine learning algorithms for classification. New modules also include unsupervised segmentation parameter optimization and active learning. Options for pixel-based classification have also been enlarged to a host of machine learning algorithms.



A specific aspect of treating the rapidly increasing amounts of satellite data is the scalability of tools. GRASS GIS has a long tradition of computational efficiency, and work is continuously ongoing to increase both computational speed and the handling of huge datasets. Most relevant tools provide the choice to treat data either completely in memory, if enough RAM is available, or with a disk-based tiling scheme that allows treating data much larger than available memory resources would otherwise allow. Through its modular structure, GRASS GIS also makes it easy to parallelize certain operations, thus opening the door to the use of cluster/grid computing environments.



The presentation will provide a brief, general introduction to the state of GRASS GIS development, focusing on the recent release of version 7.4. It will then provide an overview of the different elements in the remote sensing toolbox of GRASS GIS. It will end with an explanation of how these tools can be used in grid/cluster computing environments, demonstrated through an example of processing large time series of satellite data.

Back

GeoPandas: easy, fast and scalable geospatial analysis in Python

Home

Speaker Joris Van den Bossche
RoomAW1.126
TrackGeospatial
Time11:30 - 12:00
Event linkView original entry

The goal of GeoPandas is to make working with geospatial vector data in python easier. GeoPandas (https://github.com/geopandas/geopandas) extends the pandas data analysis library to work with geographic objects and spatial operations.



Pandas is a package for handling and analysing tabular data, and one of the drivers of the popularity of Python for data science. GeoPandas combines the capabilities of pandas and shapely (the Python interface to the GEOS library), providing geospatial operations in pandas and a high-level interface to collections of shapely geometries. It combines the power of a whole ecosystem of geo tools by building upon the capabilities of many other libraries, including fiona (reading/writing data with GDAL), pyproj (projections), rtree (spatial index), ... Further, by working together with Dask, it can also be used to perform geospatial analyses in parallel on multiple cores or distributed across a cluster. GeoPandas enables you to easily do operations in Python that would otherwise require a spatial database such as PostGIS.
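As a minimal illustration of the pandas + shapely combination, a GeoDataFrame supports ordinary pandas-style indexing driven by vectorized spatial predicates (the data here is a toy example, not from the talk):

```python
import geopandas as gpd
from shapely.geometry import Point

# Toy data: three named points in an arbitrary planar coordinate system.
gdf = gpd.GeoDataFrame(
    {"name": ["a", "b", "c"]},
    geometry=[Point(0, 0), Point(1, 1), Point(5, 5)],
)

# A circular area of interest, built with plain shapely.
zone = Point(0, 0).buffer(2)

# Spatial predicate applied column-wise, used as a pandas boolean filter.
inside = gdf[gdf.geometry.within(zone)]
```

The same pattern extends to joins (`gpd.sjoin`), projections, and file I/O, which is what makes many PostGIS-style workflows possible in a plain Python session.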

Back

Open source Big Geospatial Data analytics

Home

Speaker Marc Vloemans
RoomAW1.126
TrackGeospatial
Time12:00 - 12:30
Event linkView original entry

Big Spatial Data technology based on geospatial attributes for the Cloud is relatively new to the developer community and organisational ecosystem. To find the latest proven software one has to look across the Atlantic Ocean, where a suite of specialised open spatial solutions is emerging. For us in Europe this creates opportunities to innovate traditional mapping services into higher value added business information provision.
Meet the innovative projects from across The Big Pond such as GeoGig (versioning/data history), GeoMesa (database) and GeoWave (distributed storage). Building upon Hadoop, Spark and Cassandra we are able to integrate the latest technology for robust and affordable geospatial solutions, to deploy Big Spatial Data answers to Big Data challenges.

Every day sensors, satellites and social media generate quintillions of bytes and a large portion of the data is location aware. The geographical perspective is especially important with Big Data as it allows us to derive new insights and explanations often unrecognized without a spatial eye: to see the unseen.
This trend enables our European open communities to go beyond traditional mapping and grow into the realm of IoT, LiDAR, Smart Cities and the Connected Car.



However, Big Spatial Data technology based on geospatial attributes for the Cloud is relatively new to the developer community and organisational ecosystem. To find the latest proven software one has to look across the Atlantic Ocean, where a suite of specialised open spatial solutions is emerging. For us in Europe this creates opportunities to innovate traditional mapping services into higher value added business information provision.



Meet the innovative projects from across The Big Pond such as GeoGig (versioning/data history), GeoMesa (database) and GeoWave (distributed storage). Building upon Hadoop, Spark and Cassandra we are able to integrate the latest technology for robust and affordable geospatial solutions, to deploy Big Spatial Data answers to Big Data challenges. Use cases involve improving situational understanding, real time monitoring, better decision-making and actionable intelligence.



There is no need for us to reinvent the wheel: adopting this cutting-edge technology and building upon it will be key to keeping us abreast of European Big Data developments.

Back

Spatial Support in MySQL 8.0

Home

Speaker Norvald H. Ryeng
RoomAW1.126
TrackGeospatial
Time12:30 - 13:00
Event linkView original entry

MySQL 8.0 is right around the corner, and the most important new spatial feature
is support for geography and ellipsoidal coordinate reference systems
(CRSs). The final release is not out yet, but there is a release candidate, so
we know what to expect.



In this talk we'll go on a tour of the spatial support in MySQL with a focus on
the new features in 8.0, especially those related to geography and ellipsoidal
CRSs.



What can MySQL do? Which ellipsoids/CRSs does MySQL support? Can I create my
own? Which functions can I use? How does it work? Are there limitations? These
questions, and more, will be answered by this talk.

Back

Distance computation in Boost.Geometry

Home

Speaker Vissarion Fysikopoulos
RoomAW1.126
TrackGeospatial
Time13:00 - 13:30
Event linkView original entry

What is the shortest distance between two administrative units in a city? How similar are two hurricane trajectories? In the heart of both questions there is distance computation. In this talk we will discuss distance computation in Boost.Geometry, the library that is currently being used to provide GIS support to MySQL.



We study and implement several families of distance algorithms: iterative methods, series approximations, elliptic arc length, flat-earth approximation, and spherical formulas. We show particular examples using those algorithms to compute the distance between points or polygons. Finally, we compare them with respect to performance and accuracy. Our ultimate goal is a distance function that, given a user-defined accuracy, utilizes the most efficient algorithm.
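Two of the algorithm families mentioned, the spherical formula and the flat-earth approximation, can be sketched in a few lines. This is plain Python for illustration, not Boost.Geometry code, and the coordinates are merely illustrative:

```python
import math

EARTH_RADIUS_M = 6371008.8  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Spherical (great-circle) distance in metres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def flat_earth(lat1, lon1, lat2, lon2):
    """Equirectangular (flat-earth) approximation: cheaper, fine at short range."""
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return EARTH_RADIUS_M * math.hypot(x, y)

# Brussels to Paris (~264 km): at this scale the two methods agree closely,
# which is exactly the accuracy/performance trade-off the talk discusses.
d_sph = haversine(50.85, 4.35, 48.86, 2.35)
d_flat = flat_earth(50.85, 4.35, 48.86, 2.35)
```

An ellipsoidal (geodesic) method would add further accuracy at further cost, which is why an accuracy-driven dispatcher between such families is attractive.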



We sum up by briefing next steps of development in Boost.Geometry and ideas for GSoC'18.

Back

Building Rock Climbing Maps with OpenStreetMap

Home

Speaker Viet Nguyen
RoomAW1.126
TrackGeospatial
Time13:30 - 14:00
Event linkView original entry

Traditionally, natural rock climbing walls and routes have been developed on a volunteer basis and openly shared among the climbing community. However, open-license data sets for climbing routes in electronic format are not widely available. The OpenBeta Initiative project is building an open source app to make it easier for rock climbers to contribute climbing routes and GPS coordinates to OpenStreetMap.

Traditionally, natural rock climbing walls and routes have been developed on a volunteer basis and openly shared among the climbing community. However, open-license data sets for climbing routes in electronic format are not widely available. The OpenBeta Initiative project is building an app to make it easier for rock climbers to contribute climbing routes and GPS coordinates to OpenStreetMap. In this lightning talk I will discuss how the OpenBeta app uses various open source GIS frameworks, and ways to overcome integration challenges with OpenStreetMap.



Project website: https://openbeta.io

Back

Building OSM based web app from scratch

Home

Speaker Nils Vierus
RoomAW1.126
TrackGeospatial
Time14:00 - 14:30
Event linkView original entry

There are a lot of tools to build a web app from scratch - as a novice you have to find your way through all these tools and pick the right ones.

If you are an OpenStreetMap enthusiast and want to create a simple web mapping application based on OSM data you have the choice among a huge amount of software and tools.
If you are, in addition, a hobby developer with a low budget but without a strong background regarding IDEs, toolsets, provider services, databases and so on, you will be lost in the jungle…



I will describe how I found my way through this jungle as a novice and how I built my web app finde.cash! (www.findatm.cash) from scratch.
I will cover the following topics:






OpenStreetMap is a database storing a huge amount of map-relevant data, and there should be many more small web applications using this data for useful maps!



Finding the right tool chain is crucial to get it up and running.

Back

Privacy aware city navigation with CityZen app

Home

Speaker Redon Skikuli
RoomAW1.126
TrackGeospatial
Time14:30 - 15:00
Event linkView original entry

Presenting the reasons for initiating the CityZen project and the vision for the near future, featuring blockchain principles.

Navigating our cities using free-software, community-based mobile apps in 'private mode' might sound like an easy task in 2017, but with a little research you will quickly be disappointed. There are related platforms out there, but some of them lack good user experience, and others are not community-led or are drifting away from the free software freedoms due to their partnerships with companies that do not respect our privacy online. These are only some of the reasons the CityZen app was born in Tirana, as an OSM-based Android app that helps us navigate our cities without tracking our location and activities. At the same time the app offers easy, mobile-optimized OpenStreetMap editing for contributions on the go.
CityZen aims to be empowered by the community in terms of development and promotion, and to serve as a digital wallet for a cryptocurrency used to pay for goods and services from POIs directly through the app. Soon CityZen will apply blockchain principles as a decentralized process for managing the cryptocoin-based revenue gathered through the app, making it one of the few platforms that help you find physical POIs and buy goods without giving away your real identity.

Back

Every subway network in the world

Home

Speaker Ilya Zverev
RoomAW1.126
TrackGeospatial
Time15:00 - 15:30
Event linkView original entry

The first subway station was mapped in OpenStreetMap eleven years ago. Since then, people have been adding stations and routes in each of the 170 cities with subway or light rail systems. But only a few months ago were the mapped routes first used for routing. Unsurprisingly, the quality of the data was bad. In this talk Ilya will explain the subway mapping principles, show common errors, talk about the community's reaction to tidying up metro systems, and present the tool that extracts metro routes from OSM into easy-to-use data structures, which is used for the MAPS.ME application.

Back

Rendering map data with Mapnik and Python

Home

Speaker Hartmut Holzgraefe
RoomAW1.126
TrackGeospatial
Time15:30 - 16:00
Event linkView original entry

Mapnik is an open source toolkit for rendering maps, probably best known for producing the map tiles for openstreetmap.org. It provides a stylesheet language, input handlers for different GIS data formats, and C++ and Python API bindings.

Mapnik is an open source toolkit for rendering maps, probably best known for producing the map tiles for openstreetmap.org. It provides a stylesheet language, input handlers for different GIS data formats, and C++ and Python API bindings.



I'll take a quick tour of the different aspects of Mapnik, using the Python API bindings:






As an application I'll show step by step how to combine OSM and GPX data into a printable PDF map.

Back

Efficient and interactive 3D point cloud processing

Home

Speaker Mathieu Carette
RoomAW1.126
TrackGeospatial
Time16:00 - 16:30
Event linkView original entry

I will demonstrate the tools we use to process large scale point cloud datasets, and our interactive workflow which enables us to quickly fine-tune custom 3D modelling algorithms.

With the advent of many extensive, openly accessible point cloud datasets (like Flanders' region-wide lidar dataset), processing those point clouds becomes increasingly challenging, requiring efficient, robust algorithms. While off-the-shelf algorithms exist for common tasks like ground/non-ground segmentation, advanced 3D modelling still remains mostly in the realm of tailor-made algorithms.



Starting from established processing tools like pdal, I'll show an interactive workflow to iteratively explore and develop custom 3D modelling algorithms, through the web-based jupyter interface and in particular the ipyvolume library.



Time allowing, I will discuss some work in progress for pdal, as well as upcoming tools in the jupyter ecosystem.

Back

AMENDMENT Mapping FOSDEM for accessibility

Home

Speaker Johan Van de Wauw
RoomAW1.126
TrackGeospatial
Time16:30 - 17:00
Event linkView original entry

Since the last edition of FOSDEM, different volunteers have been working to create an indoor map of FOSDEM, complete with routing for wheelchairs/accessibility/...
This map is available at https://nav.fosdem.org/ .
It is created using C3NAV, an application built for the Chaos Communication Congress.
In this talk I want to focus on the application: how the map was built and why plain OpenStreetMap was not sufficient. But above all I hope to get some feedback: where could we improve, and how can we improve integration with OSM/other applications/...



Please note that this talk replaces the talk "3D OSM Plugin API for ESA-NASA Web WorldWind" because unfortunately the presenter could not get a travel permit.

Back

Managing build infrastructure of a Debian derivative

Home

Speaker Andrew Shadura
RoomK.3.201
TrackDistributions
Time09:00 - 09:25
Event linkView original entry

Apertis is a Debian-derived platform for infotainment in automotive vehicles.



Being a Debian derivative, Apertis doesn’t use typical Debian infrastructure software, so an infrastructure to build it had to be created using Jenkins, Open Build Service and other tools to provide continuous integration and package and image builds. Having managed the build infrastructure of Apertis for some time, I’m going to share my experience of the challenges of working on it, and how we solve the issues we are confronted with.

Back

GRUB upstream and distros cooperation

Home

Speaker Daniel Kiper
RoomK.3.201
TrackDistributions
Time09:30 - 09:55
Event linkView original entry

The presentation will discuss the current state of GRUB upstream development
and cooperation with distributions.

Some time ago the GRUB maintainership changed, to streamline project maintenance.
Right now the maintainers are trying to improve various development aspects, including
the patch review process and cooperation with distros. They would like to highlight
current challenges and better understand the current distro requirements regarding GRUB.



The presentation will show the GRUB maintainers' point of view regarding cooperation.
It will also discuss the issues related to patches in distros which have never
been upstreamed. The GRUB maintainers have some solution proposals for this problem,
which will be presented too. However, they also expect distro maintainers to
express their opinions during the Q&A session.

Back

Distributions are not democracies

Home

Speaker Richard Brown
RoomK.3.201
TrackDistributions
Time10:00 - 10:55
Event linkView original entry

This session will explore many different governance and decision making models in the Distribution world.
Few projects aspire to operate under wholly democratic principles. This talk will explore some of the many flaws and problems with this approach, and how projects often struggle to operate democratically and at scale. The session will also discuss the benefits and weaknesses of less democratic governance models, such as Technical Committees, Governing Boards, and Benevolent Dictators for Life. Finally, the talk will explore a simple but scalable model of Distribution governance: empowering, enabling, and supporting the contributors in your project to create an environment where "Those that do, decide".

This talk will go into some detail about the various different governance and decision making processes of various Distribution projects. This will include both technical and organisational decision making within those projects.
Examples will include the very democratic processes within the Debian Project, the more dictatorial processes within the Linux kernel and Ubuntu, and hybrid models like those seen in the Fedora Project.
Care will be taken not to be too judgemental or divisive, but the session will make factual observations about the strengths and weaknesses of each example. Those strengths and weaknesses will focus on what each model means for the project both technically (i.e. how well it helps the project put together good code) and socially (i.e. how well it helps the project remain a sustainable community of hopefully happy people).



It will be the speaker's conclusion that both democratic and dictatorial models are ultimately flawed: neither scales well, nor actually provides an engaging environment for new contributors joining a long-established project.



The session will then present a model that focuses on the empowerment of contributors, "those that do, decide", explaining how such a model has long, ancient roots in the origins of many open source projects.
The talk will briefly hypothesise why the model is rarely found in distribution projects today, but present in some detail the core principles which open source projects should embrace from this philosophy, and (using openSUSE as an example) explain how to establish a healthy series of checks and balances to ensure a community following this model is both self-sustainable and relatively insulated from the problems that more democratic or dictatorial governance models often cause over time.

Back

Developing Enterprise and Community distributions at the same time, impossible ?

Home

Speaker Frederic Crozat
RoomK.3.201
TrackDistributions
Time11:00 - 11:55
Event linkView original entry

Starting with openSUSE Leap 42.2, a lot of cooperation has been done to bridge the gaps between the openSUSE and SUSE Linux Enterprise distributions. Things have been improving nicely with openSUSE Leap 42.3.
We'll go into detail on what this cooperation means both for openSUSE contributors and for SUSE, and how we ensure it takes place.



We'll share a few statistics and discuss the new SLE 15 development and how it is done in harmony with openSUSE Tumbleweed (rolling release).

This talk is to share with other distributions our best practices, learned over time, and how we enforce them. We also want to share how beneficial this work is for all parties.



We are also eager to learn how other distributions work in similar environments, and whether we can learn from them.

Back

Introducing BuildStream

Home

Speaker Tristan Van Berkom
RoomK.3.201
TrackDistributions
Time12:00 - 12:50
Event linkView original entry

In this talk, I will be introducing our new distribution-agnostic
build tool, which we presently use in GNOME for the purpose of
unifying our software build story, allowing us to achieve multiple
goals using the same flexible build metadata. I will talk about the
motivations behind creating BuildStream, how it has been helping us
so far, and how the rich feature set we've created can help software
developers and integrators.

The target audience for this talk includes software developers who
have the need or experience of creating packages and bundles for
various downstreams as well as software integrators and distributors
in general.



Many build systems available today are tightly coupled with a given
distribution mechanism, be it packaging technology for specific
distributions, or bundling technology for specific platforms. As a
result, the build metadata created for building for one platform
cannot normally be reused for another target, and upstream maintainers
typically need to maintain various sets of build metadata to get
their software out to users on different platforms.



One of our goals in BuildStream is to reduce the redundancy of
build metadata overall, by providing a declarative format and
build system which allows one to model build pipelines flexibly,
allowing the same build metadata to be leveraged to perform different
tasks.



Another goal for us is to bring the software developer and the
integrator closer together. By providing a workflow where the
developer of a single module is easily capable of testing their
changes against an integrated system, or an integrator can easily
change a line of code at any level of the stack and quickly test
the integrated result, we hope to bridge the gap between development
and integration.

Back

Flatpak and your distribution

Home

Speaker Simon McVittie
RoomK.3.201
TrackDistributions
Time13:00 - 13:25
Event linkView original entry

Flatpak uses unprivileged Linux containers to install and sandbox desktop apps. Some Linux distributions are getting excited about Flatpak, while others are not so sure yet. This talk aims to describe what Flatpak is good for, how it can help your distribution, and how your distribution can help Flatpak.

This talk is intended for an audience of distribution developers who are aware of the general purpose and structure of Flatpak, but perhaps not the finer details. The same speaker will be giving an introduction to Flatpak in the Packaging devroom on Saturday.



Topics I plan to cover are:




Back

Unix? Windows? Gentoo!

Home

Speaker Michael Haubenwallner
RoomK.3.201
TrackDistributions
Time13:30 - 14:25
Event linkView original entry

In need to support Unix, Linux and Windows? Within one application ecosystem?
Having a C/C++ application natively support both Unix/Linux and Windows operating systems is known to be a mess for both the application's source code and its build system. Nevertheless, developers are still expected to provide exactly that, depending on the nature and lifetime of an application. This talk is about how SSI Schäfer IT Solutions achieves the goal of concurrently supporting both operating system worlds within one single C/C++ application ecosystem that was originally targeting the Unix world only.

While the compilers (GNU gcc and Microsoft cl.exe here) usually differ little in their support for the various C/C++ standards, there are major differences between their command-line interfaces, as well as in the kernel features provided by Unix/Linux and Windows operating systems. To minimize the impact on both the application's source code and build system, various helper libraries and applications have evolved over time. This talk is about how Cygwin is able to run Gentoo Prefix these days, which itself runs on multiple Unix/Linux operating systems, on one hand, and a wrapper around the Microsoft Visual Studio toolchain called Parity, providing a command-line interface similar to the (more popular) MinGW toolchain, on the other hand.

Back

Distributing OS Images with casync

Home

Speaker Lennart Poettering
RoomK.3.201
TrackDistributions
Time14:30 - 15:25
Event linkView original entry

casync is a project that combines the idea of the rsync algorithm with git's idea of a content-addressable file system into one tool for efficient delivery of OS images over the Internet. It has a focus on simplicity, accuracy and security.

casync is a new tool for delivering OS images over the Internet. It takes inspiration from the well-known tools rsync and git, and combines them in a new tool for efficient, high-frequency delivery of OS images over the Internet. It is suitable for delivering VM, container, and IoT OS images.



In this talk I'd like to explain both the use case and the technical background of the tool.
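
The rsync-meets-git idea can be sketched in a few lines. The following Python sketch is illustrative only (casync's real chunking function and serialization format differ): it splits a blob at content-defined boundaries and stores each chunk under its SHA-256 digest, so successive image versions automatically share their unchanged chunks:

```python
import hashlib
import random

def chunk_boundaries(data, mask=0x3FF, window=48):
    """Content-defined chunking with a naive rolling sum: a cut point is
    declared wherever the hash of the last `window` bytes matches `mask`,
    so identical content produces identical chunks regardless of offset."""
    boundaries, h = [], 0
    for i, byte in enumerate(data):
        h = (h + byte) & 0xFFFFFFFF
        if i >= window:
            h = (h - data[i - window]) & 0xFFFFFFFF
            if (h & mask) == mask:
                boundaries.append(i + 1)
    if not boundaries or boundaries[-1] != len(data):
        boundaries.append(len(data))
    return boundaries

def store_chunks(data, store):
    """Split `data` and file each chunk under its SHA-256 digest,
    mimicking a content-addressable chunk store; duplicate chunks
    collapse to a single stored copy."""
    index, start = [], 0
    for end in chunk_boundaries(data):
        chunk = data[start:end]
        digest = hashlib.sha256(chunk).hexdigest()
        store[digest] = chunk
        index.append(digest)
        start = end
    return index

store = {}
random.seed(7)
image_v1 = bytes(random.randrange(256) for _ in range(1 << 14))
image_v2 = image_v1 + b"one appended update block"
idx1 = store_chunks(image_v1, store)
idx2 = store_chunks(image_v2, store)
print(len(idx1), len(idx2), len(set(idx1) & set(idx2)))
```

Because boundaries are derived from content rather than fixed offsets, a change near the end of the image only invalidates the chunks it actually touches.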

Back

The half rolling repository model

Home

Speaker Neofytos Kolokotronis
Room K.3.201
Track Distributions
Time 16:30 - 16:45
Event link View original entry

Since shortly after its inception over 10 years ago, Chakra has been shipping based on a unique half-rolling repository model. We call it half-rolling, because even though we provide frequent updates to all the applications and the desktop environment we ship, we are more conservative in updating the packages at the core of the system. This model aims at providing a stable system to users while at the same time enabling them to enjoy the latest versions of their favorite applications and games. In this talk I will explain the details behind the half-rolling release model, the advantages and disadvantages it has for users and the challenges we face in Chakra's implementation.

In Chakra's half-rolling release model, packages like the Linux kernel, the Xorg server, graphics drivers, systemd and other important system libraries on which these packages might depend are updated periodically, usually a couple of times per year. All the applications and the libraries they require, together with Plasma, KDE's desktop environment that is the default and only option for Chakra, are updated continuously.



Our experience so far has shown that the half-rolling release model can fit into a variety of user profiles. Chakra's user base includes casual users, enthusiasts, developers, gamers, small businesses, and school labs. We believe other distributions could benefit from similar implementations.



Whether you are a user or a developer, your distribution's choice of release model can affect your options, your workflow and your system's status. This talk is intended for those looking for alternatives to the traditional fully rolling or standard release models that the majority of distributions adopt, and who want to learn more about the possibilities provided by the half-rolling release model.

Back

Spawny

Home

Speaker Marcel Hollerbach
Room K.3.201
Track Distributions
Time 16:50 - 17:00
Event link View original entry

Legacy display managers rely heavily on the design of the X server, for example detecting whether a window manager grabs the root window.
Since Wayland is getting more and more attention on modern distributions, it's time for something that is platform-independent, toolkit-independent and lightweight.


Spawny is a piece of software that aims to fill this gap.
It has a minimal API for a reliable way of logging a user into a system. Core design decisions are:
- Always give the user the possibility to log in, even after a session crashes
- Support every application that can be started from a tty as a session in a greeter UI



The presentation will give an overview of a few design decisions, core features, how it works, and how to work with it.

Back

Introduction to LLVM

Home

Speaker Mike Shah
Room K.3.401
Track LLVM Toolchain
Time 09:00 - 10:30
Event link View original entry

The goal of this talk is to introduce programmers with C++ experience to tool building with LLVM. My expectation is people know C++, have heard of, but not used LLVM. Examples provided on the slides will be small but useful snippets, and it is the expectation you will be able to build the examples provided within a few hours.

Back

Connecting LLVM with a WCET tool

Home

Speaker Rick Veens
Room K.3.401
Track LLVM Toolchain
Time 10:35 - 11:05
Event link View original entry

During my master's project I worked on combining LLVM with a WCET tool.



Worst-case execution time (WCET) is the longest time a program can take to execute, typically measured in cycles.
This information is typically of interest for code that has timing requirements, such as embedded systems in cars.



The project involved combining an open-source tool called SWEET with LLVM.
This involves using data structures in LLVM to emit the interface language of the SWEET tool.
As a proof of concept, the ARM Cortex-M0 was chosen, and an attempt was made to automatically generate a WCET analysis during compilation.



This talk will explain my findings in the project and explain the concept of WCET in general.

The SWEET tool is an open-source tool that allows the computation of the worst-case execution time (WCET) of software.
WCET is important for software that has hard real-time requirements
(for example, an interrupt routine that needs to finish within a pre-determined number of cycles).



SWEET interprets its own language and does not work on a processor's instruction set architecture (ISA) or on
high-level languages like C or C++.
The language SWEET uses is called ALF (ARTIST2 flow language).



This ALF language is what makes SWEET so special: it is possible to transform an ISA to ALF.
The binary does not have to be decompiled; ALF can be generated from the compiler.



For example, an add instruction like add r0, r1, r2 becomes:
{ store
  { addr 32 { fref 32 r } { dec_unsigned 32 0 } }
  { add 32
    { load 32 { addr 32 { fref 32 r } { dec_unsigned 32 1 } } }
    { load 32 { addr 32 { fref 32 r } { dec_unsigned 32 2 } } }
    { dec_unsigned 1 0 }
  }
}



The goal of this project is to generate ALF code using the semantic information available in LLVM
(information about what each instruction does, i.e. the computation it performs).
For this purpose, things like the DAG patterns and register classes are used.



During the project, the ARM Thumb architecture was chosen as the example to generate ALF for.



It is expected that this could be used for other architectures as well.

Back

Compiler-assisted Security Enhancement

Home

Speaker Paolo Savini
Room K.3.401
Track LLVM Toolchain
Time 11:20 - 12:00
Event link View original entry

This talk will be about adding features to LLVM that improve the security of the code.
We will briefly talk about side-channel attacks, focusing on information leakage due to timing behaviour, and introduce the concept of 'bit-slicing' as a possible countermeasure against this kind of leakage.
We'll then talk about the LADA and SECURE projects and about my contribution: the addition to LLVM of several tools that can automatically transform sensitive regions of the code into 'bit-sliced' format.
We will then discuss the benefits and the limits of such transformations.

Information leakage via side channels is a widely recognized threat to cyber security. In particular small devices are known to leak information through physical channels, i.e. power consumption, electromagnetic radiation, and timing behaviour.
Several implementation techniques and countermeasures against this kind of threat are emerging, but still only fully equipped testing labs with skilled people can afford to test new implementations against leakage attacks.
The LADA project (University of Bristol, Cryptography Research Group) aims at bringing the skill of a testing lab to the desk of a developer of standard consumer devices, without the need for domain specific knowledge.
In such context I focused on the information leakages that are due to the execution time and investigated 'bit-slicing' as a possible countermeasure. I then started the design of a tool for LLVM (an LLVM pass) that works on the intermediate representation and that can transform the selected parts of the code into an equivalent 'bit-sliced' version.
Bit-slicing is just one of many features that can be added to LLVM in order to improve the security of the code.
Since my work is still in progress, the aim of my talk is to discuss the design of my tool, explain its limits and how it should be used, and also to collect ideas about other security features that may be added to LLVM.
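
As a concrete illustration of the transformation (plain Python, deliberately simplified, and not the LLVM pass described above), the sketch below transposes a batch of 4-bit values into slice words and adds them using only bitwise operations, so the executed instruction sequence never depends on the data values:

```python
import random

def to_slices(values, width):
    """Transpose a list of `width`-bit integers into `width` slice words:
    slice j packs bit j of every input, one input per bit position."""
    slices = [0] * width
    for pos, v in enumerate(values):
        for j in range(width):
            if (v >> j) & 1:
                slices[j] |= 1 << pos
    return slices

def from_slices(slices, count):
    """Inverse transposition: slice words back to a list of integers."""
    values = [0] * count
    for j, s in enumerate(slices):
        for pos in range(count):
            if (s >> pos) & 1:
                values[pos] |= 1 << j
    return values

def bitsliced_add_mod16(xs, ys):
    """Ripple-carry addition on 4 slice words. Every bitwise operation
    processes all packed inputs at once, with no data-dependent branches
    or table lookups -- the property that removes timing leakage."""
    out, carry = [], 0
    for x, y in zip(xs, ys):
        out.append(x ^ y ^ carry)             # sum bit
        carry = (x & y) | (carry & (x ^ y))   # majority(x, y, carry)
    return out

random.seed(1)
a = [random.randrange(16) for _ in range(64)]
b = [random.randrange(16) for _ in range(64)]
result = from_slices(bitsliced_add_mod16(to_slices(a, 4), to_slices(b, 4)), 64)
print(result[:4])
```

The cost is the transposition at the edges and a word-level reformulation of the operation, which is also where the limits of the approach show up.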

Back

CANCELLED Efficient use of memory by reducing size of AST dumps in cross file analysis by clang static analyzer

Home

Speaker Siddharth Shankar Swain
Room K.3.401
Track LLVM Toolchain
Time 12:05 - 12:45
Event link View original entry

CANCELLED The remote presentation didn't happen.



Clang SA works well with function calls within a translation unit. When execution reaches a function implemented in another TU, the analyzer skips analysis of the called function's definition. To handle cross-file bugs, the cross-translation-unit (CTU) analysis feature was developed (mostly by Ericsson people) [2]. The CTU model consists of two passes. The first pass dumps the AST of every translation unit and creates a map from each function to its corresponding AST. In the second pass, when a TU-external function is reached during analysis, the location of that function's definition is looked up in the function definition index, and the definition is imported from the containing AST binary into the caller's context using the ASTImporter class. During the analysis we need to store the dumped ASTs temporarily. For a large code base this can be a problem, and we have seen it in practice, where the analysis stops due to memory shortage. Reducing the size of the ASTs can help clang SA scale to larger code bases, not only in CTU analysis but in the general case as well. We are using two methods:



1) Using the outlining method [3] on the source code to find ASTs that share common factors or subtrees. We throw away ASTs that won't match any other AST, thereby reducing the number of ASTs dumped in memory.



2) A tree-pruning technique that keeps only those parts of the tree necessary for cross-translation-unit analysis and eliminates the rest. The necessary parts can be found via the dependency path in the exploded graph, where the instructions dependent on the function call/execution are present. Note that only branches none of whose children is a function call should be pruned.



In the first pass of the CTU model, while dumping ASTs in memory, the outlining algorithm can be applied to reduce the memory occupied by the AST dumps. To shrink the working set, we first eliminate ASTs that won't match anything else (that is, if we don't care about matching subtrees anyway), using a hashing scheme that stores pointers to trees: two trees land in the same bucket if they could possibly match, and in different buckets if they definitely cannot (a Bloom-filter-like setup). We can then flatten the trees in each bucket, apply the outlining technique there, and end up with a factorization that way. The algorithm can be summarized as follows:
(1) Construct every AST.
(2) Say two ASTs “could be equal” if they are isomorphic to each other.
(3) Bucket the ASTs based on the “could be equal” scheme.
(4) For each bucket with more than one entry, flatten out the ASTs and run the outlining technique on the trees. At the end of each iteration, throw out the suffix tree built to handle the bucket.
The main point to note is that eliminating ASTs which won't match anything removes a large number of ASTs from memory. For matching ASTs we use a fast subtree isomorphism algorithm that takes O((k^1.5 / log k) * n) time, where k and n are the numbers of nodes in the two ASTs.
For tree pruning we use the exploded-graph concept to find the execution path taken when an externally defined function is called, focusing only on the variables and instructions affected by that call. We find all paths that contain an external function call, keep those paths/branches in the AST, and eliminate all other branches, thereby reducing the size of the AST.
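
The bucketing scheme described above can be sketched as follows. This is a simplified Python illustration, not clang's implementation: it fingerprints tuple-encoded trees by exact shape and node kind as a cheap stand-in for the isomorphism test, then discards singleton buckets before any expensive matching would run:

```python
from collections import defaultdict

def shape_hash(node):
    """Canonical structural fingerprint: two trees get the same key
    exactly when they have the same shape and node kinds, a cheap
    stand-in for the 'could be equal' bucketing test."""
    kind, children = node
    return (kind, tuple(shape_hash(c) for c in children))

def bucket_asts(asts):
    buckets = defaultdict(list)
    for ast in asts:
        buckets[shape_hash(ast)].append(ast)
    # Trees alone in their bucket cannot match anything else, so they are
    # discarded before the expensive flattening/outlining step runs.
    return {h: group for h, group in buckets.items() if len(group) > 1}

# Tiny tuple-encoded trees: (kind, [children]).
add1 = ("add", [("var", []), ("const", [])])
add2 = ("add", [("var", []), ("const", [])])
call = ("call", [("var", [])])
kept = bucket_asts([add1, add2, call])
print(len(kept))  # only the bucket holding the two matching 'add' trees survives
```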



References
[1] http://llvm.org/devmtg/2017-03//assets/slides/crosstranslationunitanalysisinclangstatic_analyzer.pdf
[2] https://www.youtube.com/watch?v=7AWgaqvFsgs
[3] https://www.youtube.com/watch?v=yorld-WSOeU&t=1060s

Back

LLVM, Rust, and Debugging

Home

Speaker Tom Tromey
Room K.3.401
Track LLVM Toolchain
Time 12:50 - 13:30
Event link View original entry

Debugger support for Rust is good but not great. This talk will discuss the difficulties specific to Rust, and will outline a plan to modify LLVM, LLDB, and the Rust compiler to improve the Rust debugging story.

Rust is a systems programming language that originated at Mozilla. Because Rust's type system differs from that of C++, it presents some debuginfo generation challenges to LLVM. This talk provides a brief introduction to Rust with an emphasis on these differences; outlines the Rust-specific difficulties faced by LLVM and debuggers; and proposes a plan for fixing these problems. The talk is intended to be interactive, in that feedback on the proposed plan is actively encouraged. There will also be some discussion of coordination with the DWARF standard and with gdb.

Back

Heterogeneous Computing with D

Home

Speaker Kai Nacke
Room K.3.401
Track LLVM Toolchain
Time 13:35 - 14:15
Event link View original entry

GPU programming is popular with scientists who need massive parallel computing power. DCompute is an extension of LDC, the LLVM-based D compiler, which uses the PTX and SPIR-V targets of LLVM for a smooth integration in a system programming language.

GPU programming is popular with scientists who need massive parallel computing power. However, programming requires learning new programming languages and integrating additional compilers into the build system, making it difficult to use. Since LLVM supports GPU targets, it makes sense to extend an existing compiler. DCompute is an extension of LDC, the LLVM-based D compiler. This allows the user to write OpenCL and CUDA kernels in D. In this talk, I show what steps were necessary to integrate not only the code generation for conventional CPUs but also the code generation for GPUs. I also look at where the use of LLVM could be improved and which challenges still exist in LLVM.

Back

LLVM @RaincodeLabs

Home

Speaker Johan Fabry
Room K.3.401
Track LLVM Toolchain
Time 14:20 - 15:00
Event link View original entry

Raincode Labs is the largest independent compiler company in the world, with a wide scope of products and services and more than 25 years of experience. Some of our products are based on LLVM, and in this presentation I will talk about how we are currently using LLVM as well as present some plans for how we will use it in the future. This talk is given from the point of view of users of LLVM, aiming to show which parts have been of help to us and where we have found things lacking. It aims to provide relevant information to both developers and users of LLVM.

Back

How to cross-compile with LLVM based tools

Home

Speaker Peter Smith
Room K.3.401
Track LLVM Toolchain
Time 15:05 - 15:45
Event link View original entry

In theory the LLVM tools support cross-compilation out of the box, with all tools potentially containing support for all targets. In practice, getting it to work is more complicated: configuration options have to be given and the missing parts of the toolchain need to be provided. In this presentation we will go through the steps needed to use an x86 Linux host to cross-compile an application to run on an AArch64 target, using as many of the LLVM tools as possible. We'll cover:
- Getting hold of the LLVM tools and libraries.
- Providing the missing bits that LLVM doesn't provide.
- The configuration options needed to make it work.
- Running the application using an emulator.



This talk is primarily aimed at users of LLVM-based tools on Linux; no specific knowledge of LLVM internals is required for the majority of the material.

The LLVM based tools and libraries we will be using are:
- Clang (including the integrated assembler).
- LLD.
- Compiler-rt.
- Libc++, libc++abi and libunwind.



Some of these components can be obtained pre-built; others, such as the libraries, we may need to build ourselves.



The main missing part of the toolchain that we have to provide is the C library, such as a libc from either a multiarch Linux distribution or a standalone Linaro GCC release. For the primary example we'll build a Linux application and run it under the QEMU user-mode emulator.
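
As a rough sketch, the steps above boil down to an invocation like the following. The target triple and flags are real Clang/LLD options, but the sysroot location shown is an illustrative assumption; it could come from a multiarch distribution install or from a standalone Linaro GCC release:

```shell
# Cross-compile hello.c on an x86 Linux host for an AArch64 Linux target,
# using Clang's integrated assembler, LLD and compiler-rt.
# NOTE: /usr/aarch64-linux-gnu is a placeholder sysroot path.
clang --target=aarch64-linux-gnu \
      --sysroot=/usr/aarch64-linux-gnu \
      -fuse-ld=lld \
      --rtlib=compiler-rt \
      -o hello hello.c

# Run the result on the host under QEMU user-mode emulation; -L points
# the emulator at the target's dynamic linker and libraries.
qemu-aarch64 -L /usr/aarch64-linux-gnu ./hello
```

For C++ sources one would additionally pass -stdlib=libc++ to pick up libc++ instead of the GNU C++ runtime.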

Back

Easy::jit: just-in-time compilation for C++

Home

Speaker Juan Manuel Martinez Caamaño
Room K.3.401
Track LLVM Toolchain
Time 15:50 - 15:55
Event link View original entry

Finally! The wonders of just-in-time compilation are available in C++:
runtime specialization, code derived from data structures, and many more!
Easy::jit provides a simple interface over LLVM's just-in-time compiler.
No specific compiler knowledge is required!



A single function call serves as the specification for the generated code and
entry point for the just-in-time compiler.
The user can precisely control when the compilation is launched,
do it in a separate thread if desired or cache the generated code,
and manage the lifetime of the generated code.



int baz(int a, int b) { ... }

int foo(int a) {
  // compile a specialized version of baz
  auto baz_2 = easy::jit(baz, _1, 2); // mimics std::bind
  return baz_2(a); // run!
}



The call to easy::jit generates a mix of compiler directives and runtime
library calls that are picked up by a special LLVM pass.
This pass embeds metadata and bitcode versions of the C++ code in the
resulting binary.
A runtime library parses the metadata and bitcode, and generates assembly
code based on the runtime library calls in the code.



This talk introduces the Easy::jit library, the approach of a compiler-assisted library,
its current limitations, and tries to gather feedback from experts and potential users.

Back

Literate Programming meets LLVM Passes

Home

Speaker Serge Guelton (serge-sans-paille)
Room K.3.401
Track LLVM Toolchain
Time 16:00 - 16:05
Event link View original entry

Where is the documentation of the LLVM Pass InstCombine? Is it accurate? Is
there any default example? Is it tested?



Compiler guys love generated code. So in our obfuscating compiler, we have a
declarative format to specify tons of stuff about our passes: a pass's name, its
application level, its documentation, a sample usage, its options (with default
values, help strings etc.), but also its priority in the pass pipeline and a few
other things specific to a code obfuscator. And everything is consistent, from
the sphinx-generated documentation to the inline help and even the tests! Let's
have a look at this, and maybe influence the way it's done in LLVM.

Back

DragonFFI

Home

Speaker Adrien Guinet
Room K.3.401
Track LLVM Toolchain
Time 16:10 - 16:15
Event link View original entry

This talk will present DragonFFI, a Clang/LLVM-based library that allows
calling C functions and using C structures from any language. It will show
how Clang and LLVM are used to make this happen, and the pros/cons compared
to similar libraries (like (c)ffi).

In 2014, Jordan Rose and John McCall from Apple presented a talk about using
Clang to call C functions from foreign languages. They showed the issues they
ran into, especially in dealing with the various ABIs.



DragonFFI provides a way to easily call C functions and manipulate C structures
from any language. Its purpose is to parse C library headers without any
modification and transparently use them in a foreign language, like Python or
Ruby. In order to deal with the ABI issues previously demonstrated, it uses Clang
to generate scalar-only wrappers of C functions. It also uses the generated debug
metadata to provide introspection on structures.



Here is an example of the Python API:



$ python
>>> import pydffi
>>> lib = pydffi.FFI()
>>> lib.cdef("#include <stdio.h>", ["puts"])
>>> lib.puts("Hello world!")
Hello world!
>>> lib.compile('int foo(int a) { puts("hi!"); return a+1; }')
>>> lib.foo(4)
hi!
5
>>> lib.cdef("struct A { int a; short b; }")
>>> C = lib.A(a=4, b=5)
>>> print(C.a, C.b)
4 5
>>> lib.compile('void dump(struct A* obj) { printf("From C: %d %d\\n", obj->a, obj->b); }')
>>> lib.dump(C)
From C: 4 5


A high-level API provides easy loading of a C library, using the
previous API under the hood:



>>> lib = pydffi.LoadLibrary("/lib/libc.so.6",
...                          headers=["stdio.h", "stdlib.h"],
...                          defines=[], include_paths=[])
>>> lib.puts("Hello FFI!")
Hello FFI!


This talk will present the tool, how Clang and LLVM are used to provide these
functionalities, and the pros and cons compared to what other similar libraries
like (c)ffi [0] [1] do. It also aims at gathering feedback and user needs that
aren't yet covered.



Code is available on github.



[0] https://sourceware.org/libffi/
[1] https://cffi.readthedocs.io/en/latest/

Back

A unique processor architecture meeting LLVM IR and the IoT

Home

Speaker Dávid Juhász
Room K.3.401
Track LLVM Toolchain
Time 16:20 - 17:00
Event link View original entry

A typical LLVM backend consists of complex passes lowering LLVM IR into valid and efficient machine code (MC). Complexity of the translation process originates from the inherent gap between LLVM IR and any mainstream instruction set architecture (ISA). Processors and their ISAs have been designed historically to be programmed by human assembly developers. Recent ISAs gained more complex features based on hardware considerations. To date, ISAs are designed with no respect for the IR of compilers. Hence, translation between IR and MC keeps being a complicated task.
What about lifting the target machine to meet LLVM Assembly instead of lowering LLVM IR to meet an unrelated ISA? A lifted target machine can simplify backend development. One may wonder, however, what else can be gained from such an endeavor? The answers to these questions might be relevant for those who are interested in hardware architectures and likewise for those working with compiler and application development.
This talk reveals how we realize a processor whose ISA is tailor-made to suit LLVM IR; what we expect from a rich ISA matching LLVM Assembly; and how we plan to utilize this technology to implement a multi-purpose architecture for IoT devices.

Imsys is a Swedish semiconductor SME with its proprietary processor architecture. Imsys processors provide a flexible, low-cost, and energy-efficient platform thanks to a microprogrammable microarchitecture implemented in the hardware core, which enables dynamic soft-reconfiguration as well.
Imsys processor cores are small and energy-efficient by design as the architecture is mostly implemented in dense, low-power, read-only memory rather than in logic circuits. Operations are defined at a relatively high level of abstraction and coded in a way which results in an efficient utilization of the different parts of the core. Hence, silicon area and power consumption are minimized.
The basic instruction set architecture (ISA) implemented in microcode is extensible with domain-specific instructions. Microcoded operations for signal processing and cryptographic features are already available for integration as special instructions or autonomous background processes in any ISA. That makes it possible to compile high-level software to an efficiently executed small footprint binary application.
A new generation of processors is currently being developed with an ISA designed specifically for LLVM. ISAL, the Imsys ISA for LLVM, provides a set of instructions matching LLVM Assembly. Besides easing the development of a corresponding LLVM backend, the tailor-made rich ISA is expected to (1) provide outstanding code density and (2) propagate software complexity into highly efficient microcode implementation of ISAL instructions. Therefore, the LLVM toolchain can easily turn high-level software into high efficiency binary code with respect to memory footprint, execution time, and energy consumption. That can be done without worrying about target-specific details since the step between LLVM IR and ISAL machine code is minimal.
The next generation Imsys processor featuring ISAL and a software ecosystem around it are planned to be released in 2018.
This lightning talk provides a brief insight into how, with the help of microcoding, it is possible to define ISAL so that it meets LLVM IR. Our driving incentives in selecting this course of action are also discussed, especially how an ISA designed for LLVM helps exploit the inherent characteristics of our processor technology in order to make the platform appealing in the IoT market.

Back

Welcome & Chatting

Home

Speaker Victoria Bondarchuk
Room K.4.201
Track Open Source Design
Time 09:30 - 09:50
Event link View original entry

Please feel free to come earlier, before talks start, to meet us!

Back

CANCELLED Usability made simple

Home

Speaker Renata Gegaj
Room K.4.201
Track Open Source Design
Time 10:00 - 10:20
Event link View original entry

Please note that this talk has been cancelled as Renata is no longer able to attend FOSDEM.



Test the usability of any Open Source software in just a few steps with minimal resources, to get useful feedback for designers and usable interfaces for users.
We’ll go through the complete process of testing step by step and answering all the how-tos, using examples from the tests conducted for GNOME applications.



We’ll discuss:






Most importantly, learn how to effectively share the results and have an impact on the development decision making process.

Back

A crash course on remote, moderated usability testing

Home

Speaker Sarah O'Donnell
Room K.4.201
Track Open Source Design
Time 10:30 - 10:50
Event link View original entry

I’m going to show you that you don’t need to be a researcher or have a huge budget to conduct remote, moderated usability testing.



In this talk I’ll cover:




GitLab is an integrated product that unifies issues, code review, CI and CD into a single user interface. GitLab is a remote-only organization and just like our team, our users are spread across the globe. Conducting remote, moderated usability testing allows us to quickly connect with GitLab users anywhere in the world.



Usability testing is a technique used to evaluate a product by testing it with representative users. Moderated usability testing provides us with a lot of in-depth qualitative research about our users’ needs. It can help us to uncover usability problems that we weren’t aware of and to generate solutions to solve these problems.



Conducting remote, moderated usability testing needn’t be expensive, nor do you have to be a researcher. I’m going to show how you can run your own remote usability studies on your open-source projects with a limited budget.

Back

So we have free web fonts; now what?

Home

Speaker Nathan Willis
Room K.4.201
Track Open Source Design
Time 11:00 - 11:20
Event link View original entry

The number of free-software fonts has exploded, thanks to CSS webfont services like Google Fonts and Open Font Library. But open fonts have yet to make gains in document-creation systems beyond web pages: print-on-demand publishing, print-on-demand merchandise, eReaders and EPUB generation, games, and bundled with FOSS applications. This talk will look at the obstacles, bottlenecks, and disconnects behind this situation and explain what needs to happen next in order to move forward.

The number of free-software fonts has exploded since 2011, thanks primarily to CSS webfont services like Google Fonts and Open Font Library. But open fonts have yet to make gains in document-creation systems beyond web pages. This is attested to by the lack of open fonts used in other service types and communities of practice, including print-on-demand publishing, print-on-demand merchandise, eReaders and EPUB generation, games, and even in the default fonts bundled into binary packages of free-software applications like LibreOffice.



This talk will look at the obstacles, bottlenecks, and disconnects that have prevented open fonts from reaching the hands of users beyond the CSS @font-face directive. These issues include missing or proprietary-format source files, licensing cruft, the user experience of discoverability and installation, build tools for font binaries, and character coverage.



We will also discuss solutions, including what distributions and upstream application projects can do to mitigate these issues as well as what the broader free-software community can do to advocate for the usage of free-software fonts in documents and display typography outside of the browser window.

Back

Self-host your visual assets with Free Software

Home

Speaker Elio Qoshi
Room K.4.201
Track Open Source Design
Time 11:30 - 11:50
Event link View original entry

We are going to introduce Identihub, a self-hosted solution for visual asset hosting licensed under the AGPL v3, and show how to easily make your SVG files convertible for non-designers.

Free and open source software often fails to gain wider traction because it focuses on technical aspects while documentation, design and marketing quickly fall behind. Let's have a look at basic steps we can take as free software maintainers to offer potential contributors access to visual assets the same way we offer them access to our source code. We are going to make the process easy by introducing Identihub, a self-hosted solution for visual asset hosting licensed under the AGPL v3. We will go through easily making your SVG files convertible for non-designers, without the need to send files back and forth via email.

Back

Our Open Source Design collective

Home

Speaker Jan-Christoph Borchardt
Room K.4.201
Track Open Source Design
Time 12:00 - 12:20
Event link View original entry

For everyone who doesn’t know what exactly we do, this is a short intro to our collective: We work to raise the profile of good design in open source software, and connect developers & designers to make it happen.



We run an Open Source Design community forum, organize design tracks at well-known events like FOSDEM (hello ;), FOSSASIA and OpenTechSummit, have a job board to get designers involved, provide open design resources to developers & designers, and more.



We will also take our GROUP PHOTO during this session! :)

We will also share a few news items from the past year.




Back

Improving GitLab's Navigation and Design System

Home

Speaker Dimitrie Hoekstra
RoomK.4.201
TrackOpen Source Design
Time12:30 - 12:50
Event linkView original entry

A brief introduction to what GitLab is and what remote working means, how we improved and shipped our revised navigation, and how we are creating a consistent design language and system.

In release 9.4 of GitLab we took a big step toward improving our navigation. We conducted several rounds of exploration and research, and took an initial opt-in approach when we introduced the new navigation. In this talk, we'll go in-depth on why this was necessary and how we came to this conclusion. We'll go over what lessons were learned and how we continue to improve.



After this, we'll go into why it is so important to have a solid design language and system, our approach to taking on such a daunting task, and how our future goals will benefit from it.



All of the topics above are told from the perspective of a fully remote design team working on open source software with a wide community.

Back

Cultural interpretations of Design and Openness

Home

Speaker Carol Chen
RoomK.4.201
TrackOpen Source Design
Time13:00 - 13:20
Event linkView original entry

I'm not a designer, but I've lived and worked on 3 different continents with both proprietary and open source software, and I find that my appreciation and idea of what "good design" is has changed with my experience and exposure. As I moved from software development to community outreach and marketing, I learned more about how design affects every aspect of the product and its promotion. To top it off, I've encountered many different understandings of what openness is, especially in relation to design. In this presentation I'd like to share some observations and lessons learned, in the hope that they help designers, especially open source designers, navigate and negotiate the diversity in cultures and expectations.

Back

Ecosystems of Professional Libre Graphics Use

Home

Speaker ginger coons
RoomK.4.201
TrackOpen Source Design
Time13:30 - 13:50
Event linkView original entry

Libre Graphics magazine spent five years showing off excellent work done with Free/Libre and Open Source graphics software. We showed off the work of individuals and small studios doing exciting work with F/LOSS tools. While exciting things are happening in the world of F/LOSS design, the perception of F/LOSS graphics tools as somehow less-than or other-than the “industry standard” for graphic designers persists. This presentation looks at problems of F/LOSS adoption, especially for graphic design. It asks the question “What kinds of ecosystems do we need to have to successfully do Libre Graphics (including F/LOSS, Free Cultural licenses, and Open Standards) in professional contexts?”

Libre Graphics magazine spent five years showing off excellent work done with Free/Libre and Open Source graphics software. Our aim when we started was to challenge the idea that F/LOSS tools weren’t up to the job of doing professional graphic design work, especially in print. We published eight issues of a beautifully-printed, tactile, keepable paper magazine. We showed off the work of individuals and small studios doing exciting work with F/LOSS tools.



In the two years since we stopped making Libre Graphics magazine, a lot of exciting things have happened in the world of F/LOSS design. But one of the problems that keeps hanging on is the perception of F/LOSS graphics tools as somehow something less-than or other-than the “industry standard” set of tools that graphic designers are meant to use. This presentation looks at problems of F/LOSS adoption, especially for graphic design. It asks the question “What kinds of ecosystems do we need to have to successfully do Libre Graphics (including F/LOSS, Free Cultural licenses, and Open Standards) in professional contexts?”

Back

Icon Themes

Home

Speaker Ecaterina Moraru
RoomK.4.201
TrackOpen Source Design
Time14:00 - 14:20
Event linkView original entry

Software platforms need to be highly extensible and customizable, since developers need to build on top of them and provide the best experience for users. Some users put more focus on the styling and visual aspect of their customization, others need it to be highly accessible or responsive, while others just like to have diversity in their choices.



Being able to provide multiple icon sets inside a platform is still a difficult task because of the variety of ways icons can be provided: as images, as font sets, as SVG, etc. I will present a solution for creating and using icon themes, also mentioning the limitations and the difficulties we had in implementing such a solution on our platform. Several icon libraries will be compared, showcasing the reasons, decisions and the compatibility and mapping issues we faced during the process.

Back

Interface Animation from the Future

Home

Speaker Tobias Bernard
RoomK.4.201
TrackOpen Source Design
Time14:30 - 14:50
Event linkView original entry

Animation can make interfaces better because it allows interface changes to be explained visually, making them easier to grasp. However, when animating interfaces it is important to consider the spatial model created by the animations. Otherwise, they can lead to contradictions that make an interface more, rather than less, confusing. This talk introduces semantic animation, a way of designing interfaces that avoids contradictions by thinking of interfaces as collections of components rather than series of screens.

Animation is the future of interface design. It enables us to make interfaces more understandable by offloading processes from the user’s brain to the screen. However, in many cases animations are simply added as transitions between independently designed screens. This can result in animations contradicting each other spatially. I co-wrote an article about why this is a problem, and outlined a solution: Designing semantic components which change over time, and then using these to compose interfaces.



The industry seems to largely agree that this is the way forward, but there are very few interfaces implementing these ideas in a holistic way. I believe the main reason for this is that the current generation of toolkits and layout technologies is built for static layouts with strict hierarchies. This makes it prohibitively difficult to build interfaces where components move fluidly between different states.



I will talk about some of the challenges designing and implementing semantic animation both in prototypes and real-world applications, and give some general guidance on how you can make your applications more semantic.

Back

The case against "It just works" or how to avoid #idiocracy

Home

Speaker Michael Demetriou
RoomK.4.201
TrackOpen Source Design
Time15:00 - 15:20
Event linkView original entry

Design is more than usability testing and click analytics. Design has always carried the dominant ideas of its era and championed them, forming cultures along the way. Today's design is immensely influenced by Steve Jobs and his "Just Works" and "Automagically" mantras. While the democratization of technology this brought about is significant, it was achieved in a way that, for the first time, completely hides the way things work from users. In fact, it goes to great lengths to lock users out of their gadgets.



Open source is about, among other things, the freedom to study. In this spirit, open source design isn't only about the sources of design documents; it's about facilitating the study of the inner workings. Let's create designs that are easy to use yet teach users how the system we are designing works. Informed users will better understand why a result is not what they expected, will be able to solve more of the problems that arise, and will end up happier both with the system and with themselves.

In this presentation, we will study how design was once overtly political, along with a few notable historical design movements; analyze the current state of design, how it came to be, and its effects on human intellect; and then propose a new design direction, inspired in part by video games, where the goal isn't to blindly guide users to their destination but to teach them how to achieve what they need, using our software as a tool, so that they feel rewarded by the results of their own efforts rather than by those of an "automagical" piece of software.

Back

The Open Decision Framework

Home

Speaker Damien Clochard
RoomK.4.201
TrackOpen Source Design
Time15:30 - 15:50
Event linkView original entry

The Open Decision Framework is a process for making transparent and inclusive decisions in organizations that embrace open source principles. It was introduced by Red Hat in 2016 (see link below). I got involved in the project by translating the framework into French, and then started using it in my own company to resolve complex situations.



https://github.com/red-hat-people-team/open-decision-framework

An "Open decision" should be transparent, inclusive, and customer-centric. This framework is a process to reach actionable agreements through participatory practices such as : clearly sharing problems, collaborating purposefully with multiple stakeholders to secure diverse opinions, collecting comprehensive feedback, managing relationships and expectations across competing needs and priorities.



More generally, open decisions facilitate well-functioning meritocracies. Open source communities are meritocratic to the extent that they pragmatically value concrete contributions over formal titles and encourage ideas from all corners of an organization.



The Open Decision Framework is built around four steps:



1- Ideation
2- Planning and research
3- Design, development, and testing
4- Launch



The framework itself is an open source project, hosted on GitHub and released under CC BY-SA. You can fork it and adapt it to your own organization. Contributions are welcome!

Back

Teleport: Local filesharing app

Home

Speaker Julian Sparber
RoomK.4.201
TrackOpen Source Design
Time16:00 - 16:20
Event linkView original entry

Teleport is an app for quickly sending files on the local network. It is designed with user experience in mind and to integrate nicely with GNOME. I'll talk about my journey developing Teleport, my first GTK app, in collaboration with Tobias Bernard.

Teleport is an app for quickly sending files on the local network.



Sadly, sharing files between two computers in the same room is still an unsolved problem in 2017. Of course there are USB keys, shared network folders, cloud storage services, and messaging apps, but they all have severe drawbacks in terms of speed, privacy, or user experience. The Apple ecosystem has AirDrop to cover this use case, but there is nothing comparable on GNU/Linux.



That's what Teleport is trying to fix: It offers a dead-simple interface for what should be a dead-simple task: Open the app on both machines, choose a file, the receiver gets a notification, the file is sent.



Unlike many FOSS apps, Teleport is design-driven. A designer was involved from the very beginning, and the entire app is built around providing a great end-user experience.



Teleport is a fully native GTK+3 app written in C, and integrates seamlessly with the GNOME desktop. Thanks to Flatpak, you can already use it on most GNU/Linux distributions [1].



I'll talk about my journey building this app (my first GTK app) from scratch, my collaboration with Tobias Bernard, the designer on the project, and my experience with the documentation and developer tools for the GNOME platform.



[1] http://frac-tion.com/teleport-flatpak

Back

Pitch your project

Home

Speaker Belen Barros Pena
RoomK.4.201
TrackOpen Source Design
Time16:30 - 16:50
Event linkView original entry

This session has become a bit of a tradition in the Open Source Design devroom. Every year we close by inviting FOSDEM attendees to introduce their projects to the designers in the room, and tell them the type of design contributions they need.



I thought we might want to do this again.

Back

Programming UEFI for dummies

Home

Speaker Olivier Coursière
RoomK.4.401
TrackHardware Enablement
Time09:30 - 10:00
Event linkView original entry

With the upcoming end of legacy mode in UEFI firmware on PCs, every alternative or hobbyist operating system, bare-metal programmer, and wannabe OS developer will have to deal with UEFI on modern hardware. After presenting the binary format of UEFI applications, I will focus on the use of UEFI APIs through the EFI system table and UEFI protocols, so you can get started.

Don’t be scared off by “FreePascal” in the subtitle: the core of this presentation is language-agnostic.
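As a taste of what "using UEFI APIs through the EFI system table" looks like, here is a minimal sketch of a UEFI application in C against the gnu-efi headers. The build specifics (PE/COFF linking, calling-convention flags) are assumptions left out here; this illustrates the shape of the API rather than a complete, flashable build.

```c
#include <efi.h>
#include <efilib.h>

/* Entry point of a UEFI application: the firmware hands us our image
 * handle and a pointer to the EFI system table. */
EFI_STATUS EFIAPI efi_main(EFI_HANDLE image, EFI_SYSTEM_TABLE *st)
{
    /* The system table is the gateway to everything: console
     * protocols, boot services, and runtime services. */
    st->ConOut->OutputString(st->ConOut, L"Hello from UEFI land!\r\n");

    /* Use a boot service to wait for a keypress, so the message
     * stays on screen until the user reacts. */
    UINTN index;
    st->BootServices->WaitForEvent(1, &st->ConIn->WaitForKey, &index);

    return EFI_SUCCESS;
}
```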

Back

Rustarm AKA A project looking at Rust for Embedded Systems

Home

Speaker Benedict Gaster (cuberoo_)
RoomK.4.401
TrackHardware Enablement
Time10:00 - 10:30
Event linkView original entry

Rustyarm is a project in the Physical Computing group at the University of the West of England looking at applications of Rust on embedded microcontrollers. UWE Sense is a new hardware and software platform for IoT, built with ARM microcontrollers, Bluetooth LE, and LoRaWAN, which runs a software stack written entirely in Rust. While UWE Sense is a close-to-the-metal implementation, UWE Audio, a new hardware platform for studying high-performance audio using ARM microcontrollers, uses Rust to implement a monadic reactive graph, supporting both an offline compiler and an embedded DSL. UWE Audio uses safe Rust, for example describing domain clocks as generic associated types, providing both compile-time guarantees that multiple streams will not be incorrectly sequenced at different sample rates, and the ability to dynamically compile for different parts of the system.



In this talk I will provide a high-level overview of the Rustyarm project, including how using Rust has made the project interesting and enabled guarantees with respect to, for example, the audio scheduler. However, Rust has some shortcomings in the embedded domain, and I will detail some of these and what we and the wider community are doing to address them. As an example of Rust's application in the embedded domain, I present early work on UWE Audio, a hardware and software platform for building digital music instruments, which as already noted is programmed solely in Rust.




During the talk I will give a demonstration of UWE Audio and our embedded audio DSL, written in Rust. I also plan to bring a number of the UWE Sense modules for people to look at; there is an app that attendees can download, which talks to the sensors and logs data to an open cloud infrastructure. The app is not developed in Rust (NativeScript is used), but the software for the sensors is. I don't plan to talk in detail about this part of our work, but I can provide links to our website and our partners, which will be launched in December 2017, and links to the software repos.



Full disclosure: UWE Audio is a reasonably new project, and while we have a working system it would be misleading to say it is complete. For example, as our hardware platform has two ARM microprocessors, one for the control domain and one for the audio/CV domain, our current compiler produces two Rust programs that are compiled separately and flashed to the devices. Our long-term goal is to have the controller deploy DSP graphs to the audio processor dynamically via a Rust-based API, similar in concept to OpenCL, but we are still quite a long way from reaching that goal. That said, the project has been driven from the start by the goal of investigating Rust as an alternative to C for embedded programming, and its particular application in the audio domain, and for this I believe it would be an interesting talk at FOSDEM.

Back

Mainline Linux on Motorola Droid 4

Home

Speaker Sebastian Reichel
RoomK.4.401
TrackHardware Enablement
Time10:30 - 11:00
Event linkView original entry

Sebastian will present the current state of Linux kernel support for the Motorola Droid 4 (a smartphone from 2012), what has been done to reach it, and the work still required to use it properly. The target audience is non-kernel developers who might be interested in starting kernel work.



The work has been documented here: https://www.elektranox.org/droid4/

Back

... like real computers!

Home

Speaker Andre Przywara
RoomK.4.401
TrackHardware Enablement
Time11:00 - 11:30
Event linkView original entry

Installing an operating system on single-board computers (SBCs or "Fruit-Pis") is very board-specific and requires a lot of hand-holding. Standard distributions, if they support these boards at all, explicitly support only a small number of them, which leads to a lot of board-specific images and distributions. This talk will show how this situation can be improved, to the point where off-the-shelf Linux (or BSD) distributions can be installed on those boards without the distros knowing about each and every one of them. Key ingredients are standardized firmware interfaces like UEFI, stable device trees, and on-board memory like SPI flash.
This should make using ARM based SBCs as easy as using (x86) PCs today: like "real computers".
On top of this, ways to simplify and speed up mainline Linux kernel support are explored. Enabling kernel support for new SoCs usually takes a while, especially when the effort is driven by the community. This delays distribution support, to the point where a certain SoC or board might be slightly dated by the time it's finally supported. Using more device tree features and less hardcoded kernel data would reduce the code required to support new SoCs, ideally reaching a point where new SoCs could at least be booted with existing (distribution!) kernels, just by providing the proper device tree blob.
This talk describes the idea and gives an example by looking at what can be done on Allwinner SoCs.

Back

Booting it successfully for the first time with mainline

Home

Speaker Enric Balletbo Serra
RoomK.4.401
TrackHardware Enablement
Time11:30 - 12:00
Event linkView original entry

While things have gotten a lot better, new hardware bring-up sometimes still feels like pulling teeth. With the right methodology, tools and techniques, a significant amount of time, energy (and sanity) can be saved while enabling a new board to run Linux. In this talk, we'll discuss the phased process involved in new board bring-up and the challenges it can pose, from reviewing initial schematic design to the successful upstreaming of any necessary bootloader and kernel patches. We'll also provide some examples of the process based on a board that was recently made compatible with mainline.

This presentation will help embedded hardware and software developers better understand the problems they can face during the bring-up of a board and will hopefully encourage them to work together when designing a new embedded board. Cooperation is important during the demanding work of board bring-up in order to avoid respins of the board as much as possible, as well as to save time and money.



The audience is anyone interested in the ‘fuzzy’ line between hardware and software, with the focus being hardware and software developers working on kernel drivers and hardware bring-up. Attendees can expect a description on how to bring-up a new board, including tips to take into account in the schematic design phase and much more.

Back

AMENDMENT LinuxBoot: Linux as Firmware

Home

Speaker Philipp Deppenwiese
RoomK.4.401
TrackHardware Enablement
Time12:00 - 12:30
Event linkView original entry

Let Linux do it: Linux as Firmware



Tired of reinventing the wheel by implementing drivers for firmware again and again? Not with LinuxBoot!



What?
LinuxBoot is a firmware for modern servers that replaces specific firmware functionality like the UEFI DXE phase with a Linux kernel and runtime.



Why?
LinuxBoot improves boot reliability by replacing lightly-tested firmware drivers with hardened Linux drivers.
LinuxBoot improves boot time by removing unnecessary code, resulting in a 20x faster boot time (typical value).
LinuxBoot allows customization of the initrd runtime to support site-specific needs (both device drivers as well as custom executables).
LinuxBoot and its precursors are a proven approach for almost 20 years in military, consumer electronics, and supercomputing systems – wherever reliability and performance are paramount.



This talk replaces the talk "SITL bringup with the IIO framework" by Bandan Das

Back

What's new with FPGA manager

Home

Speaker Moritz Fischer
RoomK.4.401
TrackHardware Enablement
Time12:30 - 13:00
Event linkView original entry

After a brief overview of the Linux kernel FPGA manager framework and its (short) history, we'll look at what's new with the framework, what's still missing, and an outlook on how to fix things, leaving some time for discussion.

Back

Linux as an SPI Slave

Home

Speaker Geert Uytterhoeven
RoomK.4.401
TrackHardware Enablement
Time13:00 - 13:40
Event linkView original entry

The SPI bus connects a master with one or more slave devices. So far, Linux always assumed the master role. In v4.13, Linux finally gained slave support.
In this presentation, Geert will talk about adding SPI slave mode support to the existing SPI subsystem, and using a Linux system as an SPI slave. He will show what makes SPI special, and cover possible use cases and limitations of Linux-based SPI slave mode.
Finally he will demonstrate how he modified Linux to add support for SPI slave mode.

The SPI (Serial Peripheral Interface) bus is ubiquitous in many (embedded) systems. Devices connected to an SPI bus have master and slave roles. Traditionally, the Linux kernel always assumed the SPI master role.
In v3.19, Linux received I2C slave support. This sparked the question of whether Linux could ever become an SPI slave, controlled by an external SPI master, too. In v4.13, that question was finally answered with a "yes", though not wholeheartedly.
In this presentation, Geert will talk about adding SPI slave mode support to the existing SPI subsystem, and using a Linux system as an SPI slave. This can increase the roles and functionalities Linux can perform in embedded systems.
Attendees can expect an overview of the SPI bus and the differences between SPI master and slave roles, and a comparison with other simple buses. They will understand the challenges of using Linux as an SPI slave, and can consider the implications when designing SPI protocols for use with Linux systems acting as an SPI slave. They will learn how to write SPI slave handlers for Linux, implementing the slave-side of an SPI protocol.
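To make the setup concrete, a board's device tree can switch a controller into slave mode and attach one of the example protocol handlers that ship with the mainline kernel. This fragment is a sketch based on the mainline device-tree bindings; the `&spi0` label stands in for whatever controller a real board uses.

```dts
&spi0 {
        spi-slave;      /* controller operates in SPI slave mode */

        slave {
                /* example handler from the kernel tree: responds to
                 * the external SPI master with the time since boot */
                compatible = "linux,spi-slave-time";
        };
};
```

In slave mode there is a single protocol handler, so the child node carries no `reg` property; the `compatible` string selects which slave handler driver binds to it.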

Back

Gnuk Token and GnuPG scdaemon

Home

Speaker Yutaka Niibe
RoomK.4.401
TrackHardware Enablement
Time13:40 - 14:00
Event linkView original entry

Gnuk is an implementation of USB cryptographic token for GnuPG.
It conforms to the OpenPGP card specification, implements the USB CCID protocol, and supports modern ECC, i.e., Ed25519 and X25519.



I started the project in 2010, and to enable better support, I joined GnuPG development. I gave a talk at FOSDEM 2012. Things didn't go as well as I expected in 2012.



I needed to create a free hardware design of my own for the reference hardware: that's FST-01.
I needed to design and implement a true random number generator of my own: that's NeuG.



For better control of hardware resources, I developed a thread library named Chopstx.



In 2016, at OpenPGP.conf, I realized that people generally hesitate to adopt new hardware. Thus, in 2017, I modified Chopstx so that it can run on GNU/Linux in emulation. Now, a user can run Gnuk on a normal GNU/Linux system as an emulation, via USB/IP, without a real physical hardware device.



In short, my talk is: everything is free software (firmware on the device, driver on the host), and the hardware has a free design, but that's not enough. Emulation is useful, hopefully.

Back

Improving Linux Laptop Battery Life

Home

Speaker Hans de Goede
RoomK.4.401
TrackHardware Enablement
Time14:00 - 14:30
Event linkView original entry

Modern laptops can use a lot less energy than laptops from a decade ago. But in order to actually achieve this low energy usage, the operating system needs to make efficient use of the hardware. Linux supports a lot of hardware power-saving features, but many of them are disabled by default because they cause problems on certain devices or in certain, often corner-case, circumstances. This talk will describe and look into recent efforts to enable more power-saving features by default in such a way that this will not cause regressions. The end goal of these efforts is to shave at least 2 watts off the typical idle power consumption of 6-9 watts for recent (Haswell or newer) laptops, on an out-of-the-box Fedora Workstation install, without the user needing to do any manual tweaks.

Back

Adding support for a mouse in libratbag

Home

Speaker Thomas H. P. Andersen
RoomK.4.401
TrackHardware Enablement
Time14:30 - 15:00
Event linkView original entry

Libratbag is a daemon to configure on-board settings on gaming mice. It has drivers to support many different mice vendors and models. Clients, like the GUI application Piper, interact with the daemon via DBUS. The on-board settings include button-mappings, resolutions, and LED colors and patterns.



The vendors all use different protocols to configure the mice, and every mouse model must have a separate .SVG and configuration file. This talk will cover the steps to add support for a mouse. From reverse engineering the protocol, writing the driver for libratbag, and creating a .SVG to be displayed in Piper.



If you own an unsupported device then you can bring it to the conference, and we can look at adding support for it after the talk.


Back

Thunderbolt 3 and Linux

Home

Speaker Christian Kellner (gicmo)
RoomK.4.401
TrackHardware Enablement
Time15:00 - 15:20
Event linkView original entry

Thunderbolt 3 is a relatively new technology for connecting peripherals to a computer. Devices connected via Thunderbolt can be DMA masters and thus read system memory without the involvement of the operating system (or even the CPU). Version 3 of the interface provides security levels in order to mitigate the security risk that connected devices pose to the system. As a result, connected devices need to be authorized by userspace via a new kernel interface. The new kernel interface additionally supports updating the firmware of devices and of the host controller.
After an overview of the Thunderbolt technology, the specifics of the userspace enablement on GNU/Linux will be presented.

Back

Open Source BIOS at Scale

Home

Speaker Julien Viard de Galbert
RoomK.4.401
TrackHardware Enablement
Time15:20 - 15:40
Event linkView original entry

At Online/Scaleway, we built a BIOS based on coreboot, FSP and TianoCore. We are using it at scale in our datacenters. This talk will go through Why and How we did it. We will detail the Pros and Cons of the approach. Spoiler: we’re happy with the result!

Back

Automating Secure Boot testing

Home

Speaker Erico Nunes
RoomK.4.401
TrackHardware Enablement
Time15:40 - 16:00
Event linkView original entry

A short talk about the status and challenges of automating Secure Boot testing at Red Hat, done as part of kernel UEFI testing.
The talk aims to cover the tools and platforms used for testing, as well as the coverage currently provided.

Back

Using KVM to sandbox firmwares from the Linux Kernel

Home

Speaker Florent Revest
RoomK.4.401
TrackHardware Enablement
Time16:00 - 16:30
Event linkView original entry

This talk will present a proof of concept (and RFC) done on arm64 platforms to use KVM to isolate EFI Runtime Services from the Linux kernel. Security improvements and limitations of this solution will be detailed. A strong focus will be kept on the flexibility of this approach and how it could be used on other architectures or to isolate other types of firmware.

As part of an internship at ARM during the summer of 2017, I developed hypervisor-based security solutions for the Linux kernel. One of the experiments I did there resulted in an RFC available on the Linux Kernel Mailing List. In an effort to share this experiment with the broader community, I would like to detail the observed problems that led to my patchset and the inner workings of the proposed solution.



While KVM is generally used by userspace tools (such as QEMU) to create general-purpose virtual machines, the proposed patchset adds an internal API to KVM so that it can be used by the kernel itself to spawn lightweight sandboxes. This internal KVM API can then be used to sandbox EFI Runtime Services on arm64 platforms and circumvent some of the security and stability problems this firmware could otherwise cause.

Back

Crowdsupply EOMA68 Progress Report

Home

Speaker Luke Kenneth Casson Leighton
RoomK.4.401
TrackHardware Enablement
Time16:30 - 16:50
Event linkView original entry

2,500 people kindly backed the EOMA68 Libre Laptop and EOMA68-A20 Computer Card Crowdsupply campaign last year. This talk will briefly outline the progress and some of the strange-seeming decisions that have had to be made. It's also worth noting that at the beginning of the year, Intel! Announced! The World's! First! Ever! Modular! Computer Card! - in reality they're actually about 6th down a long list. We're not worried about them copying the concept, and will explain why during the talk (one hint: Intel backdoor spyware co-processor...)



The reasoning behind why the project also includes the creation of an entirely new 3D printer (the Riki200) and a new 3D printer controller PCB will also be explained (big hint: cost. Incredibly, an entire 3D printer's components can be sourced in Shenzhen for LESS money than a western-designed and manufactured DuetWifi 3D controller board!)

Back

Welcome to the Perl devroom

Home

Speaker Claudio Ramirez
Wendy G.A. van Dijk
Room K.4.601
Track Perl Programming Languages
Time 09:35 - 09:40
Event link: View original entry

A short introduction

Back

How Carton, Docker, and CircleCI Saved my Sanity

Home

Speaker Dylan Hardison
Room K.4.601
Track Perl Programming Languages
Time 09:40 - 10:20
Event link: View original entry

In this talk I provide a bit of background on how bugzilla.mozilla.org vendored all its dependencies using a combination of Carton + CircleCI + Docker, and a more in-depth look at three particular tasks that would have been impossible without this work. This talk should prepare you to go ahead and make even your really old legacy application use Carton.
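As a rough sketch of the Carton vendoring workflow the abstract refers to (the module names in the cpanfile are illustrative, not Bugzilla's actual dependencies):

```shell
# Declare dependencies in a cpanfile, then let Carton resolve and pin them.
cat > cpanfile <<'EOF'
requires 'Mojolicious', '>= 7.0';
requires 'DBI';
EOF

# The following need Carton and network access, so they are not executed here:
# carton install               # resolves deps, writes cpanfile.snapshot and local/
# carton exec -- perl app.pl   # run the app against the vendored dependencies
```

Committing the cpanfile.snapshot makes CI builds (e.g. in CircleCI/Docker) reproducible.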

Back

4 Perl web services I wrote and that you may like

Home

Speaker Luc Didry (Framasky)
Room K.4.601
Track Perl Programming Languages
Time 10:20 - 11:00
Event link: View original entry

I'll present 4 web services I wrote with Mojolicious, a Perl web framework. Simple to install and simple to use, they promote users' privacy. One is a URL shortener, one is for image sharing, the third is for file sharing with end-to-end encryption and the last provides URL visit statistics.
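For readers new to the framework, a minimal Mojolicious::Lite application looks like this (the route and response text are invented for the demo):

```shell
# Write a one-route Mojolicious::Lite app to app.pl.
cat > app.pl <<'EOF'
use Mojolicious::Lite;

# A single route that renders plain text: the skeleton a small
# web service builds on.
get '/' => sub {
    my $c = shift;
    $c->render(text => 'Hello, FOSDEM!');
};

app->start;
EOF

# Run a development server on http://127.0.0.1:3000
# (not executed here; requires the Mojolicious CPAN module):
# morbo app.pl
```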

Back

Perl in the Physics Lab

Home

Speaker Andreas K. Huettel
Room K.4.601
Track Perl Programming Languages
Time 11:00 - 11:40
Event link: View original entry

Let's visit our university lab. We work on low-temperature nanophysics and transport spectroscopy, typically measuring current through experimental chip structures. That involves cooling and temperature control, dc voltage sources, multimeters, high-frequency sources, superconducting magnets, and a lot more fun equipment. A desktop computer controls the experiment and records and evaluates data.



Some people (like me) want to use Linux, some want to use Windows. Not everyone knows Perl, not everyone has equal programming skills, not everyone knows as much about the measurement hardware. I'm going to present our solution for this, Lab::Measurement (see also https://www.labmeasurement.de/ ). We implement a layered structure of Perl modules, all the way from hardware access and the implementation of device-specific command sets to high-level measurement control with live plotting and metadata tracking. Current work focuses on a port of the entire stack to Moose, to simplify and improve the code.

Back

Testing for testing

Home

Speaker Juan Julián Merelo
Room K.4.601
Track Perl Programming Languages
Time 11:40 - 12:20
Event link: View original entry

Although I have been using GitHub for assignment submission for a long time, it was only this year, after being fired from the Free Software Office of the University of Granada and losing my class assistant, that I felt the dire need to run some automatic checks on the assignments the students of a cloud computing class turned in.
This was not only running some tests on the code; since the students have total freedom in the language and other aspects of their project, it had to check things ranging from the presence of certain files to the use of GitHub issues for organizing tasks.
This was eventually solved with a Perl script that tests every pull request made by students. This has had a number of interesting and mostly positive side effects on student behavior and performance, which will be examined in this presentation, where I will do no theater and will dress in a single tee. Promised.
The take-home message is one that I have been trying to drive home since the beginning, when I talked about how Perl saved a conference I was organizing: Perl is an incredible tool for automating simple tasks that nobody thought could actually be automated; and automating things has many implications for the automator and the automatees; so Perl and daily life are always an interesting and winning combination.

Back

Perl in Computer Music

Home

Speaker Uri Bruck
Room K.4.601
Track Perl Programming Languages
Time 12:20 - 13:00
Event link: View original entry

Perl has modules that can be used in many aspects of computer music - that is, music where somewhere along the line a computer is involved in its generation. Perl can interface with MIDI, sound synthesis, analysis, and various music-making tools. The talk will present some ways of using Perl to express a musician's creativity.

Back

Template toolkit translations

Home

Speaker Mark Overmeer
Room K.4.601
Track Perl Programming Languages
Time 13:00 - 13:20
Event link: View original entry

Recently, I released Log::Report::Template, which extends Template Toolkit with a simple way to use translations. The (GNU) gettext translation infrastructure, where translations are organized via PO-files, is implemented by various Perl modules. They all extend the original (printf) formatted strings in some way or another. Log::Report has extended the power of translatable message ids much further than other modules, also adding features specific to generating HTML.

Back

Releasing to CPAN and GitHub

Home

Speaker Mark Overmeer
Room K.4.601
Track Perl Programming Languages
Time 13:20 - 13:40
Event link: View original entry

Over the previous 17 years, I published over 1000 distributions for 65 modules to CPAN. Some modules saw over 100 versions, because my workflow was: release often. This works very well when you work alone and regularly with your modules. Recently, my needs shifted a little. There are some (minor) advantages to using Git, and once that transfer is made, the step to GitHub follows. Everyone has their own way of releasing, with tricks to improve the process. I will demonstrate how my process works and how I changed my workflow, and show how I loaded my whole history into Git and GitHub with little effort.
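Loading a release history into git can be sketched roughly like this: one commit and one tag per historical release, oldest first (the directory names and importer identity are made up for the demo, not the speaker's actual tooling):

```shell
# Fake two historical release directories for the demo.
mkdir -p releases/release-1.00 releases/release-1.01
echo "version 1.00" > releases/release-1.00/README
echo "version 1.01" > releases/release-1.01/README

git init -q history-import
for rel in releases/*/; do
    version=$(basename "$rel")
    # Replace the working tree with this release's contents (keep .git).
    find history-import -mindepth 1 -not -path '*/.git*' -delete
    cp -r "$rel". history-import/
    git -C history-import add -A
    git -C history-import -c user.name=importer -c user.email=i@example.com \
        commit -q -m "Import $version"
    git -C history-import tag "$version"
done
git -C history-import tag | wc -l   # prints 2
```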

Back

AMENDMENT Presenting the Sympa Mailing List Manager and the new Sympatic CPAN module

Home

Speaker Marc Chantreux
Room K.4.601
Track Perl Programming Languages
Time 13:40 - 14:20
Event link: View original entry

Sympa is a mailing list management software, and as such it provides a couple of standard features which most mailing list software programs provide. In addition to this basic set of features, you may customize the software given the specifications you have for your mailing service.



The talk was rescheduled from 15:00 on the same day.

Back

The Dynamo After Diffie

Home

Speaker James Ellis Osborne III
Room K.4.601
Track Perl Programming Languages
Time 14:20 - 15:00
Event link: View original entry

The talk will be a 40 minute walkthrough of a Perl 6 Diffie-Hellman based example solution for biharmonic equations, including references/quotes to a small number of mathematicians, a few concepts surrounding the Riemann conjecture and its fallacy of synchronicity without a grounded base, and then promoting the potential of a perfect solution to the biharmonic equation as satisfying the crux of Arzelà's theorem I(Phi(n)) >= I(U) with a ubiquitous potential data structure (Phi).



The code in question is currently in use in the nuclear industry in FORTRAN, C, C#, C++ and Python. I'm retailoring the talk with students, and my takeaway goal is to get a feel for how an extension into a rapid development framework might be received in the community. The talk will be very flexible and there will absolutely be an option of not going very deep into the maths at all if the reception is not there.

Back

AMENDMENT Perl 6 on Jupyter

Home

Speaker Brian Duggan
Room K.4.601
Track Perl Programming Languages
Time 15:00 - 15:40
Event link: View original entry

The Jupyter project provides a language-agnostic client-server protocol for a Read-Eval-Print Loop (REPL) and a serialization format for a REPL session. In this talk, we explore the use and implementation of a Perl 6 server ("kernel") and how it interacts with various clients, such as a web client ("notebook") and a console client. We focus on distinctive aspects of using Rakudo Perl 6 in this environment, such as using the Perl 6 metamodel's introspection capabilities for autocompletion, discovery and entry of Unicode operators, and using Perl 6's asynchronous primitives for concurrent operations. We also investigate possibilities for widgets, magics, and interactive data visualization. The talk was rescheduled from 13:40 on the same day.

Back

Software necromancy with Perl

Home

Speaker Dave Lambley
Room K.4.601
Track Perl Programming Languages
Time 15:40 - 16:20
Event link: View original entry

Making ancient software work again using Perl.

Two case studies of how I’ve recovered old software. First, I recover a late 80s 4GL using Regexp::Grammars to save software after its run-time died from bitrot. Second, I use Perl to drive OpenGL and glue some games written in Turbo Pascal to SDL.

Back

Recycle Parsers With Grammar::Common in Perl 6

Home

Speaker Jeffrey Goff
Room K.4.601
Track Perl Programming Languages
Time 16:20 - 17:00
Event link: View original entry

Perl 6 grammars and regular expressions are incredibly powerful, but with great power comes great risk of mangling Spider-Man quotes. Let's look at some of the common language patterns and learn together how to refactor them into reusable modules, complete with pluggable actions including Abstract Syntax Trees and Just-In-Time evaluators, all ready for you to add to your language parser.

Back

Next Generation Config Mgmt: Reactive Systems

Home

Speaker James Shubin
Room UA2.114 (Baudoux)
Track Config Management
Time 09:00 - 09:50
Event link: View original entry

The main design features of the tool include:
* Parallel execution
* Event driven mechanism
* Distributed architecture
And a:
* Declarative, Functional, Reactive programming language.



The tool has two main parts: the engine, and the language.
This presentation will demo both and include many interactive examples showing you how to build reactive, autonomous, real-time systems.
Finally we'll talk about some of the future designs we're planning and make it easy for new users to get involved and help shape the project.



A number of blog posts on the subject are available: https://ttboj.wordpress.com/?s=mgmtconfig
Attendees are encouraged to read some before the talk if they want a preview!

Back

Provisioning vs Configuration Management Deployment vs Orchestration

Home

Speaker Peter Souter
Room UA2.114 (Baudoux)
Track Config Management
Time 10:00 - 10:50
Event link: View original entry

There's a lot of confusion around the differences between the various terms used when talking about configuring systems. Is Jenkins an orchestration tool or a deployment tool? Can Puppet provision systems? Is Ansible config management or an orchestration tool?



In this talk, we're going to boil down the core of each term and talk about the approaches used and where things cross over.

Names, as we know, are one of the hardest things in computer science. And in the DevOps space, we frequently see 4 terms come up again and again, and people often blur the lines between what each one is doing:



Deployment vs Provisioning vs Orchestration vs Configuration Management



It's easy to get mixed up between the terms, especially as a lot of the vendors who specialised in one area have started expanding into other areas to diversify their offerings and create a one-stop solution.



In this talk, we're going to discuss the differences between each term, what tools and approaches work well for each, how the lines have blurred in the container world and what the future might hold.

Back

A decade of config surgery with Augeas

Home

Speaker David Lutterkort
Room UA2.114 (Baudoux)
Track Config Management
Time 11:00 - 11:25
Event link: View original entry

Augeas is a configuration editing tool. It parses configuration files in
their native formats and transforms them into a tree. Configuration changes
are made by manipulating this tree and saving it back into native config
files.



The tool, and this description, recently turned 10 years old, a milestone
that I will celebrate with a stroll down memory lane, looking at Augeas'
original goals and subsequent achievements over the past decade. Over that
time, it has become an important building block for configuration
management tools like Puppet and Salt, and is used by tools such as EFF's
Let's Encrypt, OSQuery, and libvirt.



I will talk about some of the basic patterns for using Augeas to perform
surgery on configuration files, share some tips on how to get the most out
of its tree structure, and how to use it to perform idempotent changes. I
will also talk about a few areas where Augeas can be improved, and where
its use could be simplified.
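An idempotent-change example in the spirit of the talk: an augtool script that sets sshd options through Augeas' tree paths ("set" leaves the file unchanged once the value is in place). Applying it requires augtool and root, so the invocation is left commented out:

```shell
# Write an augtool script that edits /etc/ssh/sshd_config via the Augeas tree.
cat > harden_ssh.augtool <<'EOF'
set /files/etc/ssh/sshd_config/PermitRootLogin no
set /files/etc/ssh/sshd_config/PasswordAuthentication no
save
EOF

# Apply it (not executed here; requires Augeas and root):
# augtool -f harden_ssh.augtool
```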

Back

Cockpit: A Linux Sysadmin Session in your Browser

Home

Speaker Stef Walter
Room UA2.114 (Baudoux)
Track Config Management
Time 11:30 - 11:55
Event link: View original entry

Cockpit is an open source project that has built a new system admin UI for Linux. It turns a Linux server into something discoverable and usable. Its goal is to remove the steep learning curve for Linux deployments. But more than that, it's a real Linux session in a web browser.



Cockpit lets you immediately dive into things like storage, network configuration, system log diagnosis, container troubleshooting and Kubernetes orchestration. All while being zero-footprint: it goes away when not in use. Cockpit interacts well with other management and configuration tools, and it reacts instantly to system changes made elsewhere.



We'll look at how Cockpit is an actual Linux user session that you drive through your browser, running with user privileges and with access to the native system APIs and tools.



You'll be able to build new pieces of sysadmin UI as fast as you can write a shell script. In fact, we'll do it on stage in a few minutes.

Back

Terraform is maturing

Home

Speaker Walter Heck
Room UA2.114 (Baudoux)
Track Config Management
Time 12:00 - 12:25
Event link: View original entry

Terraform seems to be the configuration management of the cloud: a tool that lets us define our infrastructure as code so it becomes automatable and testable. It suffers from a bunch of issues, though; issues that were encountered and solved before in other tools like Puppet, and the story very much resembles that one. This session will draw some parallels between the early Puppet and early Terraform days and explain what we can do to handle these growing pains in a better way.
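To make the infrastructure-as-code claim concrete, here is a minimal Terraform configuration and the CI-style workflow that supports testing it (the local_file resource is chosen only to keep the sketch provider-light; the terraform commands are commented out since they need the binary):

```shell
# A tiny, self-contained Terraform configuration.
cat > main.tf <<'EOF'
resource "local_file" "motd" {
  filename = "motd.txt"
  content  = "managed by terraform\n"
}
EOF

# Typical automated checks (not executed here; require the terraform binary):
# terraform init
# terraform fmt -check
# terraform validate
# terraform plan -out=tf.plan
```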

Back

Breaking with conventional Configuration File Editing

Home

Speaker Markus Raab
Room UA2.114 (Baudoux)
Track Config Management
Time 12:30 - 12:55
Event link: View original entry

While at the top level configuration management tools usually have key/value interfaces,
other techniques are used in the layer below, such as:






In this talk, we will discuss a new approach using a key/value interface in every layer of configuration access, implemented in Puppet-Libelektra.
Unlike other key/value APIs, Puppet-Libelektra is independent of the concrete configuration file format, abstracts away the syntax, and supports validation.

In a time-consuming user study we
found the key/value interface to be significantly faster to use. Because
of integrated validation it is also safer, and because of local tooling
it is easier to use. Puppet-Libelektra is already used in practice: Elektra's
web and build server is managed using Puppet-Libelektra.

Back

Painless Puppet Providers

Home

Speaker David Schmitt
Room UA2.114 (Baudoux)
Track Config Management
Time 13:00 - 13:25
Event link: View original entry

Puppet's most powerful extension point is providing "native" types and providers: Ruby fragments that describe how Puppet can interact with resources in our systems. This has been part of Puppet's core code since the very first days, but adoption has been hampered by the API being tied deeply into Puppet's internals.



The new Resource API project provides a coherent and decoupled way to define new resource types. It is based on Puppet 4+ data types, making validation a breeze. The backend provider is a simple Ruby class with well-defined API requirements. Together, this makes the new providers easier to read and write, as well as easier to test.



In this talk I'll give an overview of the new API and how to write providers using it. Basic Ruby and Puppet knowledge is recommended.

Back

Cumin: Flexible and Reliable Automation for the Fleet

Home

Speaker Riccardo Coccioli
Room UA2.114 (Baudoux)
Track Config Management
Time 13:30 - 13:55
Event link: View original entry

Cumin is a Python API and CLI that provides a flexible and scalable way to execute multiple commands on clusters of hosts in parallel. It has a fine-grained host selection mechanism that allows dynamically querying multiple backends and combining their results. In addition to built-in backends such as PuppetDB, SSH known-hosts files and the OpenStack API, it's possible to plug in external backends. The current transport layer is SSH, although additional transport layers could easily be added. There are multiple execution strategies available, fine-tunable to suit different orchestration requirements. The outputs of the executed commands are automatically grouped for readability.
The talk will describe the reasons that led us to develop Cumin, its features, and its current usage at the Wikimedia Foundation for automation and orchestration.

Back

Highly Available Foreman

Home

Speaker Sean O'Keeffe
Room UA2.114 (Baudoux)
Track Config Management
Time 14:00 - 14:50
Event link: View original entry

This talk will demo a 2 node Foreman cluster with a separate 2 node Smart Proxy Cluster. I will start by presenting some of the architecture choices. I will then demo some of the tips and tricks you can use to achieve HA. I will finally share some of the future planned designs for HA and how this process could be simplified in the future.

Back

Network Automation Journey

Home

Speaker Walid Shaari
Room UA2.114 (Baudoux)
Track Config Management
Time 15:00 - 15:50
Event link: View original entry

Network devices play a crucial role, and not just in the data center: it's the WiFi, VOIP, WAN and, recently, underlays and overlays. Network teams are essential for operations. It's about time we highlight to the configuration management community the importance of network teams and include them in our discussions. This talk describes a systems engineer's personal experience of kickstarting a network team into automation: most importantly, how and where to start, challenges faced, and progress made. The network team in question uses multi-vendor network devices in a large traditional enterprise.

NetDevOps: we do not hear that term as frequently as we should. Every time we hear about automation or configuration management, it is usually about the applications or, if not, the systems that host them. How about the network systems and devices that interconnect and protect our services? This talk aims to describe the journey a systems engineer had as part of an automation assignment with the network management team. Building on lessons learned and challenges faced with systems automation, how can one kickstart an automation project and gain small wins quickly? Where and how to start the journey? What to avoid? What to prioritise? How to overcome the automation engineer's lack of network skills and the network engineers' lack of automation and Linux/Unix skills? What challenges were faced and how were they overcome? What fights should you give up? Where do I see network automation and configuration management as a systems engineer? What are the status quo and future expectations?

Back

Zero Downtime Deployment with Ansible

Home

Speaker Stein Inge Morisbak
Room UA2.114 (Baudoux)
Track Config Management
Time 16:00 - 16:50
Event link: View original entry

Ansible is a radically simple and lightweight provisioning framework which makes your servers and applications easier to provision and deploy. By orchestrating your application deployments you gain benefits such as documentation as code, testability, continuous integration, version control, refactoring, automation and autonomy of your deployment routines, server- and application configuration. Ansible uses a language that approaches plain English, uses SSH and has no agents to install on remote systems. It is the simplest way to automate and orchestrate application deployment, configuration management and continuous delivery.



In this talk you will be given an introduction to Ansible and learn how to provision Linux servers with a web-proxy, a database and some other packages. Furthermore we will automate zero downtime deployment of a Java application to a load balanced environment.



We will cover how to provision servers with:
* an application user
* a PostgreSQL database
* nginx with a load balanced reverse proxy
* an init script installed as a service
* zero downtime deployment of an application that uses the provisioned infrastructure
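The zero downtime idea in the last bullet can be sketched with Ansible's serial keyword, which rolls the deployment across one host at a time so the load balancer always has healthy backends (the group, file and service names below are invented for the sketch):

```shell
# A rolling-update playbook: serial: 1 takes one node at a time.
cat > deploy.yml <<'EOF'
- hosts: appservers
  serial: 1            # update one load-balanced host at a time
  tasks:
    - name: Deploy new application version
      copy:
        src: app.jar
        dest: /opt/app/app.jar
      notify: restart app
  handlers:
    - name: restart app
      service:
        name: app
        state: restarted
EOF

# Run it against an inventory (not executed here; requires Ansible):
# ansible-playbook -i inventory deploy.yml
```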

Back

Welcome to the Legal and Policy Issues devroom

Home

Speaker Tom Marble
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 09:00 - 09:05
Event link: View original entry

Welcome to and overview of the seventh year of the Legal and Policy Issues devroom

Back

Capture the GDPR with Identity management

Home

Speaker Juraj Benculak
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 09:05 - 09:30
Event link: View original entry

A new era of data privacy begins this year, and May approaches unstoppably. The media present the GDPR on a daily basis in volumes no one expected. The whole commercial sector is hyped with articles about the GDPR, recommendations, warnings and, in the end, MYTHS! The GDPR is no longer only the concern of multinational corporations. Individuals and small entrepreneurs are looking for answers too. This makes space for speculation, and many partial solutions are offered in the form of GDPR project draft templates, or migration of processing into a new "safe" environment without the appropriate paperwork. Even software solutions usually offer only partial safety measures, and you will need a bunch of them to fulfill a GDPR project.
A complete solution exists only as the cooperation of a legal expert analyzing the interpretation of EU legislation and an IT technician able to incorporate those requirements into daily operations. Even a complete solution does not mean 100% compliance, because there are so many threats you can never be utterly safe from.
This presentation will cover what you must not omit to be compliant, how to give effect to the rights of data subjects and how to craft your GDPR solution. We will discuss lawful bases for data processing, consent requirements and tools able to manage them effectively. In the end, we may think about the design of an identity management tool that combines the advantages of identity management into one solution dealing with various GDPR issues.
GDPR areas included in light of identity management: lawful basis and consent management, risk assessment, a data breach tool, a DPO's control tool and a data subject's rights control tool.

I intend to speak about how to think about a GDPR solution, what is required and how to achieve it by either customizing an existing tool or designing a brand new tool to your own preferences. As a lawyer working amid programmers, in this speech I will be considering both fields of view and looking for conjunctions. The areas touched on are lawful basis management, consent management, risk assessment, data breaches, DPOs and the rights of data subjects.

Back

Artificial intelligence dealing with the right to be forgotten

Home

Speaker Cristina Rosu
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 09:30 - 09:55
Event link: View original entry

The right to be forgotten follows from privacy law rulings, and encounters difficulties in defining its applicability in the context of continuous tech evolution. Artificial intelligence development particularly raises problems due to the nature of machine learning and the obvious differences between the memory processes of humans and AI systems. The law does not have an answer apart from the general ruling on the right to be forgotten, lacking a particular focus on the context of AI's nature and applicability. What does an analysis of the current legislation look like, and what can one implement de lege ferenda in order to assure the required attention to the AI context?

Privacy law is in the spotlight of the modern rechtsstaat, and doctrine holds different opinions on how it is appropriately regulated. Considering the amplitude of human interactions and the juridical nature of personality rights, it is evolving at an accelerated pace. The right to be forgotten, one of the components of this area, reached its first peak with the ruling of the European Court of Justice in the matter of Google's search engine, stating that as a data controller it has to conform to Directive 95/46/EC, the member states being held to implement those principles in national legislation.



Considering the evolution of technology, a question that arises is how states and international entities apply, de lege lata, the notion of the right to be forgotten in the context of artificial intelligence, and what could be done in order to have consistent juridical norms in this area.



The right to be forgotten will be analysed as a concept, in its juridical nature, in the doctrinal debates regarding the notion, and in how it interacts with artificial intelligence. Psychological memory and an AI system's way of "remembering" will be described in comparison, emphasising the aspects that could be relevant to compliance with the spirit of the right to be forgotten. Considering the fundamental differences between how human memory and machine learning memory processes work, a common ground will be sought for how deletion can be considered an act of forgetting. The analysis will then continue on the path of finding potential legal and policy adaptations to the current statutory law.

Back

Understanding 26 U.S.C. § 501, and Organizational Governance

Home

Speaker James Shubin
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 10:00 - 10:25
Event link: View original entry

We often hear about the charitable organizations registered in the United States Federal tax system, but talking about the implications is sometimes taboo.
I'll try and talk about these issues openly, and discuss some of the consequences (both positive and negative) for public software.
I'll provide a list of a few well-known organizations, and explain why it both matters and doesn't.
Lastly I'll present some reporting that I did while researching this presentation.


Back

Researchers and Software Licenses

Home

Speaker Andreas Schreiber
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 10:30 - 10:55
Event link: View original entry

In science and engineering, more and more software is published as Open Source software or uses other Open Source projects. Due to the different licenses, with their requirements and restrictions, and the resulting license compatibility issues, scientists and engineers must be aware of licensing. Ideally, they have some basic understanding of Open Source licensing. Unfortunately, in practice this understanding is often not present, especially if more than one Open Source license is involved. But how do you teach scientists and engineers about open source licenses?

In this talk, we describe our strategy at the German Aerospace Center (DLR) to raise awareness of licensing issues among our domain scientists and to enable and support them in using and publishing Open Source software without facing licensing problems. Our strategy is based on providing hands-on material and training courses first, instead of starting by issuing "official" but impractical process guides. Our current focus is on knowledge sharing between peer scientists, using online tools as well as face-to-face workshops. The findings and feedback collected from DLR scientists have proven helpful in improving existing documentation and identifying further steps. Our strategy is based on years of experience and on permanently updating and extending our initial approach. We want to share the knowledge gained with other projects, researchers, and companies to help their Open Source communities improve.

Back

Comparative Law of Licenses and Contracts in the US, UK and EU

Home

Speaker Pamela Chestek
Andrew Katz
Michaela MacDonald
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 11:00 - 11:25
Event link: View original entry

This session will be a panel of lawyers from the US, the UK, and the EU. We will take on the evergreen question "is it a license or a contract?", describing how each respective system distinguishes the two.

Each panelist will begin by surveying their legal system's general jurisprudence on the definition of license and contract and describing how their respective system distinguishes the two. The panelists will then apply these legal concepts to some exemplar free software licenses and offer predictions on how and to what extent one accused of breaching the license would be held accountable.

Back

Advocating For FOSS Inside Companies

Home

Speaker Richard Fontana
Carol Smith
Jilayne Lovejoy
Max Sills
Room UA2.220 (Guillissen)
Track Legal and Policy Issues
Time 11:30 - 11:55
Event link: View original entry

Companies who use and contribute to free and open source software often have
internal advocates who make policy decisions, help with tooling
recommendations, and teach others about the importance of free and open
source. Different offices and departments will often differ in
the ways they manage using, contributing to, and releasing free
and open source software.

The panel will explore the different approaches companies take to this work.

A Usability Survey of Free Software Licenses

Speaker: Brett Smith
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 12:00 - 12:25
Event link: View original entry

We want software creators to use FOSS licenses. We also know people make mistakes in the process, or don't even try because they've heard it's "too complicated." Just like with software, we would do well to study these failures and use them as opportunities to improve the usability of our licenses. This talk aims to start that process by identifying known problems and considering some possible solutions.

Usability issues that will be addressed in this talk include:






In his work as the FSF's license compliance engineer, Brett had the opportunity to investigate the licensing of many projects. He saw what creators do, both right and wrong. He was also deeply engaged in the drafting processes for GPLv3 and MPLv2, and was able to compare and contrast where the resulting licenses succeeded and fell short at improving usability.

Outsourcing Source Code Distribution Requirements

Speaker: Stefano Zacchiroli, Alexios Zavras
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 12:30 - 12:55
Event link: View original entry

A well-known obligation in some FOSS licenses is the requirement to provide "complete corresponding source code" (CCSC). After the initial collection and packaging of the CCSC, its provision imposes a burden that may persist for a very long time. Ensuring that the CCSC is always available is not a simple matter, especially considering that the original development team might change structure, people might be replaced or change roles, legal entities may disappear, etc.



In this talk we will present the possibility of using Software Heritage, an independent, non-profit third party, in order to outsource CCSC provision. As part of the exploration, we will review the legal obligations in popular copyleft licenses, explain the burden of long-term CCSC hosting, describe the hosting infrastructure in place, and propose a publishing workflow that might help FOSS producers painlessly comply with the licenses.

Too young to rock'n'roll (and to contribute)

Speaker: Dominik George, Niels Hradek
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 13:00 - 13:25
Event link: View original entry

Young people are obviously big users of technology. While this carries its own legal questions, enabling children to be creative and perhaps contribute to technology is a really difficult topic. Licences, hosting terms of service, contributor agreements - all these legal documents are made by adults, for adults, and minors often struggle to agree to them, even with parental consent. We want to look at what the issues are and raise awareness of the importance of contributions by children.

For many developers, contributing to open source projects is a trivial thing to do. Fire up GitHub, fork that project, make some changes, commit, push, create a pull request - done! Also, for most project maintainers, receiving contributions looks just as easy - just have someone log in to GitHub, fork,… and so on.



But every one of us also knows that most of the time, we do not care too much about analysing the terms we submit to. As contributors, we don't really care about the project's licence as long as it is free, and as maintainers, we expect contributors to do exactly that. The same goes for the terms of use of our favourite Git hosting service, translation tool, you name it.



One thing that makes this very easy for adult developers is that we are legally able to accept just about anything that might be written in such terms - but at least one subgroup of developers can't: children below a certain age limit.






If any free software project has children as a target group, we (the people at Teckids e.V.) strongly believe it should make contributions by minors as easy as its usage. This involves caring about all legal documents that might get in the way.



In this panel, we would like to raise some awareness and discuss experiences, ideas, possibilities, and the like, in order to make the FOSS world a good place for everyone, independent of their age.

Harmonize or Resist?

Speaker: Deb Nicholson
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 13:30 - 13:55
Event link: View original entry

There's a lot of pressure from the US (and some of its allies) to "harmonize" with American ideas about patents and copyrights. The response by different nations has been wildly different -- some have chosen to play along while others have chosen to resist. What makes sense for one country won't make sense for another, and it's all in the details. This talk examines existing legal patterns, the state of local economies and varying trade relationships in an effort to survey what kinds of resistance are possible or effective.

There's a lot of pressure from the US (and some of its allies) to "harmonize" with American ideas about patents and copyrights. The response by different nations has been wildly different -- some have chosen to play along while others have chosen to resist. What makes sense for one country won't make sense for another, and it's all in the details. This talk examines existing legal patterns, the state of local economies and varying trade relationships in an effort to survey what kinds of resistance are possible or effective.



These issues have implications for not only free software activists, but for anyone who is concerned about local sovereignty and freedom of expression. Laws are written for the powerful to help them maintain their power, and resistance is always difficult. But what if we could share not only our code, but our strategies for passing laws, rearranging policy and carving out a safe place for free software and free culture to flourish?



This is a top level survey of the global state of software patents and copyright law. Both local and global policies affect our ability to build things that are needful or locally useful, even when they aren't profitable. Local innovation is our best chance to solve many of our local problems, so let's get to it!

People can't care when they don't know

Speaker: John Sullivan
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 14:00 - 14:25
Event link: View original entry

We go through a lot of work and angst as a community over licensing --
what is free, what is not, what is open source, what is compatible
with what, which software is to use which license. Then, after all the
work put into these decisions, the result is hidden away, only to be
seen or become relevant in the event of some legal challenge or
insider decision. Licensing information is by and large not
sufficiently communicated to end users, even though we are trying to
build a movement of users who prefer freely licensed software.

Two years ago in this Devroom, I talked about license choosers like
Github's choosealicense.com, and how they might influence license
selection for new projects. In response to that talk, Github made some
important improvements. How are they doing now, and what about other
sites and systems where users frequently obtain software, like Google
Play, the Chrome Web Store, and the Firefox Add-ons library? Too often
we find that such sites do not display license information at all in
key places, or if they do, it's in a way that is not as clear or as
strong as many of us in the free software movement would like to see.
How can users prefer free software when they aren't given the info
they need to choose it? I'll survey the scene, highlight some
examples, and talk about how they can be addressed while considering
the objections/concerns of the site operators.

Public money, public code, the Italian way

Speaker: Giovanni Battista Gallus, Fabio Pietrosanti (naif)
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 14:30 - 14:55
Event link: View original entry

FSFE has recently launched the campaign "Public Money, Public Code", which promotes legislation requiring that publicly financed software developed for the public sector is made publicly available under a Free and Open Source Software licence. However, under the Italian Digital Administration Code, we already have a provision (amended in 2016) which seems to have similar effects. We tried to use it as a legal hack to make available the source code of publicly financed whistleblowing software.

The campaign "Public Money, Public Code", recently launched by the FSFE, aims at convincing representatives to propose legislation requiring that publicly financed software developed for the public sector is made publicly available under a Free and Open Source Software licence. However, under the Italian Digital Administration Code (d.lgs 82/2005), we already have a provision (amended in 2016), in art. 69, which states that all public administrations have a duty to make available custom-made publicly financed software, together with its documentation, and to release it under an open licence, in free use, to other public administrations and other legal entities (with a few exceptions regarding public security, national defense and the electoral process). We tried to use art. 69 as a legal hack to make available the source code of publicly financed whistleblowing software from some municipalities and publicly controlled companies. The outcome has not been entirely satisfactory.



The talk will be given by Giovanni Battista Gallus (fellow) and Fabio Pietrosanti (President) of the Hermes Center for Transparency and Digital Human Rights. The mission of the Hermes Center is to promote and develop awareness of, and attention to, transparency and accountability in society, whether related to society at large or not. Our goal is to increase citizens' involvement in the management of matters of public interest and to boost the active participation of workers and employees in the correct management of the corporations and companies they work for.



Giovanni Battista Gallus:
Lawyer, ISO 27001 Lead Auditor, free software advocate, former President of @CircoloGT, Nexa Fellow. IT law, privacy, security & drones.



Copyright, criminal, data protection/privacy, and IT and new technologies law are his main areas of expertise. In the last two years, he has devoted a significant part of his practice to the legal aspects of UAVs (drones). After a cum laude degree in Law in Italy, he moved to Great Britain for a Master of Laws in Maritime Law and Information Technology Law at University College London (UCL), and afterwards earned a PhD. In 2009 he obtained the European Certificate on Cybercrime and Electronic Evidence (ECCE). He is an ISO 27001:2005 Certified Lead Auditor (Information Security Management System). A member of the Bar of Cagliari since 1996, admitted to the Supreme Court since 2009, he is a member of the Department "Informatica Giuridica" at the Università Statale of Milan and a teacher at the Post-Graduate Course in Digital Crime and Digital Forensics. He is a Fellow of the Nexa Center for Internet and Society and of the Hermes Center for Transparency and Digital Human Rights. Author of several publications in the above-mentioned areas and a speaker at major national and international congresses, he combines his legal profession with intense teaching activity, mainly in the fields of copyright, Free/Open Source Software, data protection, IT security and digital forensics. He is a former President of the Circolo dei Giuristi Telematici, founded in 1998 as the first initiative to gather IT law experts in Italy.



Fabio Pietrosanti:
Fabio Pietrosanti has been part of the hacking digital underground with the nickname “naif” since 1995, while he’s been a professional working in digital security since 1998. President and co-founder of the Hermes Center for Transparency and Digital Human Rights, he is active in many projects to create and spread the use of digital tools in support of freedom of expression and transparency.



A member of Transparency International Italy and an operator of Tor anonymity nodes and Tor2web anonymous publishing nodes, he is among the founders of the GlobaLeaks anonymous whistleblowing project, nowadays used by investigative journalists, citizen activists and public administrations for anti-corruption purposes. He is an expert in technological innovation in the fields of whistleblowing, transparency, communication encryption and digital anonymity.



As a veteran of the hacking and free software environment, he has participated in many community projects, such as Sikurezza.org, s0ftpj, the Winston Smith Project and Metro Olografix, among others. Professionally, he has worked as a network security manager, senior security advisor, entrepreneur and CTO of a startup in mobile voice encryption technologies.

What's the difference between all those open data licenses?

Speaker: Marc Jones
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 15:00 - 15:25
Event link: View original entry

My talk will briefly review the major open data licenses, including the differences between them and their interaction with free software licenses. Particular emphasis will be placed on their application to databases.



There are only a few popular open data licenses with a focus on open database licensing, but there is very little guidance on the differences between them. This talk will explain those differences, why they exist, and when you might prefer one license over another. I will also discuss the sui generis database right, why it exists and how it differs from traditional copyright.

This talk will be a whirlwind introduction to the common open data licenses that exist.



I will outline the major open data licenses, including:
* Creative Commons Zero Public Domain Dedication (CC0)
* Open Data Commons Public Domain Dedication and Licence (PDDL)
* Open Data Commons Attribution License (ODC-By)
* Open Data Commons Open Database License (ODbL)
* Creative Commons Attribution (CC BY)
* Creative Commons Attribution-ShareAlike (CC BY-SA)



I will also review some lesser-known open data licenses, including the recently released licenses created by the Linux Foundation. We will look at what uses the authors may have had in mind that motivated them to create these licenses:
* Community Data License Agreement – Permissive, Version 1.0
* Community Data License Agreement – Sharing, Version 1.0
* United Kingdom's Open Government License
* Canadian Open Government License.



We will discuss why open data licenses exist and how they differ from free software licenses. We will look at the structure of the licenses to see the legal differences in the rights they grant and how they operate. For example, we will discuss which of these are public domain dedications, which are licenses, and which are agreements. We will also talk about when, or whether, that even matters.



After looking at the structure of the licenses, we will turn to which differences between them matter for content creators, and ask whether it matters which license you use.



We will also delve into special considerations of database licensing like the protection and use of personal data, and how the "thin" copyright protection provided to databases impacts the design of open data licenses.



Finally, we will ask whether you even need to use an open data license in your free software project.

The Future of Copyleft: Data and Theory

Speaker: Luis Villa
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 15:30 - 15:55
Event link: View original entry

Using (copylefted!) data from the 32 repositories and 2.5M projects covered by libraries.io, we'll survey the state of copyleft. This will include the growth of AGPL, the reciprocal scope of the GPL, and what stacks copyleft has been most successful in. We'll then use the data to inform a look at theory and discuss where copyleft might be going: where is copyleft's success? where and how is that relevant in the modern software landscape? what directions might future copyleft licenses take?

General note: scheduling permitting (they're also running a dev room), I may be joined by Andrew Nesbitt or Ben Nickolls, co-founders of Libraries.io.



Many of the various assessments of license "popularity" have been skewed either by using a very limited data set (e.g., Debian, Fedora, which cover ~1/100th of FOSS) or by being proprietary/unreproducible (Black Duck, etc.) In this talk I'll discuss another open data source by diving into the licensing data from Libraries.io, which we believe to be the largest open repository of information about packaged FOSS. Because it includes dependency data, it can tell us not only about numeric usage, but also about relative importance and position in the stack of various licenses; and since it has a notion of "stack" (tied to repositories) it can inform some of our intuitions about how license usage varies by stack. We'll use this to assess



(The biggest shortcoming of the Libraries.io data is that, because there are no "repos" per se, it does not cover C/C++/core operating system components; I'll also discuss this in the talk.)



After discussing the current state of copyleft using data, we'll discuss what the future of copyleft might look like. This discussion will be informed by the data, but not limited to the data. Among other things, I'll discuss the theoretical value of copyleft in a world where FOSS has "won", indicated demand for copyleft in the culture and data space (e.g., institutional partners pushing for data in CC 4 & CDLA; interest in non-licensing solutions), License Zero, the legal challenges of copyleft in the SaaS space, and the growing concerns about developer sustainability. [Tidelift, my company, is working in this sustainability space, but obviously I'll avoid making the talk a pitch for the company.]



I will likely conclude (contingent on some further data analysis) that there continues to be interest and demand for copyleft, but that a major driver of the perceived (and in some cases very real) decline of GPL + friends is a result of the inadequacy of our current copyleft legal tools. My hope is that this will be a call to action to the community to continue innovating around copyleft.

Gutenberg to Google Fonts: the sordid history of typeface licensing issues

Speaker: Nathan Willis
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 16:00 - 16:25
Event link: View original entry

Fonts sit at a peculiar crossroads in the software license-compliance world. They contain executable instructions as well as static, visual data. They are binary files that, even when "open", are often shipped without source code. They are sensitive to namespace collision problems but are only exposed in user interfaces by name. The files themselves are governed by copyright, but the design they encode is not considered copyrightable in the US and other jurisdictions. Furthermore, the typemaking industry has long vacillated on the appropriateness of reviving, reusing, and extending earlier works as new designs. This talk provides an overview of the intellectual-property law and the community norms that concern sharing, reusing, and extending typeface designs. It will help developers navigate the intellectual-property and license-compliance issues they may encounter when using and redistributing free-software and open-source fonts.

Fonts sit at a peculiar crossroads in the software license-compliance world. They contain executable instructions as well as static, visual data. They are binary files that, even when "open", are often shipped without source code. They are sensitive to namespace collision problems but are only exposed in user interfaces by name. The files themselves are governed by copyright, but the design they encode is not considered copyrightable in the US and other jurisdictions. Furthermore, the typemaking industry has long vacillated on the appropriateness of reviving, reusing, and extending earlier works as new designs.



This talk provides an overview of the intellectual-property law and the community norms that concern sharing, reusing, and extending typeface designs. It highlights several specific issues relevant to modern digital font projects:



• The law has struggled to keep up with industry practices on the subject of copying a competitor's type design. In the cold-metal era, foundries routinely copied and sold designs originating from the competition, even mechanically reproducing metal type in bulk. When digital fonts arrived in the 20th Century, the files themselves were originally not regarded as intellectual property, and wholesale copying picked up once again. Today, the files are considered copyrighted, but the designs are not. This can leave users in an uncertain position, as modern tools allow designs to be copied digitally, without copying the original file, and the law has not drawn clear lines around what practices are permissible.
• Reviving a historical typeface is generally considered an acceptable practice if the design process begins with primary materials (such as metal types, proofs, or out-of-copyright prints). There is fierce disagreement, however, about how far in the past a designer must go before a typeface is considered fair game; revivals of typefaces that were "works for hire" by a corporate foundry or printer in particular are controversial because ownership of the intellectual property is debatable. Moreover, there is disagreement about how the original designers of a typeface can and should be credited in a revival, particularly when the revival makes alterations and updates. Users need to be particularly aware of these issues when selecting typefaces for branding and advertising purposes, which can attract public criticism.
• Font names can attract more attention than the visual design itself, for the simple reason that installed fonts are, traditionally, searchable and accessible on computer systems only by their name. Name collisions between fonts are increasingly common, yet little effort has gone into addressing these collisions through trademark law.
• The leading license used for free-software fonts, the SIL Open Font License (OFL), is often misunderstood, and those misunderstandings can place developers and users of open fonts in a bind on compliance issues. For example, the OFL does not require source-code availability, but it does include restrictions limiting the circumstances under which the fonts can be sold. It also includes an optional clause that, at the licensor's discretion, requires users to change the user-visible name of the font if any alteration is made to the binary, including common practices like subsetting the font to deliver it over the web. Downstream projects that use OFL-licensed fonts and assume that the OFL is broadly compatible with common free-software licenses may not be aware when license-compliance problems occur.



There are no easy answers for resolving these issues in free software, but this talk will provide developers and communities with advice for identifying and coping with licensing issues relating to the fonts that they utilize in their projects.

Organizer's Panel

Speaker: Tom Marble, Bradley M. Kuhn, Karen Sandler, Richard Fontana
Room: UA2.220 (Guillissen)
Track: Legal and Policy Issues
Time: 16:30 - 16:55
Event link: View original entry

The organizers of the Legal & Policy Issues DevRoom will reflect on recent developments in software freedom policy and law, and will discuss some of the topics and issues raised in this year's and past year's DevRooms.

LPI Exam Session 3

Speaker: LPI Team
Room: UB2.147
Track: Certification
Time: 09:30 - 11:30
Event link: View original entry

LPI offers discounted certification exams at FOSDEM

As in previous years, the Linux Professional Institute (LPI) will offer discounted certification exams to FOSDEM attendees.
LPI offers level 1, level 2 and level 3 certification exams at FOSDEM with an almost 50% discount.



For further information and instructions see https://fosdem.org/certification.

LibreOffice Exam Session 1

Speaker: LibreOffice Team
Room: UB2.147
Track: Certification
Time: 12:00 - 13:00
Event link: View original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

LibreOffice Exam Session 2

Speaker: LibreOffice Team
Room: UB2.147
Track: Certification
Time: 13:30 - 14:30
Event link: View original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

LibreOffice Exam Session 3

Speaker: LibreOffice Team
Room: UB2.147
Track: Certification
Time: 15:00 - 16:00
Event link: View original entry

LibreOffice Certifications are designed to recognize professionals in the areas of development, migrations and training who have the technical capabilities and the real-world experience to provide value-added services to enterprises and organizations deploying LibreOffice on a large number of PCs.

In the future, LibreOffice Certifications will be extended to Level 1 and Level 2 Support professionals.



The LibreOffice Certification is not targeted to end users, although Certified Training Professionals will be able to provide such a service upon request (although not as a LibreOffice Certification). In general, end user certification is managed by organizations with a wider reach such as the Linux Professional Institute.

Introduction to web development in C++ with Wt 4

Speaker: Roel Standaert
Room: UB2.252A (Lameere)
Track: Embedded, mobile and automotive
Time: 10:00 - 10:30
Event link: View original entry

This talk is an introduction to Wt, a server-side web framework written in C++. Wt 4 is the latest version and introduces a more modern C++11-based API.

Wt is a server-side web framework written in C++. Unlike many server-side web frameworks, Wt is widget-based. It is designed to deliver a development experience similar to desktop UI frameworks, abstracting away the underlying web technologies. If JavaScript support is unavailable, it will even fall back to plain HTML automatically. This allows C++ developers to quickly develop highly interactive applications. Because it's written in C++, Wt is especially well-suited for embedded platforms.



Wt 4, released in September 2017, updates its API to use C++11. As a result, the API of Wt 4 is clearer and more exception-safe, relies on fewer Boost dependencies, and compiles faster.



This talk will be an introduction to Wt 4 for C++ programmers. We will show how to make a "hello world"-style web application with Wt, and demonstrate some selected features of Wt. By the end of the talk, the audience should have enough knowledge to get started with Wt 4. No knowledge of older versions of Wt is required.
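To give a flavour of the "hello world"-style application the talk will build, here is a minimal sketch based on Wt 4's published API (the class name `HelloApplication` is illustrative; the program requires the Wt library and one of its connectors, e.g. wthttp, to compile and run):

```cpp
#include <memory>

#include <Wt/WApplication.h>
#include <Wt/WContainerWidget.h>
#include <Wt/WText.h>

// A minimal Wt 4 application: one page containing a single text widget.
// Widget ownership is expressed with std::unique_ptr, part of the
// C++11-based API rework in Wt 4.
class HelloApplication : public Wt::WApplication {
public:
    explicit HelloApplication(const Wt::WEnvironment& env)
        : Wt::WApplication(env) {
        setTitle("Hello, Wt 4");
        // root() is the top-level container; add a text widget to it.
        root()->addWidget(std::make_unique<Wt::WText>("Hello, world!"));
    }
};

int main(int argc, char** argv) {
    // WRun starts the built-in server and creates one application
    // instance per browser session via the supplied factory function.
    return Wt::WRun(argc, argv, [](const Wt::WEnvironment& env) {
        return std::make_unique<HelloApplication>(env);
    });
}
```

Once built against Wt, such a program is typically started with connector options on the command line (for wthttp, something like `--docroot . --http-address 0.0.0.0 --http-port 8080`); the exact flags depend on the connector used.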

How to build autonomous robot for less than 2K€

Speaker: Miika Oja (PuluMan)
Room: UB2.252A (Lameere)
Track: Embedded, mobile and automotive
Time: 10:30 - 11:00
Event link: View original entry

Telepresence, Delivery Boy, Security and Follow Me in one PULUrobot. PULUrobot solves the autonomous mobile robotics complexity issue without expensive parts, without compromise.



By fearless integration and from-scratch design, our platform can do SLAM, avoid obstacles, feed itself, and carry a payload of over 100 kg, for less than 2000 EUR.



An application ecosystem can grow around it, as we offer a ready-made Open Source (GPLv2) solution in a tightly coupled HW-SW codesign.



No need to solve a ROS module puzzle, or pay $50000 for a closed product anymore!

Pulu Robotics Oy's chief HW/SW designer Antti Alhonen presents the new open source startup to the community.



Pulu Robotics Oy was founded in July 2017, in Finland, to solve our own needs, with an efficient team of three. No one had prior knowledge of robotics.



By studying the market and other startups, we realized that a common mistake is to use "robotic modules" as building blocks. They are highly expensive, provide little bang for the buck, are often inefficient, and require complex software middleware (such as ROS) as the glue in between.



Due to our combined background in mechanical, electrical, software and manufacturing design, we took the approach of designing as much as possible by ourselves.



A Bill-of-Materials example: instead of spending $100 on a BLDC motor driver module, our $100 main PCB integrates two such motor controllers - plus a 100W li-ion charger, a 5V 10A power supply, a MEMS compass, an accelerometer, two 3-axis gyroscopes, power IO, and a powerful microcontroller.



Having low-level calculation resources available right where the sensor data is first acquired means good data synchronization and low-latency reaction, and it simplifies the higher-level software requirements. We actually love the power of embedded!



During the process, we found out that we simply don't need ROS - we already solved most of the puzzle on the lowest level possible, simply and efficiently.



It's often said that it's hard and slow to develop custom HW/SW from scratch. This has clearly not been the case: it took us 4 months from total zero to completely design all the electronics, embedded software, a simple prototype Simultaneous Localization and Mapping (SLAM) stack, etc., and to build the first actual prototype that could autonomously explore its surroundings, perform navigation tasks, and find its own charger. At that point, we thought about the open source business model and established the company, thinking: this is too good to keep in our own basement.



The code footprint for embedded, higher level backend, and user interface frontend is currently 30 000 lines total.



We still have a long way to go. We are now selling the very first generation of robots to early adopters, hoping to give a kick start to the open source community as soon as possible. Behind the curtains, we are focusing on the development of our next 3D sensor system, which will replace the current scanning 2D lidar with 360x90-degree full 3D distance data, at the same price we currently pay for the Scanse 2D lidar used in the first small-scale production batch.

Drive your NAND within Linux

Speaker: Miquèl Raynal
Room: UB2.252A (Lameere)
Track: Embedded, mobile and automotive
Time: 11:00 - 12:00
Event link: View original entry

NAND flash chips are almost everywhere, sometimes hidden in eMMCs, sometimes as bare parallel NAND chips under the orders of your favorite NAND controller. Each NAND vendor follows its own rules. Each SoC vendor creates its preferred abstraction for interacting with these chips.



Handling all of that requires some abstraction, and guess what? That is currently being enhanced in Linux, and that is what this talk is about!

NAND flash chips are almost everywhere, sometimes hidden in eMMCs, sometimes as plain parallel NAND chips under the orders of your favorite NAND controller. Each NAND vendor follows its own rules. Each SoC vendor creates its preferred abstraction for interacting with these chips.



Handling all of that requires some abstraction, and guess what? That is currently being enhanced in Linux! A new interface, called "exec_op", is showing up. It has been designed to match the most diverse situations. It should ease the support of advanced controllers as well as the implementation of vendor-specific NAND flash features.
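The core idea behind exec_op is that a whole NAND operation (a page read, an erase, ...) is described as a flat sequence of elementary instructions — command cycles, address cycles, data transfers, wait-for-ready — which the controller driver then executes in a single call. A rough Python sketch of that pattern (the instruction types and the fake controller are illustrative, not the real kernel structures):

```python
from dataclasses import dataclass

# Illustrative instruction types, loosely mirroring the idea behind
# Linux's exec_op interface (NOT the actual kernel API).
@dataclass
class Cmd:     opcode: int          # command cycle
@dataclass
class Addr:    cycles: list         # address cycles
@dataclass
class DataIn:  length: int          # read data from the chip
@dataclass
class Waitrdy: timeout_ms: int      # wait for the ready/busy line

class FakeController:
    """A pretend NAND controller that just logs what it would do."""
    def __init__(self):
        self.log = []

    def exec_op(self, instructions):
        # The driver sees the whole operation at once and can map it
        # onto whatever its hardware supports (raw cycles, macro ops...).
        for instr in instructions:
            if isinstance(instr, Cmd):
                self.log.append(f"CMD 0x{instr.opcode:02x}")
            elif isinstance(instr, Addr):
                self.log.append("ADDR " + " ".join(f"0x{c:02x}" for c in instr.cycles))
            elif isinstance(instr, DataIn):
                self.log.append(f"READ {instr.length} bytes")
            elif isinstance(instr, Waitrdy):
                self.log.append(f"WAIT R/B {instr.timeout_ms} ms")
        return 0

# A page read on a typical parallel NAND: CMD 0x00, address cycles,
# CMD 0x30, wait for ready, then clock the page data out.
ctrl = FakeController()
ctrl.exec_op([Cmd(0x00), Addr([0x00, 0x00, 0x01, 0x00, 0x00]),
              Cmd(0x30), Waitrdy(100), DataIn(2048)])
print("\n".join(ctrl.log))
```

Because the controller driver receives the complete sequence instead of isolated command/address/data callbacks, advanced controllers can recognize whole operations and offload them, while dumb controllers simply replay each instruction.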



This talk will start with some basics about NAND memories, especially their weaknesses and how we work around them. It will also show how the interaction between NAND chips and NAND controllers has been standardized over the years, and how NAND controllers are meant to be driven within Linux, through the abstraction of the MTD (Memory Technology Device) layer and the NAND framework.
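The main weakness alluded to here is that NAND cells occasionally flip bits, which is why pages are stored together with an error-correcting code. As a toy illustration of the principle (not the actual ECC layout the MTD layer uses), here is a single-bit-error corrector built on the classic XOR-of-bit-positions trick:

```python
def ecc_compute(data: bytes) -> int:
    """XOR together the (1-based) positions of all set bits. A single
    bitflip changes exactly one term, so the syndrome pinpoints it --
    the same principle as the Hamming ECC used on SLC NAND."""
    acc = 0
    for byte_idx, b in enumerate(data):
        for bit in range(8):
            if b >> bit & 1:
                acc ^= byte_idx * 8 + bit + 1
    return acc

def ecc_correct(data: bytearray, stored_ecc: int) -> bool:
    """Return True if a bitflip was found and repaired in place."""
    syndrome = ecc_compute(data) ^ stored_ecc
    if syndrome == 0:
        return False                     # page is clean
    p = syndrome - 1                     # back to a 0-based bit position
    data[p // 8] ^= 1 << (p % 8)         # flip the bad bit back
    return True

page = bytearray(b"hello nand world")
ecc = ecc_compute(page)                  # stored in the OOB area on real NAND
page[3] ^= 0x10                          # simulate a bitflip in the cell array
fixed = ecc_correct(page, ecc)
print(fixed, page.decode())
```

Real NAND ECC schemes (Hamming, BCH) work per 256/512-byte chunk and can correct multiple bitflips, but the read-verify-correct flow is the same.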

Back

O’PAVES: An open platform for autonomous vehicle tinkerers

Home

Speaker Fabien Chouteau
RoomUB2.252A (Lameere)
TrackEmbedded, mobile and automotive
Time12:00 - 12:30
Event linkView original entry

O’PAVES (Open Platform for Autonomous VEhicle Systems) aims at providing an open source and open hardware platform for the prototyping and development of autonomous vehicles. In the current state of the project, what we have is a remote controlled car, but with sensors that you would find on an autonomous vehicle, such as an IMU and distance sensors. The goal is not necessarily to implement the autonomous features ourselves, but to allow users to do it, either by modifying the firmware or by adding an external computer such as a Raspberry Pi or OpenMV to provide autonomous driving features.



The hardware is open and was designed with FOSS tools such as KiCad and FreeCAD. It is made of a PCB that acts as the frame of the car, 3D-printed parts, off-the-shelf parts from pololu.com (battery, motors, sensors), and a Crazyflie 2.0 nano drone. The drone (without its motors) is connected to the O’PAVES board using an extension port. This solution makes the platform easy to build because most of the very small electronic assembly is already done on the drone. The software is developed in Ada and also only uses FOSS tools, like the GNAT compiler and the Ada Drivers Library. The project is hosted on GitHub: https://github.com/adacore/opaves



In this 25-minute talk I want to present the state of the project, explain how you can build and hack the platform, and show some of the tools and techniques we used to develop it. I will bring one of the prototypes with me and show at least a video demo, if not a live one.

Back

Rapid SPI Device Driver Development over USB

Home

Speaker Stefan Schmidt
RoomUB2.252A (Lameere)
TrackEmbedded, mobile and automotive
Time12:30 - 13:00
Event linkView original entry

On the quest for a cheap and easy way to connect some simple SPI devices to my laptop, it was surprising not to find anything suitable. The goal was to connect the SPI device to a Linux laptop over USB in order to develop an SPI kernel driver for it with a rapid development and test cycle. None of the solutions that access the SPI device over libusb in userspace would work for me: I needed an SPI master controller inside the kernel to work with the variety of devices and kernel subsystems.

On the quest for a cheap and easy way to connect some simple SPI devices to my laptop, it was surprising not to find anything suitable. The idea is neither new nor innovative, and surely something must have existed already.



Maybe the use case was too special: to connect the SPI device to a Linux laptop over USB in order to develop an SPI kernel driver for it with a rapid development and test cycle. None of the solutions that access the SPI device over libusb in userspace would work for me. I needed an SPI master controller in kernelspace to work with the variety of devices and kernel subsystems.



After some research I settled on the MCP2210 chip, with its cheap and easy-to-get development boards and an out-of-tree driver as a good start. Maybe it is also something others are looking for, and it is surely worth demonstrating and explaining.
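Independently of the MCP2210 itself, the job of any SPI master controller is to clock bytes out and in simultaneously: every transmitted byte yields a received byte. A minimal simulation of that full-duplex exchange (the toy peripheral and all names here are invented for illustration, not any real device's protocol):

```python
class ShiftRegisterSlave:
    """Toy SPI peripheral: whatever the master clocked in on the
    previous byte comes back out on the next one (a one-byte-deep
    shift register), so the first byte received is always stale."""
    def __init__(self):
        self.reg = 0x00

    def exchange(self, byte_in: int) -> int:
        out, self.reg = self.reg, byte_in
        return out

def spi_xfer(slave: ShiftRegisterSlave, tx: bytes) -> bytes:
    """Full-duplex transfer: one byte comes back for every byte sent --
    the defining property of SPI that a master driver must expose."""
    return bytes(slave.exchange(b) for b in tx)

dev = ShiftRegisterSlave()
# Probe with a command byte followed by dummy bytes to clock the
# response out, the usual SPI idiom for register/ID reads.
rx = spi_xfer(dev, b"\x9f\x00\x00")
print(rx.hex())  # prints "009f00": stale byte first, then the echo
```

This is also why userspace-over-libusb bridges fall short for driver development: kernel clients (IIO, MTD, networking...) expect exactly this synchronous transfer interface from an in-kernel SPI master.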

Back

Implementing state-of-the-art U-Boot port, 2018 edition

Home

Speaker Marek Vasut
RoomUB2.252A (Lameere)
TrackEmbedded, mobile and automotive
Time13:00 - 14:00
Event linkView original entry

This presentation is a practical guide to implementing a U-Boot bootloader port to a new system from scratch. At the beginning, the two main pillars of contemporary U-Boot, device tree (DT) support and the driver model (DM), are explained. This is followed by an in-depth look at the crucial subsystems: clock, pinmux, serial, block, and a few other commonly used ones. Finally, systems with limited resources and multi-stage booting are discussed. The talk includes examples and experiences from platforms recently added to mainline U-Boot.

This presentation is a practical guide to implementing a U-Boot port to a new system from scratch. U-Boot is the de-facto standard bootloader for embedded systems; there are plenty of U-Boot ports, yet the vast majority of them are implemented in a sub-optimal way. This talk first explains the U-Boot internals, the driver model (DM) and its interaction with the device tree (DT), as understanding these is vital to understanding the implementation of the core subsystems. The core subsystems are explained in detail afterward, to allow developers to implement drivers the intended way, without hacks and workarounds. Unfortunately, not all systems have plenty of resources, but U-Boot caters for those as well. The final part of the talk discusses the U-Boot SPL, the preloader which initializes the hardware and DRAM and starts U-Boot proper, and the finer points of this procedure, which tend to have plenty of pitfalls.
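The DM/DT interplay boils down to this: device tree nodes declare a "compatible" string, and the driver-model core binds each node to the driver that claims that string. A toy model of the matching step (this is not U-Boot's real API — there, drivers are declared with the U_BOOT_DRIVER macro and matched via of_match tables — and every name below is made up):

```python
# Registry mapping "compatible" strings to driver classes, standing in
# for the driver lists a DM-style bootloader builds at link time.
DRIVERS = {}

def dm_driver(compatible):
    """Hypothetical decorator: register a driver for its compatibles."""
    def register(cls):
        for c in compatible:
            DRIVERS[c] = cls
        return cls
    return register

@dm_driver(["myvendor,clock-ctrl"])      # invented compatible strings
class ClockDriver:
    def probe(self, node):
        return f"clock @ {node['reg']:#x}"

@dm_driver(["myvendor,serial-v1"])
class SerialDriver:
    def probe(self, node):
        return f"serial @ {node['reg']:#x}"

# A flattened stand-in for a device tree: each node carries its
# compatible string and register base address.
DEVICE_TREE = [
    {"compatible": "myvendor,clock-ctrl", "reg": 0x44E00000},
    {"compatible": "myvendor,serial-v1",  "reg": 0x44E09000},
    {"compatible": "myvendor,unknown",    "reg": 0x44E10000},  # no driver: skipped
]

def bind_all(tree):
    """Walk the tree, bind matching drivers, probe the bound devices."""
    bound = []
    for node in tree:
        drv = DRIVERS.get(node["compatible"])
        if drv:
            bound.append(drv().probe(node))
    return bound

print(bind_all(DEVICE_TREE))
```

The practical consequence for a new port is that board code shrinks to almost nothing: describe the hardware in the DT, and reuse (or write) drivers that match the compatible strings, instead of hard-coding addresses the old way.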

Back

Image capture on embedded Linux systems

Home</