ABSTRACT.
Automatically generating invariants is a fundamental challenge in
program analysis and verification. In this talk we give a selective
overview of previous work on this problem, starting with Michael Karr's
1976 algorithm for computing affine invariants in affine programs (i.e.,
programs having only non-deterministic (as opposed to conditional)
branching and all of whose assignments are given by affine
expressions). We then present the central result of the talk---an
algorithm to compute all polynomial invariants that hold at each
location of a given affine program. Our main tool is an algebraic
result of independent interest: given a finite set of rational square
matrices of the same dimension, we show how to compute the Zariski
closure of the semigroup that they generate.

This is joint work with Ehud Hrushovski, Joel Ouaknine, and Amaury Pouly.
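As a toy illustration of the objects involved (a sketch of the setting, not of the algorithm from the talk): consider the hypothetical affine loop x := x + y; y := y + 1 started at x = y = 0. The polynomial 2x - y² + y vanishes at the loop head in every iteration, which is exactly the kind of polynomial invariant the presented algorithm computes for all program locations.

```python
def run(steps):
    """Execute the affine loop  x := x + y; y := y + 1  from (0, 0)."""
    x, y = 0, 0
    states = [(x, y)]
    for _ in range(steps):
        x, y = x + y, y + 1        # affine assignments only
        states.append((x, y))
    return states

# The polynomial invariant 2x - y^2 + y = 0 holds at every loop head;
# checking it on finitely many iterations illustrates (but of course
# does not prove) the invariant.
invariant = lambda x, y: 2 * x - y * y + y == 0
assert all(invariant(x, y) for x, y in run(20))
```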

ABSTRACT.
Nondeterministic Büchi automata are one of the simplest types of finite
automata on infinite words and are successfully applied in model
checking. In some settings nondeterminism is not acceptable, but
determinisation of Büchi automata is a difficult problem with a long
history and a few different solutions. In our work we noticed
similarities in two different determinisation constructions (the
construction of Safra and the construction of Muller and Schupp) and
studied those to obtain a generalized algorithm from which both original
constructions can be recovered as special cases. More details can be
found in the attached extended abstract.

10:40

Pierre Ganty

Algorithms for inclusion into a regular language

ABSTRACT.
We revisit the classical problem of deciding whether a language (regular
or context-free) is included in a regular language. By relying on
fixpoint characterizations, dualization, and abstract interpretation, we
obtain various algorithms and relate them to existing ones.
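One classical fixpoint-based algorithm of this kind can be sketched as a least-fixpoint (reachability) computation on the product of an NFA with the complemented target DFA. The automata below are illustrative examples, not taken from the talk.

```python
def included(nfa, dfa, alphabet):
    """Decide L(nfa) ⊆ L(dfa) via reachability on the product of the NFA
    with the complement of the DFA (a least-fixpoint computation)."""
    ndelta, ninits, nfinals = nfa      # ndelta[q][a] is a set of states
    ddelta, dinit, dfinals = dfa       # ddelta[d][a] is a single state
    frontier = [(q, dinit) for q in ninits]
    seen = set(frontier)
    while frontier:
        q, d = frontier.pop()
        if q in nfinals and d not in dfinals:
            return False               # some word is accepted by nfa, not dfa
        for a in alphabet:
            for q2 in ndelta.get(q, {}).get(a, ()):
                pair = (q2, ddelta[d][a])
                if pair not in seen:
                    seen.add(pair)
                    frontier.append(pair)
    return True

# Illustrative instance over {a,b}: "words ending in a" ⊆ "words containing a".
ends_a = ({0: {'a': {0, 1}, 'b': {0}}}, {0}, {1})
contains_a = ({'s0': {'a': 's1', 'b': 's0'}, 's1': {'a': 's1', 'b': 's1'}},
              's0', {'s1'})
```

On this instance `included(ends_a, contains_a, 'ab')` holds, while the converse inclusion fails (witnessed, e.g., by the word "ab").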

10:50

Bruno Guillon

Determinizing two-way automata with linear-time Turing machines

ABSTRACT.
In 1978, Sakoda and Sipser conjectured that the simulation cost of the
determinization of two-way automata is exponential in the worst case. In
spite of all attempts the problem remains open today. In this work we
propose a new approach, in which the simulating deterministic machine
has further abilities besides the read-only capacity of two-way
automata. We prove a polynomial size-cost simulation of two-way
nondeterministic automata by deterministic weight-reducing Turing
machines, a syntactical restriction of linear-time Turing machines which
recognize regular languages only, as proved by Hennie in 1965. We then
discuss the cost of the conversion into weight-reducing
linear-bounded-automata, which constrain the former model by restricting
the space to the portion of the tape that initially contains the input
word.

This is joint work with Giovanni Pighizzini, Luca Prigioniero and Daniel Průša, presented at DLT'18.

An Optimal Construction for the Barthelmann-Schwentick Normal Form on Classes of Structures of Bounded Degree

ABSTRACT. Building on the locality conditions for first-order logic by Hanf and Gaifman,
Barthelmann and Schwentick showed in 1999 that every first-order formula is
equivalent to a formula of the shape ∃x₁…∃xₖ ∀y φ where quantification in
φ is relativised to elements of distance ≤ r from y. Such a formula will be
called Barthelmann-Schwentick normal form (BSNF) in the following.
From BSNF, Barthelmann and Schwentick obtain a local Ehrenfeucht game
and an automaton model for first-order logic. For such applications of BSNF,
as well as for possible future uses in algorithms, the size of BSNF formulae is
of interest.

Although Barthelmann and Schwentick show that for each first-order
formula, an equivalent BSNF can be computed effectively, the construction leads
to a non-elementary blow-up of the BSNF in terms of the size of the original
formula.

We show that, if equivalence on the class of all structures, or even only finite
forests, is required, this non-elementary blow-up is indeed unavoidable. We
then examine restricted classes of structures where more efficient algorithms
are possible. In this direction, we show that on any class of structures of
degree ≤ 2, BSNF can be computed in 2-fold exponential time with respect to
the size of the input formula. And for any class of structures of degree ≤ d
for some d ≥ 3, this is possible in 3-fold exponential time. For both cases, we
provide matching lower bounds.

The presented work is joint work with Lucas Heimberg and has been submitted to CSL 2018.

10:40

Amazigh Amrane

Logic and rational languages of scattered and countable series-parallel posets

ABSTRACT.
Since it was established by Büchi, Elgot and Trakhtenbrot in the early
60's, the connection between automata theory and formal logic has been
intensively developed in many directions. An important application is
decision algorithms for logical theories. Since those pioneering works, the
compositional method introduced by Shelah in 1975 gave a more general
framework than automata theory for deciding logical theories. However,
automata theory has a lot of other usages, for example the
characterisation of fragments of logics. Among the wide range of results
in this direction, let us cite for example McNaughton, Papert and
Schützenberger: a formula of the monadic second-order logic (MSO) is
effectively equivalent to a formula of its first-order fragment if and
only if its associated finite monoid is group-free. Since then, many
notions of automata have been introduced: automata on trees, on linear
orderings, etc. In this work, we focus on automata over the class of
transfinite N-free posets. They are a generalisation of branching
automata over finite N-free posets, or equivalently series-parallel
posets, and of automata over scattered and countable linear orderings.
We give an effective logical characterisation of the class of languages
recognised by such automata. The logic involved is an extension of MSO
with Presburger arithmetic, named P-MSO. The effective construction of
an automaton from a formula of P-MSO is a mere adaptation of the case of
finite words. The converse however is much less usual. Starting from a
rational expression (effectively equivalent to automata), we build a
graph, that we parse and transform into a formula of P-MSO. In the
parsing of the graph, recursion is avoided by an MSO-expressible
identification of particular factors of posets.

10:50

Edon Kelmendi

Expressing Unboundedness in MSO + nabla

ABSTRACT.
We consider an extension of MSO on the infinite binary tree. The
extension adds a probabilistic path quantifier that says that with
probability 1 a branch is chosen such that the formula holds in this
branch. The probability measure that is used is the coin-flipping
measure. This added quantifier properly extends MSO, but we do not know
whether this logic has a decidable satisfiability problem. Here we make
progress towards showing that the satisfiability problem is undecidable by
proving that the logic is powerful enough to be able to express
unboundedness of counters.

11:00

Marcin Przybyłko

On Computing the Measures of First-Order Definable Sets of Trees

ABSTRACT.
We consider the problem of computing the measure of a regular language
of infinite binary trees. While the general case remains unsolved, we
show that the measure of a language defined by a first-order formula
with no descendant relation or by a Boolean combination of conjunctive
queries (with descendant relation) is rational and computable.
Additionally, we provide an example of a first-order formula that uses
descendant relation and defines a language having an irrational measure.
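For a depth-bounded (conjunctive-query-like) property, the rationality of the measure is easy to see: the coin-flipping measure is an average over the labelings of a finite prefix of the tree. The snippet below is only an illustration of this point, not the computation method of the talk.

```python
from fractions import Fraction
from itertools import product

def measure_depth2(prop):
    """Exact coin-flipping measure of a tree property that only inspects
    the root and its two children, by averaging over all {a,b}-labelings
    of that finite prefix."""
    labelings = list(product('ab', repeat=3))   # (root, left, right)
    hits = sum(1 for lab in labelings if prop(*lab))
    return Fraction(hits, len(labelings))

# Measure of "root and both children are labeled a" is the rational 1/8.
m = measure_depth2(lambda root, left, right: root == left == right == 'a')
```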

Solving parity games in quasi-polynomial time -- a logician's approach

ABSTRACT.
Last year was a good year for parity games. After over 25 years of
research on algorithms to solve parity games, the first quasi-polynomial
solution was published by Calude and colleagues. Later the same year,
Jurdziński and Lazic published an independent proof of the same result,
based on a drastic improvement of the classic progress measure
algorithms.

Here I propose yet another independent algorithm for solving parity
games in quasi-polynomial time, this time using methods from logic and
descriptive complexity.

This algorithm is based on a new bisimulation invariant measure of
complexity for parity games, called the register-index, which aims to
capture the complexity of the priority assignment. Like entanglement,
register-index is defined in game-theoretic terms, using a parametrised
k-register game that is played on a parity game arena. For fixed k,
parity games of register-index up to k are solvable in polynomial time.
This adds parity games of bounded register-index to the classes of
parity games with a known polynomial-time solution. Notably, even the
class of games of register-index 1 contains non-trivial parity games of
arbitrarily high entanglement, tree-width, Rabin-index, Kelly-width,
DAG-width and arbitrarily many priorities, as well as families of games
known to exhibit worst-case performance for recursive, strategy
improvement and progress-measure based algorithms.

I show that the register-index of parity games of size n is bounded by O(log n) and derive a quasi-polynomial algorithm.

Although the runtime of this algorithm does not improve over existing
quasi-polynomial algorithms, it allows us to relate the
quasi-polynomial solvability of parity games to their descriptive
complexity. Indeed, the winning regions of parity games of
register-index up to k and of maximal priority p can be described by a
modal mu-calculus formula whose complexity -- as measured by its
alternation depth -- depends on k, rather than p.

This talk is based on work accepted for publication at LICS 2018.

10:40

Paweł Parys

Universal trees grow inside separating automata: Quasi-polynomial lower bounds for parity games

ABSTRACT.
Several distinct techniques have been proposed to design
quasi-polynomial algorithms for solving parity games since the
breakthrough result of Calude, Jain, Khoussainov, Li, and Stephan
(2017): play summaries, progress measures and universal trees, and
register games. We argue that all those techniques can be viewed as
instances of the separation approach to solving parity games, a key
technical component of which is constructing (explicitly or implicitly)
an automaton that separates languages of words encoding plays that are
(decisively) won by either of the two players. Our main technical result
is a quasi-polynomial lower bound on the size of such separating
automata that nearly matches the current best upper bounds. This forms a
barrier that all existing approaches must overcome in the ongoing quest
for a polynomial-time algorithm for solving parity games. The technical
highlight is a proof that every separating safety automaton has a
universal tree hidden in its state space; our lower bound then follows
by establishing a quasi-polynomial lower bound for universal trees.

10:50

Thomas Colcombet

Universal parity graphs and lower bounds for parity games

ABSTRACT. This is a collaboration with Nathanaël Fijalkow.

Fix a number of priorities d. A "parity graph" is a (d-)parity
labelled graph such that all cycles have an even maximal priority (said
differently, a parity game played by Adam only, in which Eve wins). It
is "n-universal" if it contains all size-n parity graphs as subgraphs.

In this talk, I will explain how these universal parity graphs
provide neat explanations of all the subexponential algorithms for
solving parity games in the literature. In particular, one can find a
universal parity graph in the structure of each of these algorithms, and
conversely, one can devise a variant of the algorithm for each choice of
a universal parity graph.
As a consequence, it explains the recent result of Czerwinski, Fijalkow,
Jurdzinski, Lazic and Parys, and shows that all these algorithms face
the same difficulty if one attempts further optimization.

11:00

Tom van Dijk

Solving parity games with tangles

ABSTRACT. Parity games are well known for their applications in formal verification and synthesis, especially to solve
both the model-checking and synthesis problems of the modal mu-calculus and related logics like LTL.
We have published two novel contributions to this field in the past year and are working on a third.
This presentation is based on publications at TACAS'2018 and CAV'2018 containing the following contributions.

Oink is a new implementation of parity game solvers much like the well known PGSolver, but it has a much
improved practical performance and we use Oink to perform a new modern comparison of parity game solvers.
The tool is designed for easy integration with other toolchains and for easy replication of research results.

We propose a new algorithm to solve parity games called tangle learning. The idea is that all algorithms for
parity games explore so-called "tangles" in the parity game and they often repeat exploring the same tangle
again and again, which can lead to exponential runtimes. The insight ("highlight") of the tangle learning algorithm
is that these tangles can be combined with the attractor computation and then we can use this with a
memoization strategy ("learning") to never explore the same tangle twice. The tangles are very much
related to another concept called "distractions" that we're currently developing further, and we can show
that the interaction of tangles and distractions drives parity game difficulty.

ABSTRACT.
In this talk I describe how a combination of symbolic computation
techniques with first-order theorem proving can be used for solving some
challenges of automating program analysis, in particular for generating
and proving properties about the logically complex parts of software. I
will first present how computer algebra methods, such as Groebner basis
computation and symbolic summation help us in inferring properties of
program loops with non-trivial arithmetic. I will then further extend
our work to generate first-order properties of programs with unbounded
data structures, such as arrays. For doing so, I will use
saturation-based first-order theorem proving and extend first-order
provers with support for program analysis. I will review some of
our recent results on Craig interpolation and proving properties in the
full first-order theories of data structures. Our work is implemented in
the Vampire theorem prover and successfully applied on benchmarks from
the software verification and automated reasoning communities.

The Automatic Data Scientist: Making Data Science Easier using High-level Languages, Fractional Automorphisms, and Arithmetic Circuits

ABSTRACT.
Our minds make inferences that appear to go far beyond standard
data science. Whereas people can learn richer representations and
use them for a wider range of learning tasks, machine learning
algorithms have been mainly employed in a stand-alone context,
constructing a single function from a table of training examples.
In this talk, I shall touch upon a view on data science that can
help capture these human learning aspects by combining
high-level languages and databases with statistical learning,
optimisation, and deep learning. High-level features such as
relations, quantifiers, functions, and procedures provide
declarative clarity and succinct characterisations of the data
science problem at hand. This helps reduce the cost of modelling
and solving it. Putting probabilistic deep learning into the data
science stack, it even paves the way towards one of my dreams,
the automatic data scientist — an AI that makes data analysis
and reporting accessible to a broader audience of non-data
scientists.

This talk is based on joint works with many people such as Carsten
Binnig, Martin Grohe, Zoubin Ghahramani, Martin Mladenov, Alejandro
Molina, Sriraam Natarajan, Robert Peharz, Cristopher Re, Karl
Stelzner, Martin Trapp, Isabel Valera, and Antonio Vergari,
among others.

14:20

Daniel Neider

Learning Linear Temporal Properties

ABSTRACT.
Making sense of the observed behavior of complex systems is an
important problem in practice. It arises, for instance, in debugging
(especially in the context of distributed systems), reverse engineering
(e.g., of malware and viruses), specification mining for formal
verification, and modernization of legacy systems, to name but a few
examples.

To help engineers understand the dynamic (i.e., temporal) behavior of
complex systems, we develop algorithms to learn linear temporal
properties from data. More precisely, the problem we address in this
talk is the following: given two disjoint, finite sets of infinite
words, representing positive and negative examples, construct a
(minimal) LTL formula such that all positive examples satisfy the
formula, while all negative examples do not. As the resulting formulas
are meant to be read and understood by humans, our learning algorithms
are designed to learn minimal LTL formulas, or at least "small" formulas
with a "simple" structure. We also discuss interesting directions for
future work.
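A brute-force version of this learning problem fits in a few lines: enumerate formulas by size and return the first one consistent with the examples. Here infinite words are given as ultimately periodic lassos u·v^ω; the LTL fragment, the encoding, and the examples are illustrative assumptions, not the algorithms from the talk.

```python
def sat(f, u, v):
    """Positions of the lasso word u·v^ω (v nonempty) where formula f holds.
    Formulas: a letter such as 'a', or tuples ('not',f), ('and',f,g),
    ('X',f), ('F',f), ('G',f)."""
    w, n = u + v, len(u) + len(v)
    succ = lambda i: i + 1 if i + 1 < n else len(u)
    loop = set(range(len(u), n))
    reach = lambda i: set(range(i, n)) | loop      # positions visited from i
    def ev(f):
        if isinstance(f, str):
            return {i for i in range(n) if w[i] == f}
        op, s = f[0], ev(f[1])
        if op == 'not': return set(range(n)) - s
        if op == 'and': return s & ev(f[2])
        if op == 'X':   return {i for i in range(n) if succ(i) in s}
        if op == 'F':   return {i for i in range(n) if reach(i) & s}
        if op == 'G':   return {i for i in range(n) if reach(i) <= s}
    return ev(f)

def formulas(size, atoms):
    """All formulas with exactly `size` syntax-tree nodes."""
    if size == 1:
        yield from atoms
        return
    for g in formulas(size - 1, atoms):
        for op in ('not', 'X', 'F', 'G'):
            yield (op, g)
    for k in range(1, size - 1):
        for g in formulas(k, atoms):
            for h in formulas(size - 1 - k, atoms):
                yield ('and', g, h)

def learn(pos, neg, atoms, max_size=6):
    """Smallest formula holding on every positive lasso and on no negative one."""
    for size in range(1, max_size + 1):
        for f in formulas(size, atoms):
            if all(0 in sat(f, u, v) for u, v in pos) and \
               all(0 not in sat(f, u, v) for u, v in neg):
                return f
```

For instance, with a^ω as the positive example and (ab)^ω, (ba)^ω as negatives, the minimal consistent formula found is ('G', 'a').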

14:50

Dan Olteanu

Learning Models over Relational Databases

ABSTRACT.
In this talk I will overview recent results on learning classification
and regression models over training datasets defined by feature
extraction queries over relational databases. I will show that the
complexity of this task can be connected with known notions of widths
that measure the complexity of relational queries, such as the
fractional hypertree width. This complexity can be much lower than that
of the state-of-the-art approach that first materialises the training
dataset. Recent joint work with collaborators from industry shows that
this approach can speed up real analytical workloads by several orders
of magnitude over state-of-the-art systems.

I will also highlight on-going work on linear algebra over databases and point out exciting directions for future work.

This work is based on long-standing collaboration with Maximilian
Schleich and Jakub Zavodny and more recent collaboration with Mahmoud
Abo-Khamis, Hung Ngo, and XuanLong Nguyen.
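The gap between the two approaches is visible on a toy instance (hypothetical relations, not data from the talk): an aggregate needed for, say, least-squares regression over a join can be pushed through the join instead of materializing the training dataset first.

```python
from collections import defaultdict

# Hypothetical relations R(key, x) and S(key, y).
R = [(1, 2.0), (1, 3.0), (2, 5.0)]
S = [(1, 10.0), (2, 1.0), (2, 4.0)]

def sum_xy_materialized(R, S):
    """Baseline: materialize the join R ⋈ S, then aggregate SUM(x*y)."""
    return sum(x * y for kr, x in R for ks, y in S if kr == ks)

def sum_xy_factorized(R, S):
    """Factorized: aggregate each relation per key first; linear in |R|+|S|
    rather than in the (possibly much larger) size of the join."""
    sx, sy = defaultdict(float), defaultdict(float)
    for k, x in R: sx[k] += x
    for k, y in S: sy[k] += y
    return sum(sx[k] * sy[k] for k in sx.keys() & sy.keys())
```

Both functions return the same aggregate; only the second avoids enumerating the join.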

Definable Ellipsoid Method, Sums-of-Squares Proofs, and the Graph Isomorphism Problem

ABSTRACT.
I will discuss several mathematical programming relaxations of the
graph isomorphism problem: the Sherali-Adams hierarchy of LP
relaxations, its strengthening via the Lasserre hierarchy of SDP
relaxations, and its relaxation via Groebner basis computations. I will
present a result which completes a full cycle of implications to show
that, for the graph isomorphism problem, the relative strength of those
relaxations is the same, up to a small loss in the level of the
hierarchy. This is joint work with Albert Atserias.

16:00

Oleg Verbitsky

On the First-Order Complexity of Subgraph Isomorphism

ABSTRACT. Let F be a fixed pattern graph with k vertices.
The Subgraph Isomorphism problem asks whether a given host graph G
contains a subgraph isomorphic to F. In the induced version of the problem,
the question is whether G has an induced copy of F.
Each of the two problems can be expressed by a first-order sentence
of quantifier depth k. We investigate the question whether this can
be done more succinctly with respect to the quantifier depth and
the variable width. A more detailed 2-page abstract is uploaded
as a pdf file.

This is joint work with Maksim Zhukovskii.

16:10

Sandra Kiefer

Distinguishing Graphs via Logics with Counting

ABSTRACT.
The Weisfeiler-Leman algorithm is a combinatorial procedure that plays a
crucial role both in theoretical and practical research on the graph
isomorphism problem. For every k there is a k-dimensional version of the
algorithm, which iteratively refines a partition of the set of k-tuples
of vertices of the input graph. Two graphs are non-isomorphic if (but
not only if) their final partitions computed by the algorithm differ.

In this talk, I will give a very brief introduction to the mechanisms
of the Weisfeiler-Leman algorithm and present some of our results:

We have studied the number of iterations which the algorithm takes to
stabilize. We have found new upper bounds for the iteration number of
the 2-dimensional algorithm as well as, more recently, a tight lower bound
for the 1-dimensional algorithm, which I will discuss in the talk. By a
famous result established by Cai, Fürer and Immerman, a graph is
identified by the k-dimensional Weisfeiler-Leman algorithm if and only
if it is definable in C^(k+1), first-order logic with counting
restricted to k + 1 variables. Via this correspondence, the number of
iterations the algorithm needs to distinguish two graphs corresponds to
the quantifier depth of the corresponding counting logic.
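For concreteness, the 1-dimensional version (colour refinement) fits in a few lines; this is a plain sketch, not the implementation behind the results above. The "if but not only if" caveat shows up already on small graphs: a 6-cycle and two disjoint triangles receive identical colourings.

```python
from collections import Counter

def color_refinement(adj):
    """1-dimensional WL: iteratively refine vertex colours by the multiset
    of colours of their neighbours until the partition stabilizes."""
    colours = {v: 0 for v in adj}
    while True:
        sig = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colours:
            return colours
        colours = new

def wl_distinguishes(adj1, adj2):
    """Sound but incomplete non-isomorphism test: run 1-WL on the disjoint
    union and compare the colour histograms of the two halves."""
    union = {(0, v): [(0, u) for u in nb] for v, nb in adj1.items()}
    union.update({(1, v): [(1, u) for u in nb] for v, nb in adj2.items()})
    col = color_refinement(union)
    hist = lambda g: sorted(Counter(c for (t, _), c in col.items() if t == g).items())
    return hist(0) != hist(1)
```

For example, a 6-cycle and the disjoint union of two triangles are not distinguished (both are 2-regular), while a 6-cycle and a 6-vertex path are.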

16:20

Svetlana Popova

Non-convergence in existential monadic second order logic of random graphs

ABSTRACT.
We study existential monadic second order (EMSO) properties of
undirected graphs. In 2001, J.-M. Le Bars proved that, for EMSO,
zero-one fails. Formally, there exists an EMSO sentence about undirected
graphs such that the probability that a random graph satisfies it
doesn't converge (here, the probability distribution is uniform over the
set of all graphs on the labeled n-set of vertices). In the same paper,
he conjectured that, for EMSO sentences with 2 first order (FO)
variables, the zero-one law holds (every sentence is either true
asymptotically almost surely (a.a.s.), or false a.a.s.). We disprove
this conjecture. In particular, we give an example of an EMSO sentence with
2 monadic variables and FO quantifier depth 2 without convergence.
Moreover, this sentence can be rewritten using 1 monadic variable and 2
FO variables. We also consider the sparse binomial random graph (all
edges appear independently with probability n^{-α},
0 < α < 1) and prove that the EMSO zero-one law fails for this random
graph as well. The history of the problem, formal definitions, and the
proofs are collected in the attached pdf. This is joint work with Maksim
Zhukovskii.

Regular Transducer Expressions for Regular Transformations

ABSTRACT. Functional MSO transductions, deterministic two-way transducers, as well as
streaming string transducers are all equivalent models for regular functions.
In this paper, we show that every regular function, either on finite words or on
infinite words, captured by a deterministic two-way transducer, can be described
with a regular transducer expression (RTE). For infinite words, the
transducer uses Muller acceptance and ω-regular look-ahead.
RTEs are constructed from constant functions using the combinators if-then-else
(deterministic choice), Hadamard product, and unambiguous versions of the Cauchy
product, the 2-chained Kleene iteration and the 2-chained ω-iteration. Our
proof works for transformations of both finite and infinite words, extending the
result on finite words of Alur et al. in LICS'14.
In order to construct an RTE associated with a deterministic two-way Muller
transducer with look-ahead, we introduce the notion of transition monoid for
such two-way transducers where the look-ahead is captured by some backward
deterministic Büchi automaton. Then, we use an unambiguous version of Imre
Simon's famous forest factorization theorem in order to derive a "good"
(ω-)regular expression for the domain of the two-way transducer.
"Good" expressions are unambiguous, and Kleene-plus as well as
ω-iterations are only used on subexpressions corresponding to
idempotent elements of the transition monoid. The combinator expressions
are finally constructed by structural induction on the "good"
(ω-)regular expression describing the domain of the transducer.

16:00

Mikolaj Bojanczyk

Regular and First Order List Functions

ABSTRACT.
We define two classes of functions, called regular (respectively,
first-order) list functions, which manipulate objects such as lists,
lists of lists, pairs of lists, lists of pairs of lists, etc. The
definition is in the style of regular expressions: the functions are
constructed by starting with some basic functions (e.g. projections from
pairs, or head and tail operations on lists) and putting them together
using four combinators (most importantly, composition of functions). Our
main results are that first-order list functions are exactly the same
as first-order transductions, under a suitable encoding of the inputs;
and the regular list functions are exactly the same as
MSO-transductions.

A paper at LICS 2018, joint with Laure Daviaud and Krishna Shankara Narayanan
https://arxiv.org/abs/1803.06168
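In the same combinator spirit (a loose Python sketch with illustrative names, not the calculus from the paper), a few basic functions plus composition and a map combinator already express useful list functions:

```python
from functools import reduce

# Basic functions (illustrative names, not the paper's):
head = lambda xs: xs[0]     # first element of a list
tail = lambda xs: xs[1:]    # list without its first element

def compose(*fs):
    """Combinator: function composition, applied right to left."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

def each(f):
    """Combinator: apply f to every element of a list."""
    return lambda xs: [f(x) for x in xs]

# Example list functions built only from the pieces above:
second = compose(head, tail)       # second element of a list
drop_heads = each(tail)            # drop the first element of every inner list
```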

16:10

Maria Emilia Descotte

Resynchronizing Classes of Word Relations

ABSTRACT.
A natural approach to define binary word relations over a finite
alphabet A is through two-tape finite state automata that recognize
regular languages over {1,2}xA, where (i,a) is interpreted as reading
letter a from tape i. Accordingly, a word w in a language L denotes the
pair (u,v) in A*xA* in which u is the projection of w onto 1-labelled
letters and v is the projection of w onto 2-labelled letters.
While this formalism defines the well-studied class of Rational
relations (a.k.a. non-deterministic finite state transducers), enforcing
restrictions on the reading regime from the tapes, which we call
"synchronization", yields various sub-classes of relations. Such
synchronization restrictions are imposed through regular properties on
the projection of the language onto {1,2}. In this way, for each regular
language C contained in {1,2}*, one obtains a class Rel(C) of relations.
Regular, Recognizable, and length-preserving rational relations are all
examples of classes that can be defined in this way.

We study the problem of containment for synchronized classes
of relations: given C,D regular languages contained in {1,2}*, is
Rel(C) a subset of Rel(D)? We show a
characterization in terms of C and D which gives
a decidability procedure to test for class inclusion.
This also yields a procedure to re-synchronize languages
from {1,2}xA preserving the denoted relation
whenever the inclusion holds. This is joint work with D. Figueira and G. Puppis.
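In this encoding, recovering the pair denoted by a word over {1,2}×A is just a pair of projections, and the synchronization is the projection onto {1,2}. A small illustration (with a list of (tape, letter) pairs standing in for the product alphabet):

```python
def decode(w):
    """Map a word over {1,2}×A, given as (tape, letter) pairs, to the
    pair (u, v) of words it denotes."""
    u = ''.join(a for i, a in w if i == 1)
    v = ''.join(a for i, a in w if i == 2)
    return u, v

def synchronization(w):
    """Projection onto {1,2}: the datum on which classes Rel(C) are based."""
    return ''.join(str(i) for i, _ in w)
```

For instance, the word (1,a)(2,b)(1,c) denotes the pair (ac, b) and has synchronization 121; length-preserving relations correspond to the choice C = (12)*.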

ABSTRACT.
Quantitative extensions of parity games have recently attracted
significant interest.
These extensions include parity games with energy and payoff conditions
as well as finitary parity games and their generalization to parity
games with costs.
Finitary parity games enjoy a special status among these extensions, as
they offer a native combination of the qualitative and quantitative
aspects in infinite games:
The quantitative aspect of finitary parity games is a quality measure
for the qualitative aspect, as it measures the limit superior of the
time it takes to overwrite an odd color by a larger even one.
Finitary parity games have been extended to parity games with costs,
where each transition is labelled with a non-negative weight that
reflects the costs incurred by taking it.
We lift this restriction and consider parity games with costs with
arbitrary integer weights.
We show that solving such games is in NP ∩ coNP, the signature
complexity for games of this type.
We also show that the protagonist has finite-state winning strategies,
and provide tight exponential bounds for the memory he needs to win the
game.
Naturally, the antagonist may need infinite memory to win.
Finally, we present tight bounds on the quality of winning strategies
for the protagonist.

16:00

Laure Daviaud

Solving mean-payoff parity games in pseudo-quasi-polynomial time.

ABSTRACT.
In a mean-payoff parity game, one of the two players aims both to
achieve a qualitative parity objective and to minimize a quantitative
long-term average of payoffs (aka. mean payoff). The game is zero-sum
and hence the aim of the other player is to either foil the parity
objective or to maximize the mean payoff.

Our main technical result is a pseudo-quasi-polynomial algorithm for
solving mean-payoff parity games. All algorithms for the problem that
have been developed for over a decade have pseudo-polynomial and
exponential factors in their running times; in the running time of our
algorithm the latter is replaced with a quasi-polynomial one.

Our main conceptual contributions are the definitions of strategy
decompositions for both players, and a notion of progress measures for
mean-payoff parity games that generalizes both parity and energy
progress measures. The former provides normal forms for and succinct
representations of winning strategies, and the latter enables the
application to mean-payoff parity games of the order-theoretic machinery
that underpins a recent quasi-polynomial algorithm for solving parity
games.

This is joint work with Marcin Jurdzinski and Ranko Lazic.

16:10

Thomas Zeume

Introduction to Iltis: An Interactive, Web-Based System for Teaching Logic

ABSTRACT.
The Iltis project aims at providing a web-based, interactive system
that supports teaching logical methods. In particular the system shall
(a) support learning to model knowledge and to infer new knowledge using
propositional logic, modal logic and first-order logic, and (b) provide
immediate feedback and support to
students. The current prototype supports the above tasks for
propositional logic.

In this talk I will present the status of the project, outline
arising theoretical questions, and report on first impressions on using
the prototype in a second year logic course for computer science
students.

The talk is based on joint work with Gaetano Geck, Artur Ljulin, Sebastian Peter, Jonas Schmidt, and Fabian Vehlken.

16:20

Roman Rabinovich

Gralog, a visual tool for working with graphs, games, automata, logics

ABSTRACT.
We want to present Gralog, a completely new and advanced version of a
visual tool for computer scientists and graph theorists for their
everyday work with graphs and similar objects and for teaching.

A mixture of a GUI and quick-to-learn terminal-like commands
allows one to easily generate ("add K(n)/Grid(m,n)/Path(n)...") and
change graphs, automata, games or Kripke structures (e.g. contracting
subgraphs to a single vertex, subdividing edges, changing kinds of
positions in games) or read them in any usual format. The picture can be
exported to Tikz in a human oriented, easy to edit form. From the menu
one can call some algorithms usually presented in basic computer science
courses and apply them step-wise to the graph, for example the simple
FO-model-checking algorithm or the breadth-first search.

Furthermore, it is possible to bind one's own programme (written in
any language that supports piping from and to standard input/output,
say, in Python) to Gralog to let it illustrate the steps of the
algorithm. For this one writes in the "Python"-algorithm commands like
"get graph from Gralog", "pause" (a kind of a breakpoint),
"display/track variable X", "insert/delete/color vertex/edge" and so on.
One can even use the graph data structure of Gralog (in Python!) to
concentrate on the actual algorithm. This should allow the scientist to
implement her algorithms in an easy and quick way. Gralog takes care of
maintaining graph data structures, graph formats and the visualisation
of algorithm steps. It should also be convenient to debug programmes on
graphs using Gralog.

We understand that this is a rather unusual submission for
Highlights, but we believe that Gralog will be interesting and
useful in the community of Highlights.

Efficient and Modular Coalgebraic Partition-Refinement

ABSTRACT.
This talk presents joint work with U. Dorsch, S. Milius and L. Schröder
on a generic algorithm combining the following aspects:

(1) Partition refinement: In a state-based system, partition
refinement identifies all states that exhibit the same behaviour. The
notion of "same behaviour" of course depends heavily on the concrete
type of system at hand, and is thus a parameter, in the form described
next.

(2) Coalgebras for a fixed endofunctor H model state-based systems
with transition structure according to H. E.g., H-coalgebras for HX = 2 ×
X^A are deterministic automata, whereas for HX = Powerset(A × X), the
H-coalgebras are labelled transition systems. With assumptions on H, we
obtain:

(3) Efficiency: Given an H-coalgebra with n states and m edges, the
generic algorithm runs in O((m + n) · log n). Even though the
assumptions on H are restrictive, our algorithm minimizes transition
systems and weighted systems (with possibly negative weights) in O((m +
n) · log n), and deterministic finite automata (for a fixed alphabet) in
O(n · log n), matching the respective best known algorithms for these
systems.

(4) Modularity: existing transition structures can be combined by
sequencing, products, or coproducts. Hence, the generic algorithm
handles labelled transition systems and deterministic finite automata
with non-fixed alphabet slightly more slowly than existing specific
implementations, but for Segala systems we obtain a new minimization
method in O((m + n) · log(m + n)) which is slightly faster than the best
known specific implementation.
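The O(n · log n) bound quoted for deterministic finite automata is the classical Hopcroft bound. As a point of comparison, a minimal self-contained sketch of Hopcroft's partition refinement for plain DFAs (not the coalgebraic generalisation of the paper) might look as follows:

```python
def hopcroft_partition(states, alphabet, delta, accepting):
    """Classical Hopcroft partition refinement for a complete DFA.
    delta maps (state, symbol) -> state. Returns the list of
    equivalence classes (sets) of behaviourally equivalent states."""
    final = set(accepting) & set(states)
    rest = set(states) - final
    partition = [block for block in (final, rest) if block]
    # Work-list of candidate splitters; always re-inserting only the
    # smaller half of a split block is what yields the O(n log n) bound.
    worklist = [min(partition, key=len)] if len(partition) == 2 else list(partition)
    # Pre-compute inverse transitions: preimage[(q, a)] = {p : delta(p, a) = q}.
    preimage = {(q, a): set() for q in states for a in alphabet}
    for p in states:
        for a in alphabet:
            preimage[(delta[(p, a)], a)].add(p)
    while worklist:
        splitter = worklist.pop()
        for a in alphabet:
            # States that reach the splitter in one a-step.
            x = set().union(*(preimage[(q, a)] for q in splitter))
            refined = []
            for block in partition:
                inside, outside = block & x, block - x
                if inside and outside:
                    refined += [inside, outside]
                    if block in worklist:
                        worklist.remove(block)
                        worklist += [inside, outside]
                    else:
                        worklist.append(min(inside, outside, key=len))
                else:
                    refined.append(block)
            partition = refined
    return partition
```

For example, two states whose outgoing transitions agree on every symbol and that agree on acceptance end up in the same class, so the quotient DFA on the returned classes is the minimal automaton.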

This talk presents the results of a CONCUR 2017 paper and an extended
journal version currently under review for the associated special
issue.

16:50

Julian Salamanca

Abstract MSO languages over monads

ABSTRACT.
The algebraic approach to automata and recognizable languages presented
in [2], together with the use of monads as a natural categorical
generalization of universal algebra, has led to the study of algebraic
language theory from a categorical point of view. On the other hand, the
logical characterization of regular languages as the MSO-definable
languages, see, e.g., [3], has inspired the possibility of developing an
abstract categorical framework for MSO [1].
In this presentation, I will summarize some of the first steps towards
the study of abstract MSO languages for algebras over a monad.
Definitions and properties for an abstract MSO framework are inspired by
the classical case of words and monoids and its corresponding proofs.
Such properties are stated in a categorical setting in order to restrict
to the kind of monads that, in some sense, allow an abstract study of
MSO languages. This is ongoing joint work with Mikolaj Bojanczyk and
Bartek Klin.

References
[1] M. Bojanczyk. Recognisable languages over monads. CoRR, abs/1502.04898, 2015.
[2] J. Mezei and J. Wright. Algebraic automata and context-free sets. Information and Control, 11(1):3–29, 1967.
[3] W. Thomas. Languages, Automata, and Logic, pages 389–455. Springer, Berlin, Heidelberg, 1997.

17:00

Henning Urbat

Eilenberg Theorems for Free

ABSTRACT.
Algebraic language theory investigates the behaviors of finite machines
by relating them to finite algebraic structures. For instance, regular
languages - the behaviours of finite automata - can be described purely
algebraically as the languages recognized by finite monoids, and a
fundamental relation between these concepts is given by Eilenberg’s
variety theorem: varieties of languages (classes of regular languages
closed under boolean operations, derivatives, and preimages of monoid
morphisms) correspond bijectively to pseudovarieties of monoids (classes
of finite monoids closed under homomorphic images, submonoids, and
finite products). This together with Reiterman's theorem, which
characterizes pseudovarieties as the classes of monoids presentable by
profinite equations, establishes a firm connection between automata,
algebra, topology, and logic. Dozens of extensions of these results are
known, which consider either notions of varieties with modified closure
properties, or other types of languages such as omega-regular languages,
tree languages or cost functions.

In this talk, I will outline a uniform category theoretic approach to
algebraic language theory that encompasses all the above work. The key
idea is to

(1) model languages and the algebraic structures recognizing them by a monad T on some algebraic category D;

(2) interpret profinite equations and Reiterman's theorem by means of
a codensity monad induced by T, called the profinite monad; and

(3) interpret the algebraic recognition of languages in terms of a
second algebraic category C that dualizes to D on the level of finite
algebras.

As a result, we obtain an Eilenberg-Reiterman correspondence which
establishes a bijective correspondence between varieties of
T-recognizable languages, pseudovarieties of T-algebras, and profinite
equational theories. The classical case of regular languages is
recovered by taking the categories C = boolean algebras and D = sets,
and the free-monoid monad on D. Likewise, by varying the parameters,
almost all Eilenberg-Reiterman correspondences known in the literature
emerge as special cases of the categorical framework. In addition, a
number of new results come for free, including a variety theorem for
data languages.

The presentation is based on the recent paper:

H. Urbat, J. Adámek, L.-T. Chen, S. Milius: Eilenberg Theorems for Free.
Proc. 42nd International Symposium on Mathematical Foundations of Computer Science (2017).
EATCS Best Paper Award.
LIPIcs, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

17:10

Paul Brunet

Bracket Algebras: a nominal theory of interleaved scopes

ABSTRACT.
Nominal techniques have received a lot of attention in recent years, as
they describe in a simple yet precise manner phenomena related to
variable renaming, like alpha-equivalence, scopes, and locality.
However, they seem to suffer from two shortcomings: they are not yet
able to handle interleaved scopes, and they do not apply well to
relational semantics of programs.
In this talk I will describe an ongoing project to provide an algebraic
treatment of the semantics of programming languages which explicitly
contain allocation and de-allocation binders, inspired by previous
work of Gabbay, Ghica and Petrişan. I will present the initial setup,
defining a notion of alpha-equivalence of traces, and briefly describe
how this may be extended to regular languages. I will present some
decidability results we obtained, and hint at future investigations.
Most of these developments have been formalised in Coq.
This is joint work with Alexandra Silva and Daniela Petrisan.

ABSTRACT.
We introduce a logic to express structural properties of automata with
string inputs and, possibly, outputs in some monoid. In this logic, the
set of predicates talking about the output values is parametric, and we
provide sufficient conditions on the predicates under which the
model-checking problem is decidable. We then consider three particular
automata models (finite automata, transducers and automata weighted by
integers) and instantiate the generic logic for each of them. We give
tight complexity results for the three logics with respect to the
model-checking problem, depending on whether the formula is fixed or
not. We study the expressiveness of our logics by expressing classical
structural patterns characterising, for instance, finite ambiguity and
polynomial ambiguity in the case of finite automata, and
determinisability and finite-valuedness in the case of transducers and
automata weighted by integers. As a consequence of our complexity
results, we directly obtain that these classical properties can be
decided in polynomial time.

16:50

Marie Fortin

It Is Easy to Be Wise After the Event: Communicating Finite-State Machines Capture First-Order Logic with "Happened Before"

ABSTRACT.
Message sequence charts (MSCs) naturally arise as executions of
communicating finite-state machines (CFMs), in which finite-state
processes exchange messages through unbounded FIFO channels. We study
the first-order logic of MSCs, featuring Lamport's happened-before
relation. Our main result states that every first-order sentence can be
transformed into an equivalent CFM. This answers an open question and
settles the exact relation between CFMs and fragments of monadic
second-order logic. As a byproduct, we obtain self-contained normal-form
constructions for first-order logic over MSCs (and, therefore, over
words). In particular, we show that first-order logic over MSCs has the
three-variable property.

This is joint work with Benedikt Bollig and Paul Gastin, accepted at CONCUR 2018.

17:00

Lorenzo Clemente

Binary reachability of timed pushdown automata via quantifier elimination and cyclic order atoms

ABSTRACT.
We study an expressive model of timed pushdown automata extended with
modular and fractional clock constraints. We show that the binary
reachability relation is effectively expressible in hybrid linear
arithmetic with a rational and an integer sort. This subsumes analogous
expressibility results previously known for finite and pushdown timed
automata with untimed stack. As key technical tools, we use quantifier
elimination for a fragment of hybrid linear arithmetic and for cyclic
order atoms, and a reduction to register pushdown automata over cyclic
order atoms.
This is joint work with S. Lasota.

17:10

Vincent Michielini

Uniformization Problem for Variants of First Order Logic over Words

ABSTRACT.
We study the uniformization problem for natural variants of the first
order logic over finite words. We show that none of them has the
uniformization property, as witnessed by proposed counterexamples.

ABSTRACT.
This work resolves the question whether languages recognisable by
unambiguous parity tree automata are of bounded Rabin-Mostowski index.
The answer is negative, as exhibited by a family of examples. The fact
that the provided languages are unambiguous is rather simple to
establish. Much less clear, however, is why these languages cannot be
recognised by alternating parity tree automata of bounded
Rabin-Mostowski index. The proof of this fact relies on new
constructions regarding signatures of parity games.

16:50

Denis Kuperberg

Is your automaton good for playing games?

ABSTRACT.
Good-for-Games automata are non-deterministic automata on infinite
words whose non-deterministic choices can be resolved knowing only
the prefix of the input word read so far. They represent an interesting
compromise between determinism and non-determinism, as they retain
desirable properties from both formalisms.
One of the main open questions about this model is the following:
what is the complexity of deciding whether an input parity
automaton is Good-for-Games?
I will present the latest advances on this subject, and, if time
permits, related notions that emerged in the pursuit of this goal.

17:00

Simon Iosti

Eventually safe languages and Good-for-Games automata

ABSTRACT. (joint work with Denis Kuperberg)

Good-for-Games (GFG) automata are a type of nondeterministic automata
with a strategy resolving the nondeterminism depending only on the word
read so far. These automata are suitable for avoiding the
determinization step in the standard solution to the controller
synthesis problem (the problem of automatically synthesizing a system
satisfying a given specification). It is known that in some cases (in
particular, in the co-Büchi case) these automata can be exponentially
smaller than their deterministic equivalents, and hence could be an
appropriate approach to reduce the doubly exponential cost of controller
synthesis.

This talk will present a family of languages with this property of
GFG succinctness which are definable in Linear Temporal Logic (LTL),
which was not the case for the previously known examples. I will also
present a natural modification of LTL via a fixed point operator (the
"Eventually Safe Logic") that seems better suited to the GFG approach to
the controller synthesis problem, and allows for an algorithm producing
GFG co-Büchi automata equivalent to formulas in this logic; this
algorithm still has a doubly-exponential state blow-up in the worst
case, but behaves well on the known examples where GFG automata are more
succinct than deterministic ones.

17:10

Daniel Hausmann

Permutation Games for the Weakly Aconjunctive mu-Calculus

ABSTRACT.
Satisfiability games for the $\mu$-calculus typically employ
determinized parity automata to track fixpoint formulas through
pre-tableaux. We introduce a natural notion of limit-deterministic
parity automata along with a construction that determinizes
limit-deterministic parity automata of size $n$ with $k$ priorities
through limit-deterministic Büchi automata to deterministic parity
automata of size $\mathcal{O}((nk)!)$ and with $\mathcal{O}(nk)$
priorities. The construction relies on limit-determinism to avoid the
full complexity of the Safra/Piterman-construction by using partial
permutations of states in place of Safra-Trees. By showing that
limit-deterministic parity automata can be used to recognize
unsuccessful branches in pre-tableaux for the weakly aconjunctive
$\mu$-calculus, we obtain satisfiability games of size
$\mathcal{O}((nk)!)$ with $\mathcal{O}(nk)$ priorities for weakly
aconjunctive input formulas of size $n$ and alternation-depth $k$. A
prototypical implementation that employs a tableau-based global caching
algorithm to solve these games on-the-fly shows promising initial
results.

This work has been published as ``Permutation Games for the Weakly
Aconjunctive mu-Calculus'' at TACAS 2018 by Daniel Hausmann, Lutz
Schr\"oder and Hans-Peter Deifel
(https://link.springer.com/chapter/10.1007/978-3-319-89963-3_21).