ICE 2016: public reviews


Accepted papers

A Note on the Full Expressiveness of BIP (short paper)

A modular formalization of reversibility for concurrent models and languages

Architecture Diagrams : A Graphical Language for Architecture Style Specification

A step towards checking security in IoT

Functional BIP: Embedding Connectors in Functional Programming Languages (brief announcement)

Worlds of Events: Deduction with Partial Knowledge about Causality

An Erlang Implementation of Multiparty Session Actors

On the Emptiness Problem of Subzero Automata

Modeling Concurrency and Reconfiguration in Vehicular Systems: A pi-Calculus Approach

An abstract semantics of global models of choreographies

This short paper provides a formal analysis of the expressiveness of BIP, a component-based framework for the design of concurrent systems. The authors first introduce, at a high level of abstraction, meta-concepts for arbitrary component-based frameworks, such as the flattening property and strong and weak expressiveness. Then, they recall the basic theory of BIP in order to show that BIP is not strongly expressive and does not have the aforementioned flattening property. The main theorem (Theorem 4.1) shows that BIP is weakly expressive by introducing two (kinds of) "universal" interaction and priority models.

The paper is interesting and fits perfectly in the scope of ICE. For this reason, I suggest acceptance. On the other hand, I found the presentation a bit sloppy (see the comments below) and the notation rather confusing: from an abstract point of view, BIP is nothing more than a process algebra, so why should BIP not conform to the conventional notation adopted in that area? I guess that this could help in (a) attracting process algebra people toward BIP, (b) understanding more deeply the relationship between BIP and process algebra and (c), which is my main concern about this paper, comparing the questions of expressiveness of BIP with those that have been extensively studied in process algebras (see, e.g., some of the many papers by Daniele Gorla and the references therein). How does this paper relate to the question of expressiveness studied in Concurrency Theory in (at least) the last 20 years? This is not clear and, in my opinion, it is a bit of a shame...

Minor Comments

On page 2, the way you introduce the algebra A is rather weird. Grammars have been used since the dawn of computer science; please conform to that convention: the C_i should be A, the operators "f" should be specified to be n-ary, and the G should be "a signature". (One could even say that A is the G-algebra freely generated by B.)

At the end of Section 2, a bit more explanation would be useful to understand the difference between strong and weak expressiveness.

In Definition 3.1: in labelled transition systems, labels can often also denote internal moves (tau). In BIP, this would mean having an empty set of ports. Why does your definition discard the empty set of ports?

Page 3, interaction model: What is "n"? What are the "Q_i"? What are the "P_i"? (I managed to understand this only after reading the end of this page).

Page 3, Figure 1: in my PDF, the captions (a), (b) and (c) are missing.

Section 4: a bit more discussion of the four rules would be useful, rather than referring the reader to [1].

This paper is a development of a previously published paper dealing with the algebraic formalization of the BIP framework. The previous paper defined a format of SOS semantics rules corresponding to the rules used to define BIP glue operators, and showed that there exist sets of rules satisfying this format that cannot be expressed as a BIP glue operator.

This paper formally describes the property of /strong full expressiveness/ with respect to a set of operators, which states that for any operator of the given set, there exists a glue operator whose semantics matches that of the given operator.

A weakened version of this property, /weak full expressiveness/, states that a combination of operators can be used to match the semantics of the targeted operator.

The paper provides a constructive proof that the classic BIP semantics is /weakly fully expressive/ with respect to the set of operators buildable from the rule format specified in the original paper, and that at most two BIP glue operators are necessary to build an operator described by the given rule format.

The construction seems to be correct. This result ensures that the characterization of component composition given in the initial paper can always be implemented by a BIP system, which seems to be a valuable result for those interested in the formalization of component-based models.

The paper lacks some hints and intuition and is rather hard to follow for the non-initiated reader, but given the authors' commitment to clarify things in the final version, I think the paper is acceptable.

This short paper formally defines the notion of full expressiveness, a notion previously introduced and discussed in the context of component-based architectures (and a particular formalism thereof - BIP) but never formalised. The authors define two notions -- strong and weak full expressiveness -- and then show that BIP does not satisfy the strong property (something that had already appeared in previous work) but satisfies the weaker notion. The ideas in the paper are quite specific to BIP. The ICE community is familiar with the formalism and also the previous work by Bliudze and collaborators so I think it is the right venue for this type of subtle technical discussion.

This paper proposes an abstract framework for reasoning about reversible versions of concurrent models (such as CCS). The authors motivate their approach, give an abstract framework based on labelled transition systems and evaluate their approach on CCS and X-machines.

In a few words, the key idea of the technique to make a model reversible is to make the labels of the LTS more "informative", so that one always knows where one comes from (and hence can reverse the last action).

I think this paper is very good: it is very well and clearly written, the theory is rather neat and most proofs are available (and readable). The presentation is also at the right level of detail, making it understandable by most readers who are familiar with, e.g., CCS but not particularly familiar with reversibility.

The evaluations of the framework on CCS and X-machines are also well explained and give a convincing argument for the usefulness of the framework.

I think this paper would be a great addition to this year's edition of ICE.

This paper introduces a formal framework for deriving a causal-consistent reversible version of a model of concurrency. The main contribution consists in defining a process that allows one to extend an existing model to a causal-consistent reversible one. The process consists in refining the labels of the model to include only the information necessary to obtain reversibility, then specifying a relation between those labels that represents which actions are able to run concurrently, and finally using the refined labels and the relation to define a new causal-consistent reversible model.

In Section 1 the authors introduce the concept of reversibility and argue about the lack of frameworks to automatically (or semi-automatically) derive causal-consistent reversible models. In Section 2 they explain their approach informally, which is then formalized in Section 3. Finally, Sections 4 and 5 apply the framework to CCS and X-machines. Section 6 concludes by claiming that this method can be applied to a more general set of formalisms than the existing techniques in the literature.

This paper advances research in reversibility by providing a (semi-)automatic technique for deriving causal-consistent reversible models. I believe that the audience at ICE will greatly appreciate the contributions of this paper; therefore I think it should be accepted. However, I am giving it a weak accept because in my opinion the paper can be improved in terms of presentation. Namely, although the framework states the steps to obtain a reversible model, it would be very useful to include more intuition on "how" to perform each step. Moreover, it would be easier to apply the elements developed in Section 3 if the reasoning behind the theorems and lemmata were explained.

This paper proposes a methodology for obtaining a causally consistent reversible LTS by performing two steps:

- Obtain an equivalent LTS that is (i) deterministic, (ii) co-deterministic (i.e., P’-l->P and P’’-l->P imply P’ = P’’) and (iii) enjoys the co-diamond property (causally independent reductions commute).
- Define a reversible version of the original LTS by taking states as tuples ([L], M), where [L] is the equivalence class of an execution (i.e., a Mazurkiewicz trace) and M is the reached state.
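As a reader's sketch (my own illustration, not the paper's formal construction), the second step fits in a few lines of Python: the Mazurkiewicz equivalence class of a history is the set of words reachable by swapping adjacent independent actions, and a reversible state pairs that class with the reached state.

```python
def trace_class(trace, independent):
    """Mazurkiewicz equivalence class of `trace`: all reorderings
    reachable by repeatedly swapping adjacent independent actions."""
    seen, frontier = {trace}, [trace]
    while frontier:
        w = frontier.pop()
        for i in range(len(w) - 1):
            if independent(w[i], w[i + 1]):
                v = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if v not in seen:
                    seen.add(v)
                    frontier.append(v)
    return frozenset(seen)

def forward(history, state, label, step, independent):
    """One forward move of the derived reversible LTS: the new state
    is the pair ([history + label], step(state, label))."""
    new_history = history + (label,)
    return trace_class(new_history, independent), step(state, label)
```

Undoing the last action (the loop lemma) then amounts to dropping the final letter of a representative of the class; causal consistency follows because histories that differ only in the order of independent actions yield the same class.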

It is shown that the above construction produces a conservative extension of the original system with reversibility (by showing that the loop lemma holds). Besides, the obtained LTS enjoys causally consistent reversibility.

Then, the paper illustrates the application of the approach in two different formalisms: CCS (with guarded choices and recursion) and X-machines.

This paper presents an interesting approach to a relevant problem. I like the characterisation (given in Theorem 1) of directly reversible systems and the fact that many standard results are ensured for such LTSs. The remaining challenge is how to obtain an ‘equivalent’ system that enjoys the requested properties; the methodology does not help much with this. Nevertheless, it illustrates how to refine two different formalisms.

The paper is very well-written, the results and the technical content are clearly presented. Finally, I think that the contribution fits well within the scope of ICE and therefore I recommend its acceptance.

Architectures provide a high-level description of the components of a system and their interconnections. Very often, one is interested in describing families of architectures having particular shapes and constraining actual architectures to adhere to such shapes. Such families are known as architecture styles, and several languages have been proposed to specify them. In the field of software engineering, UML diagrams and OCL rules are the reference model. From a more foundational perspective, several approaches exist based on the observation that architectures are graphs. Following this line, architecture styles are thus sets of graphs which can be described, for instance, using (graph) grammars, (graph) algebras, (graph) logics, constraints (on graphs), and so on.

This paper belongs to this line of research. It presents a graphical language to specify architecture styles which is close to UML diagrams but with a richer set of constraint annotations and a formal semantics. The particular kind of architectures considered in the paper are sets of components interconnected through n-ary connectors attached to component ports. Such architectures are visually described with standard boxes-and-lines diagrams. The language for describing architecture styles likewise consists of boxes-and-lines diagrams, but enriched with constraints on the number of component instances, the arity of connectors, the number of connectors attached to a port, and so on. This allows one to specify architecture styles that cannot be specified with ordinary UML diagrams. The paper provides several examples of such styles, namely variations of client-server architectures, star architectures, MapReduce and so on.

Two versions of the language are presented: a simple one in which constraints are constant numbers and a more expressive one in which constraints are intervals. For each of these languages, syntax, semantics and synthesis techniques are presented. The synthesis technique consists of extracting the constraints from the diagram specification as a set of (in)equations. Solutions to the (in)equations correspond to style-conformant architectures. The size of the equational systems is however not polynomial with respect to the size of the architecture style diagram. A polynomial-time algorithm for checking that an architecture conforms to a style is briefly presented. The paper includes a discussion with respect to existing works, in particular UML diagrams, architecture description languages and other approaches based on graph grammars.
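To make the synthesis idea concrete, here is a toy sketch (my own, with hypothetical constraint and variable names, not the paper's algorithm): the diagram constraints become (in)equations over instance counts, and style-conformant configurations are their small integer solutions.

```python
from itertools import product

def solve(bounds, constraints):
    """Enumerate integer assignments (up to the given bounds) that
    satisfy every (in)equation extracted from the style diagram."""
    names = sorted(bounds)
    for vals in product(*(range(bounds[n] + 1) for n in names)):
        env = dict(zip(names, vals))
        if all(c(env) for c in constraints):
            yield env

# Hypothetical star style: exactly one center, and each binary
# connector attaches the center to one distinct leaf.
star = [
    lambda e: e["centers"] == 1,
    lambda e: e["connectors"] == e["leaves"],
]
configs = list(solve({"centers": 2, "leaves": 3, "connectors": 3}, star))
```

Brute-force enumeration like this is of course exponential in general, which is consistent with the remark that the extracted equational systems are not polynomial in the size of the diagram.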

The presentation of the paper is good in general, with some imprecisions and ambiguities that can be easily solved (some of which have been discussed with the authors in the forum). However, I have two major remarks:

(1) I am not sure that the paper should present both the simple and the interval diagrams. I appreciate the incremental introduction of concepts offered by this choice, but the reader needs to go twice through similar concepts, and other issues in the paper are left with less space than they deserve. I would suggest presenting the interval diagrams only and using the additional space to enrich the discussion of other issues.

(2) The conformance algorithm is not described in enough detail. Section 4 is indeed one page of pseudo-code described by just a dozen lines of text. It is also not clear why the version restricted to simple diagrams is presented, and not the general one that seems to be present in the technical report (according to the authors).

More generally, I wonder whether the authors could find an elegant and unifying way to define diagrams with constraints that could be easily instantiated in the two classes of constraints presented (constants and intervals) plus the ones under investigation (the arithmetic constraints mentioned in future work and in the forum).

The contribution of the paper can be assessed from two points of view: a practical one and a theoretical one. From the practical point of view, the language seems to propose an alternative to UML diagrams and other architecture description languages. It would seem that the main differentiating characteristics are the first-class treatment of n-ary connectors (over binary connectors such as associations in UML diagrams) and the related (multiplicity and single-choice) constraints. This characteristic allows one to specify architectural styles that one would not be able to specify with some of the existing languages, e.g. with bare UML diagrams and other ADLs. The authors do not mention if UML should/can be extended to adopt the richer constraints of the proposed language or if OCL can be used to encode those.

From the theoretical point of view, the language should be seen in comparison with other languages proposed to characterise classes of graphs, like logic-based approaches, graph grammars, and graph algebras. The related work discussion seems to be restricted to graph grammars only, while it would have been interesting to mention and briefly discuss logic-based approaches (including the authors' previous work on configuration logics, Courcelle's MSO, spatial logics for graphs, etc.) and algebra-based approaches. The paper, however, does not contain any theoretical statement or result related to expressiveness. The only discussion with respect to grammar-based approaches is restricted to the fact that the authors' seminal paper [17] claimed that graph grammars are hard to understand. However, the authors themselves admitted in the forum that they do not agree with that statement.

All in all, the paper proposes an interesting extension of UML-like specifications of architecture styles that could be adopted in practice and that would be worthy of further theoretical investigation with respect to expressiveness and complexity. The paper could provide stimulating discussions in the workshop, in particular related to connector-based interaction and concurrency approaches such as BIP and REO, which are quite popular in the ICE community. On the other hand, the approach is restricted to structural aspects, which may disappoint the part of the audience who would expect behavioural aspects to be considered, given the interest in protocols, protocol conformance, session types and all that.

The paper describes a simple language to represent structural constraints of architectural styles. The idea is to specify styles by imposing cardinality, degree, and multiplicity constraints on the interaction ports of component types. It is then possible to check that the constraints are consistent and to set up a system of linear equations whose solutions correspond to configurations conforming to the style. The paper gives a few examples and discusses a validity checking algorithm.

The paper positions itself in the research program of the group (cf. [2,3,19]) of providing an architecture description language tailored to BIP. The driver here seems to be simplicity: the authors claim (in the introduction and conclusions) that existing approaches are not easy for practitioners.

Evaluation

Pros:

- the paper strives for simplicity in the specification of software architecture
- the paper is well-written and relevant to ICE

Cons:

- the simplicity of the approach has a trade-off in terms of expressiveness; the discussion in the forum highlighted that the framework is suitable for some applications at ESA (mentioned in the paper), and more on this should be incorporated in the paper
- there is not enough evidence that the proposed approach copes well with complex scenarios; adding details on the ESA case studies could clarify this aspect.

What I liked most about the paper is indeed the simplicity of the idea. I discount the fact that the paper does not give evidence that practitioners are comfortable with this framework (a project with ESA is ongoing; see my comment on this below). However, I am not fully convinced about this claimed simplicity. In fact,

1. the paper focuses on structural constraints only, and
2. how simple is the approach, actually?

Let me start from 2. The examples in the paper are pretty simple. Hence, once the interplay of multiplicity, degree, and cardinality is clear, one can immediately start to draw diagrams. After all, all one needs to do is decorate some boxes and wires with some natural numbers, right? In fact, this is fine. But the problem is: how easy is it to get these numbers right? Take the simple examples of Figures 5 and 6: to come up with a binary synchronisation, or with the other case, some juggling with the degrees and cardinalities of both ports is necessary. This is easy for these examples, but what about connectors involving, say, 5-6 ports: would it be as simple to find the right numbers? My guess is that in complex cases one would end up setting inconsistent constraints most of the time. Anyway, I believe that the idea of using a 'UMLish' notation could potentially attract practitioners, as the authors seem to aim for in the conclusions.

The paper is not very well presented and a few improvements have been suggested in the discussions on the forum for some more specific aspects. I will therefore comment on more general presentation issues.

A better use of space could have been made by starting directly with Section 3 and maybe commenting on the following aspects.

a. The paper does not do justice to the validity checking algorithms, relegated to the end with basically no comment. I would have also appreciated a discussion of the complexity of solving the systems of linear (in)equalities. Of course this is exponential, but maybe there is some hope that in practice pathological cases are rare? Any preliminary evidence about this?

b. You are making several design choices that are not explicit and not obvious. For example, it seems to me that your choice of typing imposes that components have a given type T (in a 'good' architecture). This is reasonable if one thinks of ports as, e.g., representing the interface of, say, an object or an API. However, it is not so obvious when ports represent, say, communication channels; one might want a component to be able to communicate on the channels represented by two different types of a diagram. To achieve this in your model you would probably need a 'proliferation' of types T which, suitably connected, realise the channels of interest.

c. You mention an ongoing project with ESA; examples from this domain would actually have been appealing. It would also be interesting to discuss how ESA staff judge the use of your language.

Finally, a suggestion. I have the feeling that one could speculate on how to use your constraints (or add new ones) to reason about coupling. For instance, one could think of searching for solutions that guarantee some 'low' level of coupling, or of predicting the level of coupling in configurations from the simple architectural diagrams.

The paper presents a graphical language able to describe architectural styles. More precisely, it describes two kinds of architecture diagrams: one (interval architecture diagrams) is an extension of the other (simple architecture diagrams) and is able to describe more complex architectural styles. Since practitioners mostly use UML, the authors decided to adopt a similar notation, but they rely only on a small set of notions to emphasize the conceptual clarity of the proposed approach. One of the main characteristics of the approach is to rely on n-ary connectors instead of binary ones, as generally done in other graphical notations.

The authors give conditions for guaranteeing the consistency of both types of diagrams (i.e., guaranteeing that a diagram represents an architecture) and present a conformance algorithm working in polynomial time for simple diagrams (and provide the extended version of the algorithm, also working in polynomial time, in an online technical report).

I really enjoyed reading the paper, which sometimes (not very often) fails to give the intuition behind a proposition or a definition, but is in general clear and full of examples. I liked the incremental way in which the authors introduced the two languages, even if this generates some redundancy, and I think that the little explanation given for the algorithm should be extended.

My only main concern w.r.t. this paper is its actual contribution and the motivations behind it. From the introduction and related work sections, it seems that the main reason to introduce a new graphical language is that the existing ones are not good enough (lack of formal semantics) and the non-graphical ones are too difficult to be approached by practitioners. This generated the idea of using a UML-like notation with a formal semantics. However, if this is the case and the many attempts to give a formal semantics to UML failed (i.e., none of them has been adopted as a standard by practitioners), why should this language’s fate be different? Even if practical relevance might not be a main concern for ICE, if the motivation behind this work is to have a formally defined language simple enough to be adopted by practitioners, I recommend the authors conduct an empirical evaluation of the usability of the language.

An interesting technical contribution is the expressiveness of the language and how it differs from existing approaches (graph-based, Alloy-based, …). The related work section (I guess for space limitations) does not properly address this issue. I suggest the authors include a more comprehensive discussion in the final version of the paper.

The paper extends the IoT-LySa calculus with symmetric key encryption and generalizes an existing control flow analysis (CFA) to deal with this new construct and to over-approximate actuator actions. The paper presents a few example security properties supported by the CFA: secrecy and a simple multi-level security policy.

This is the companion paper of a work accepted at Coordination 2016. Based on the CfP, the paper is in scope under the category: "a full paper dedicated to the use case you could not fit entirely in your Discotec paper". My evaluation is thus focused on the "use case" aspect of the present paper: the calculus and the CFA have already been evaluated by the Coordination reviewers.

The paper is well-written and quite a nice read as a whole, but I have two main concerns about the new original contributions:

- I find the security analysis a bit crude. Encryptions are always considered public, but this is only sound as long as key leakage does not happen at runtime. If one encrypts a secret under a key known to the attacker and deems the corresponding encryption public, the secret is leaked. This is not captured by the analysis in the paper. Indeed, the analysis lacks all the subtleties typically needed to statically deal with arbitrary attackers, which would significantly complicate the analysis specification. Concretely, the paper lacks an "opponent typability" result, proving that any opponent is accepted by the analysis under a "sufficiently weak" analysis environment. Though this limitation is somewhat stated in the paper, it is not a detail from a security perspective and it severely limits the contributions of the work.
- The security analysis is based on a set of results which allow the verification of safety properties as a corollary of the soundness (over-approximation) of the CFA. I have mixed feelings about these results: on the one hand, they suggest the analysis is quite general; on the other hand, this is largely expected, since the analysis is a rather standard CFA. The proofs of the results look straightforward and experienced readers can imagine all the results as corollaries of the CFA. This is mostly due to the targeted security properties being too limited.

To sum up, I find the security analysis not very inspiring and insufficient in terms of provided security guarantees. This does not mean I do not see potential in this line of work, but I think there are not enough new interesting contributions in this submission to justify publication.

This paper builds on the authors' previous work, which sets forth a process calculus for Internet of Things applications.

As novel contributions for this paper, the authors:

- Add cryptographic primitives to the calculus.
- Extend the already existing static analysis for the calculus to track which actions are triggered by whom.
- Demonstrate that the output of the mentioned static analysis can be inspected for security-related analysis.

Evaluation

The topic addressed by the paper is definitely in the scope of ICE. The overall contribution is a bit mild, in particular:

about Contribution 1:

This addition turned out to be very easy to make. Furthermore, the new calculus is simply presented and not demonstrated with use cases.

about Contribution 2:

The calculus is not mobile in spirit. Because of this, in communications the locations, actions and receivers (here, actuators) are completely hard-coded in the program. Therefore the analysis of Contribution 2 reduces to syntactically spotting <j, \gamma>.P inside a (l : [ ... ] ) and saving j, \gamma and l. My impression is that the problem being solved is not particularly challenging for the calculus at hand.
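To illustrate the point, a deliberately naive syntactic scan of this kind (my own toy sketch over an ad hoc textual syntax, not the paper's CFA) takes only a few lines:

```python
import re

def actuator_actions(program):
    """Toy scan: collect (node, actuator, action) triples from
    occurrences of <j, gamma> inside a node written as l : [ ... ]."""
    triples = []
    for node in re.finditer(r'(\w+)\s*:\s*\[([^\]]*)\]', program):
        l, body = node.group(1), node.group(2)
        for act in re.finditer(r'<\s*(\w+)\s*,\s*(\w+)\s*>', body):
            triples.append((l, act.group(1), act.group(2)))
    return triples
```

For example, `actuator_actions("n1 : [ <j1, on>.P | <j2, off>.Q ]")` yields the triples `('n1', 'j1', 'on')` and `('n1', 'j2', 'off')`, which is essentially the information the analysis is asked to compute.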

about Contribution 3:

The way the static analysis tracks messages and values is nice, but it is a contribution of a previous paper. Here, the authors leverage labelling of the content in the output of that static analysis to perform two security-related analyses. My impression is that this is quite a basic analysis.

I found it extremely useful that the authors are clear upfront about the preliminary status of this work. This indeed seems a good starting point. Although I am not much motivated towards the acceptance of this paper, I would not be opposed to it.

Overview

This paper addresses the issue of checking action tracking and security properties for models of IoT systems. The focus is on static analysis, in particular Control Flow Analysis (CFA). The model and CFA in this paper arise as an orthogonal extension of a model and CFA recently put forward in a COORDINATION’16 paper [6].

The main novelties of this full paper wrt [6] are:

a- an analysis of actuators’ behavior (i.e., the actions that a given actuator can perform in a given system node) and

b- cryptographic primitives, present at the level of processes and (encrypted) terms, which allow one to analyse safe information flow properties.

The formal model describes compositions of nodes. In turn, a node contains a store, processes, sensors, and actuators. The dynamics of this model is given by a reduction semantics, also given in two levels.

The CFA extends that in [6] with an extra component \alpha for action tracking (cf. (a) above).

Correctness of the analysis is ensured by Theorems 4.1 (Subject Reduction, mostly as usual), 4.5 (action tracking), 5.5 (confinement - all values are public), and 5.8 (safe communication / correct information flow).

Evaluation

This is an interesting paper, which fits well in the scope of the workshop, also because it nicely complements the paper presentation for [6]. Indeed, this work is admittedly incremental wrt [6]. Incrementality can have positive and negative readings, I guess, but I prefer to take the positive side: the extensions to the analysis in [6] presented here serve to support the generality of the model, approach, and CFA presented in [6].

My impression is that the underlying formal model is somewhat contrived; that is, the calculus is not particularly minimal, nor easy to digest. However, this is not a serious issue, in my opinion. The paper itself is clear; the level of detail is appropriate, and the theorems seem plausible yet not extremely surprising in formulation nor details. A number of technical infelicities in presentation (and suggestions for improvement) have been already pointed out by other reviewers in the forum - the prompt, detailed answers from the authors have been quite useful to me.

The presentation is overly incremental wrt [6]; this should be corrected in a revised version. I would have liked to see a discussion (in Section 2) of the targeted properties (action tracking, confinement, safety), including comments on why the CFA in [6] is inadequate for analyzing such properties. This may sound obvious, but it could be reassuring for an occasional reader, also because the current discussion is vague, and too focused on the (very simple) example. Also, the presentation in Section 3 could be streamlined/simplified with explicit mentions to elements new to this presentation (i.e., elements not in [6]).

Although I certainly see the merit in the line of work on process algebraic approaches to IoT analysis (as in this paper, but also in other recent works), I would have also appreciated some discussion on its practical applicability. Based on some (brief) exchanges in the forum, the authors seem to have interesting insights on these practical perspectives.

This short paper presents an implementation of BIP in functional programming. In particular, Haskell and Scala libraries have been developed for executing BIP applications. While an imperative implementation for BIP already exists, obtaining a (purely) functional one poses new challenges.

The paper starts with a quick background on BIP, giving a short (partly formal) semantics of connectors. Then, the algebra of connectors is mapped to functional programming, describing the implementation. A few examples are discussed.

Since the library code was made available, I had the opportunity to take a cursory look "under the hood". The Functional BIP library makes use of Haskell/Scala types so as to ensure, at compile time, that BIP connectors are composed in a type-safe way. The library also fits nicely in the Haskell ecosystem by providing standard type class instances (Profunctor, Monad, ...), which can help a Haskell programmer quickly grasp the basic structure of the connector type. In Scala, variance annotations are also exploited to model the profunctoriality of the connectors.
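For readers unfamiliar with the type classes mentioned, the profunctor view of a connector can be sketched in a few lines (a generic illustration, not the library's actual API): `dimap` adapts a connector on both ends, pre-processing its input and post-processing its output.

```python
def dimap(pre, post, connector):
    """Profunctor-style adaptation: a connector from A to B becomes
    a connector from A' to B', given pre : A' -> A and post : B -> B'."""
    return lambda x: post(connector(pre(x)))

# A trivial 'connector' doubling its input, adapted to read a string
# and print a string.
doubler = lambda n: n * 2
adapted = dimap(int, str, doubler)  # adapted("21") == "42"
```

The point of exposing such an instance is composability: a connector written against one interface can be reused against another without touching its internals.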

Overall, I believe that the paper provides a nice contribution, and that it presents it in a clear and readable way.

The paper presents implementations of the BIP framework in two functional languages, Haskell and Scala. They provide connector builders that can be composed to build interactions. The set of predefined connectors seems to be sufficient to build any stateless interaction between atoms.

The main contribution of these new implementations is bringing dynamic reconfiguration of connectors, which is not part of the legacy BIP framework and is made possible here thanks to the use of higher-order functions. It would have been great to have an example that uses the joined operator, which seems to bring a new world of possibilities to the BIP framework.
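To give a flavour of that higher-order style, connectors can be modelled as functions from the set of currently ready ports to the list of allowed interactions; composition and dynamic reconfiguration then come for free. The following Python sketch is my own loose analogue, not the library's actual Haskell/Scala API:

```python
# Loose sketch of connectors as higher-order functions (illustrative
# only; the real library encodes this with Haskell/Scala types).

def rendezvous(*ports):
    """All given ports must participate together, or not at all."""
    return lambda ready: [set(ports)] if set(ports) <= ready else []

def broadcast(trigger, *ports):
    """The trigger port fires together with any ready subset of the rest."""
    return lambda ready: (
        [{trigger} | (set(ports) & ready)] if trigger in ready else [])

def union(c1, c2):
    """Dynamic composition: interactions of either connector are allowed."""
    return lambda ready: c1(ready) + c2(ready)

# Swapping `conn` for another composed connector at runtime is exactly
# the kind of reconfiguration higher-order functions make easy.
conn = union(rendezvous("a", "b"), broadcast("c", "a"))
interactions = conn({"a", "b", "c"})   # contains {'a','b'} and {'a','c'}
```
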

The articulation between global ports used in atoms and the binding could be discussed more thoroughly to answer the following questions:

What happens when an atom is bound to a port it doesn't use, or conversely when an atom has a port that is not bound anywhere?

Is it possible/desirable to statically detect such situations? (Probably not in the case where dynamic reconfiguration is involved.)

This "brief" nicely summarises the efforts of the authors to provide Haskell and Scala implementations of BIP. I find the approach elegant as it makes it possible to effectively experiment with BIP constructions. One of the most appealing features is the "dynamicity" that comes with the so-called "dynamic combinators".

The paper presents a model of event causality in distributed systems which separates some local events from the global world, and it provides a way to infer relations of causality and concurrency between events (both global and local). The local events are said to belong to a microcosm, and are chosen such that they constitute a total poset (with respect to the causality relation). A microcosm can be told some pieces of information from the global world and can use them to infer relations.

The main results are the following:

- The relations inferred in the microcosm approximate the ones in the global world (since its knowledge is partial), and never contradict them.
- A microcosm is proved to be consistent, meaning that for every two events, exactly one relation is found for them.
- Two microcosms belonging to the same world do not infer things which are contradictory.
- The order in which information is added is not relevant.

The results are nice and clean, however they assume that every new piece of information added to the microcosm does not violate a certain property of the microcosm itself. A potential limitation of the work is that currently there is no result showing how this property can be verified. Solving this open issue seems to be essential to make the approach usable in practice.
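The inference scheme can be made concrete with a small sketch (my own illustration, not the authors' formalism): a microcosm accumulates causality facts, keeps them transitively closed, and classifies event pairs, where "concurrent" is merely the absence of known ordering:

```python
# Toy "microcosm": accumulates partial causality knowledge and
# classifies event pairs. Illustrative only, not the paper's model.
from itertools import product

class Microcosm:
    def __init__(self, local_events):
        # Local events are totally ordered by their list position.
        self.before = {(a, b) for i, a in enumerate(local_events)
                       for b in local_events[i + 1:]}

    def tell(self, a, b):
        """Learn the global fact 'a causally precedes b'."""
        # Knowledge is kept closed, so this catches derived contradictions.
        assert (b, a) not in self.before, "contradicts current knowledge"
        self.before.add((a, b))
        self._close()

    def _close(self):
        # Transitive closure of the known causality facts.
        changed = True
        while changed:
            changed = False
            for (a, b), (c, d) in product(list(self.before), repeat=2):
                if b == c and (a, d) not in self.before:
                    self.before.add((a, d))
                    changed = True

    def relation(self, a, b):
        if (a, b) in self.before: return "causes"
        if (b, a) in self.before: return "caused-by"
        return "concurrent"   # only an approximation of the global world

m = Microcosm(["e1", "e2"])
m.tell("g", "e1")              # a piece of global information
print(m.relation("g", "e2"))   # "causes", by transitivity
```

Note how the `assert` in `tell` mirrors the property the reviews mention: every new piece of information is assumed not to violate the microcosm's consistency, and verifying this assumption is exactly the open issue.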

The topic of causality is important and there is quite some ongoing research on it. Consequently, the effort the authors put into contributing to it is highly appreciated. The present paper proposes a deduction-based approach for reasoning about causality, more specifically, about the orderings of events (in the sense of Lamport). The main concept the authors exploit is that of partiality, and this, on its own, is a nice and reasonable idea. The authors could further explore more expressive frameworks for reasoning about causality in the sense of "something is wrong, what might be the cause", which needs more than ordering events. An algorithmic setup for such a framework (based on counterfactuals) has been provided in "Causality Analysis in Contract Violation", RV'10.

The authors could also consider addressing the complexity of their (on/off)line decision making since this is, and the authors do acknowledge it in the introduction, important.

This paper establishes a proof-theoretical form of deduction in the presence of partial knowledge, where the object of deduction is a pair of relations comprising a partial order (causal information) and a "concurrent" relation between events.

A wide array of results is presented, ranging from computability to two definitions of bisimulation ("forward" and "backward").

Causality is a huge topic in Computer Science, possibly stemming from universally accepted models such as Petri nets.

Deduction, especially in the presence of partial knowledge, is even bigger; Artificial Intelligence is one of the ancestors of the subject.

There is plenty of general interest in both subjects. Therefore my impression is that deduction about causality, and the research proposed in this work, may very well set a trend in Computer Science, especially in the application domains envisaged by the authors (fault-tolerant services).

Overview

This paper presents an implementation of a monitoring framework for multiparty session types for Erlang systems. Erlang applications tend to be composed of distributed communicating components -- indeed, the language was designed to make distribution simple. In this setting, it is natural to desire to monitor communication, and check if components are following the intended protocol.

Multiparty session types provide for a modelling language to specify the global interaction between components and reason about it. Further, under certain conditions, the global specification ("global type") can be projected to local specifications ("local types") describing what each component should perform. The monitoring system presented in this paper exploits such projections to derive finite-state machines (CFSMs) which can be exploited for monitoring.
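As a toy illustration of monitoring against such a projected local type, a CFSM-style monitor boils down to a transition table that rejects out-of-protocol messages. The sketch below is illustrative only (state and action names are invented, not the paper's Scribble output or Erlang API):

```python
# Minimal local-type monitor as a finite-state machine (illustrative).
class Monitor:
    def __init__(self, transitions, start):
        self.transitions = transitions  # {state: {action: next_state}}
        self.state = start

    def check(self, action):
        """Advance on a send/receive action; reject protocol violations."""
        nxt = self.transitions.get(self.state, {}).get(action)
        if nxt is None:
            raise RuntimeError(
                "protocol violation: %r in state %r" % (action, self.state))
        self.state = nxt

# Hypothetical local type for a chat client: send a join request,
# then receive an acknowledgement.
client = Monitor({"start": {"send join": "joining"},
                  "joining": {"recv ack": "joined"}}, "start")
client.check("send join")
client.check("recv ack")   # client.state is now "joined"
```

In the actual system such a check would sit between each actor and the network, performed by a separate monitor process before the communication happens.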

This work was inspired by Neykova and Yoshida, who designed a monitoring system based on session types for Python processes. This paper brings these ideas into the context of a purely functional language such as Erlang, which has no mutation, nor shared memory which can be used to drive coordination.

The paper describes the software monitoring architecture, also through a running example (a chat server). Initially, protocols (global types) are defined in the Scribble language. This specification language also allows for sub-sessions to be created and run. The failure of such sub-sessions (e.g. remote crash or protocol violation) can be detected in Scribble, and acted upon.

Then, Scribble protocols are projected into local types, so that roles and monitors can be defined accordingly. A coordinator process is spawned so that processes/actors can register for roles, and try to initiate a session of the protocol. After a session is started, each actor interacts with others through a "conversation" set of functions, which perform the monitoring check before the actual communication happens.

Each actor has its own local monitor, which is a separate process. Monitors are optimized using nested FSMs (Hu et al.), ensuring that the generated monitors have polynomial size even when the global type uses parallel composition. Protocol violations are handled according to the Erlang "let it fail" philosophy, where agents can simply crash (and, if needed, be restarted). Then, some failure detection mechanisms are used to make other actors aware of the failure. In general, both push-based and pull-based failure detection are needed to detect failures, so the implementation uses both.

The paper also evaluates the performance of the implementation through benchmarks, estimating the cost of monitoring.

Evaluation

I consider the topic to be interesting, in general. I found the design of the monitoring system to have some simple/straightforward parts ("exactly what one should expect") and some less trivial parts (e.g. the failure detection systems).

Overall, the contribution is good, albeit not very strong in my view. It is nice to see some effort for bringing theory to practice, especially towards actor-based languages such as Erlang.

I believe it would be interesting, in implementation-based papers such as this one, to have some (semi-)formal claim of correctness of the implementation. For instance, one might wonder under which conditions we can obtain (some form of) global progress -- and possibly, how the "let it fail" approach affects it. This kind of result would provide a better bridge between theory and practice.

While the current presentation is fine, some points are not very clear, as we already discussed in the ICE forums. However, I have full confidence that these can be fixed, with a modest effort.

This paper presents and discusses an implementation (monitored-session-erlang) for runtime monitoring of Erlang/OTP server applications against the communication protocols given by multiparty session types.

Local protocols are projected from the global types and are used in runtime monitoring for checking the session messages against communication protocols. Exception handling is an interesting part of the paper. Two different methods for failure detection are explained and compared. The authors motivate the importance of structuring possibly-failing sessions. The Erlang implementation of monitored-session-erlang, the chat server, and the DNS server are available online. An Erlang DNS server case study is evaluated.

The author proposes the use of multiparty session types to verify the communication protocol of Erlang programs. To this end, the author follows the standard approach of verifying multiparty session types at runtime: a monitor process checks the message order and message signature for incoming and outgoing messages of a given process. Technically, the global protocol (a type) is projected as a local protocol that represents the communication from the point of view of a process. An interesting interplay between the theory of multiparty session types and Erlang programming is the problem of fault tolerance: Erlang processes, which represent roles in multiparty session types, may crash, so the authors propose the use of exception handling through subsessions.

The paper is interesting, well written, and well structured. The presentation and examples are helpful and appear at the right place. To me, the contribution of this paper is the open source tool along with examples. The weakest points of the paper are:

- The paper was not effective at showcasing the original contributions. Is it the tool? Is it the Scribble extension? State these in a visually distinguishable manner in the introduction section.
- The paper should be clearer on distinguishing itself from [18].
- Distinguish engineering differences from fundamental differences, as these are important for different people.
- Regarding Section 4.2, the author should be clearer on discussing the impact of the proposed extension on the theory of multiparty session types. Is this an extension? And how can we be sure that this extension is safe?
- The author should include some discussion of future work.

Overview

This paper proposes an actor framework where actors hold several active (concurrent) objects which can access the actor's mailbox. The overall framework is not much different from the general actor idea, as far as I can tell, except for the fact that each method offered by an actor may specify a lock which will enforce some temporal properties between different calls/invocations on methods that refer to the same lock.

Evaluation

I feel like this paper has some potential, but somewhat fails at showing it clearly. It is a rather complete work, in the sense that it gives a language, with features that look new (to me), an implementation, a fully implemented example, and some benchmarks.

The context of this paper is relatively well explained by the authors; however, it could be made clearer (from the very start) how their approach differentiates itself from other actor-based languages. Indeed there's been *a lot* of new such languages in recent years, so one always wonders why a new one is required. The idea of having "locks" on method signatures comes out rather late while it is actually a fundamental piece of the paper.

Ideally, one would have liked to see the framework proposed by the authors compared with other actor frameworks (such as the ones discussed in the related work section). I understand that this might not be realistic in the scope of a workshop paper, but some discussions about this would be welcome.

The paper mentions a type system as a contribution of the paper, but it is not in the paper. The authors did say on the forum that it is "standard", but either way, it should be clarified whether or not it is part of the paper.

In general, I found the writing/presentation a bit confusing. Notably:

- The structure of the introduction is not so clear [why do we need more parallelism? better performances, I guess?].
- The motivating example section would benefit from some pseudocode, or a figure.
- The syntax and semantics of MAC should be given in a less brutal manner, for instance by using (the) example(s) and/or splitting the rules so that they are closer to their explanations -- there is space for this.
- The section about the implementation needs to clarify WHAT you are implementing. One would think that you've implemented a compiler/parser/runtime for the MAC language but this is not the case. In fact, the implementation looks more like a Java API for MAC. Please give an overview of the implementation and explain how the language presented in the earlier section connects to that implementation.

This paper introduces multithreaded actors, which contain multiple active objects sharing a message queue. Compared with traditional actors, which process messages sequentially, multithreaded actors allow messages to be processed in parallel. This improves execution efficiency. The abstract class Actor and the class Message are implemented in Java. There was no link to download the implementation, but the authors provided the implementation during the discussion in the forum. It would be useful to make the implementation available online for all readers. A type system is mentioned in the paper but is not presented anywhere. The syntax and the semantics of the language are provided, and a banking example explains the mechanism. I propose that this paper be accepted.

The article presents the semantics and implementation of a framework that supports multi-threaded actors (MAC). Within each MAC there is a set of active objects that all share the same actor message queue and which process incoming messages concurrently.

I assume that this is so that there is a lighter-weight alternative to creating multiple actors so that system performance can be improved.
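For readers unfamiliar with the idea, the shared-mailbox design can be sketched generically as follows (invented names, a loose analogue rather than the MAC API): several worker threads drain a single queue, so one actor's messages are processed in parallel instead of strictly sequentially:

```python
# Toy multithreaded actor: one shared mailbox, several workers
# draining it concurrently. Illustrative only, not the MAC framework.
import queue
import threading

class MultithreadedActor:
    def __init__(self, handler, workers=4):
        self.mailbox = queue.Queue()   # single queue shared by all workers
        self.results = queue.Queue()
        for _ in range(workers):
            threading.Thread(target=self._loop, args=(handler,),
                             daemon=True).start()

    def _loop(self, handler):
        # Each worker plays the role of one "active object".
        while True:
            msg = self.mailbox.get()
            self.results.put(handler(msg))
            self.mailbox.task_done()

    def send(self, msg):
        self.mailbox.put(msg)

actor = MultithreadedActor(lambda n: n * n)
for n in range(5):
    actor.send(n)
actor.mailbox.join()   # block until all five messages are processed
```
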

On p. 2 the approach is compared to PAM - "however in our proposed abstraction we want to avoid using a separate thread pool per actor as this is a large memory overhead especially in Java." But in Listing 4, p. 15, the pool of availableWorkers is a local object field of the Actor - the Bank actor in Listing 3, p. 12 is populating it with new ActiveObject instances of local class BankEmployee. So how is the separate thread pool per actor being avoided in MAC? To me it looks like you also have such a separate thread pool per actor.

I'm ashamed to admit that I didn't understand what "e?" stands for in the syntax (Figure 1, p. 4, "g ::= e? | b | b /\ b") -- what would the constructs "null ?" or "e ! m(e) ?" mean? I couldn't figure it out from the operational semantics in Figures 2-4 (I'm probably missing something obvious here...).

On p. 10, it's stated "Execution is then forwarded to the mainExecutor which returns a Java Future to the user for synchronization with the rest of the application outside the actor.". Looking at the implementation of the Actor class (why is it abstract?) in Listing 4, p. 15, the mainExecutor doesn't return anything inside the run method - so where does this return to the user happen?

Furthermore it's stated "The selection of the ActiveObject is important to form the lambda expression that saves the application from having a significant number of suspended threads if the set of data that is required is locked." - where does this happen either? Couldn't figure it out by looking at Listing 4 or the Bank example in Listing 3 (p. 12). How would one define this selection in the Bank example so as to avoid this significant number of suspended threads? An example would be nice (I understand that one can define the priority for the items in the availableWorkers queue but don't understand how that definition can guard against the problem of significant number of suspended threads).

In Listing 3 (Bank Class on p. 12) I couldn't understand what is the purpose of the variadic arguments "parameters" of method getNewWorker. They don't seem to be used in the implementation of it - maybe the presentation could be simplified by removing these (from the Actor API as well).

The need for manually keeping track of busy/free workers in each method ("Bank.this.freeWorker(this);" or "Bank.this.freeWorker(this, n1, n2);", etc. in Listing 3, p. 12) is I think a major obstacle for writing correct programs - people will forget some of these/get them wrong for sure. Isn't there a way these could be done automatically?

I think that it'd help if you enabled line numbers on listings and referred to them in the text.

It'd help even more if you showed the "client" side of the Bank actor as well (at least part of it), so that it's easier to see how the Bank's functionality is being used in this framework.

Overview

The subject of this paper is subzero automata. They simplify zero automata, a class of automata on infinite binary trees used to study a probabilistic variant of weak MSO (called WMSO+\Forall_{\pi}^{=1}).

The main result of the paper is that determining whether a subzero automaton accepts some regular tree is decidable. The authors present this as a step towards decidability of WMSO+\Forall_{\pi}^{=1}: in order to obtain the full result, one should generalise regular trees to arbitrary trees and subzero to zero automata.

This generalisation is not discussed in the paper. Instead, the authors point out as their main original contribution the proof method of their decidability result: while traditional approaches are based on combinatorics on graphs, here a deduction system is developed and used to characterise acceptance of some regular tree.

Evaluation

The motivating problem for this work, emptiness for subzero automata, is left unsolved, and the generalisation of the main result from regular to non-regular trees appears challenging. Nonetheless, the paper still presents quite a few elements of interest, most notably the novel proof theoretic method. I believe the deductive system introduced by the authors is intriguing and triggers a few other research questions, see my list at the very end of this review.

Unfortunately, I see as a big flaw of the paper the fact that the decidability result is not really proven. It is stated as a corollary of Theorem 19, but in my opinion discussion on the ICE forum, also involving one of the authors, demonstrated that details are not that trivial and need to be spelled out properly, especially because we are talking about the main result of the paper. In this situation, I cannot fully recommend acceptance of this paper.

I will conclude with a few questions of a more speculative kind, that I also suggested in the ICE forum.

- The proof-theoretic method seems quite simple and flexible. As it is not particularly bound to the probabilistic setting, it could be useful in other contexts as well. One obvious direction is standard parity automata, which are a particular case of subzero automata. The two Looping rules would probably simplify, or even merge. Also, it could be interesting to compare the complexity class of your method with respect to existing ones based on combinatorics.
- The lack of a result like Rabin's regularity theorem is indeed an interesting feature of subzero automata. Do you have an idea of how a subclass of subzero automata satisfying the regularity theorem would look like? Qualitative automata should yield one such subclass, but there should be more.
- Your notion of profile formally looks like a clause of logic programming. And the "Unification" rule looks like a cut rule in sequent calculus. At a later stage it may be interesting to explore these analogies. For instance, thinking of what cut-elimination could mean in your proof system.
- My wishful thinking is that you might be able to deal only with regular trees because the proof system is intended to work with finite proof derivations. There are a few proposals of proof systems in which it makes sense to consider infinite proof derivations (typically based on coinductive reasoning): I think that might be a direction to consider if you want to generalise to non-regular trees.
- Similarly, I was wondering about similarities between your deductive system and standard tableau methods for the modal mu-calculus.

Overview

One of the authors has previously related decidability of a fragment of a variant of MSO to the decidability of the emptiness problem for a kind of tree automata. In this paper, the authors tackle the decidability of emptiness for similar automata. They prove that "regular emptiness" is decidable, a step forward for deciding the full problem. They use a novel method of a deductive system to reason about "profiles", an abstraction of possibly infinite partial runs of the automaton.

Evaluation

The paper is very pleasant to read: it is well written, quite precise, and has a straightforward presentation. The result is incremental because it's only for regular trees, but the method looks nice and seems robust.

However, I have yet to be convinced that one can derive a decision procedure from the deductive system and the completeness proof. I have a very rough idea of how it might be done, but the remaining amount of work seems to me comparable to the completeness proof itself, and the short interaction through the forum failed to help me understand the problematic cases.

As this decision procedure is the very claim of the paper and is simply missing, I cannot recommend acceptance.

I give more details below on this problem and other comments.

* MAJOR CONCERN : DERIVING DECIDABILITY FROM COMPLETENESS

Using the completeness theorem, deciding regular emptiness can be done by deciding the existence of a derivation in the deductive system. A given automaton does give a trivial bound on the number of profiles, namely |Q|^2 * 2^|Q|, but the number of derivations is not bounded. In particular, the sets w can be both split and increased while going up, so I don't see how we avoid cycles.

The completeness proof could be used to prune the search space. For example, in the base case when q \notin Q_all: even though partial runs of the given profile can be of any size, we can forbid using twice the same transition of the form p->(p,qi) or p->(qi,qj). Note that, as far as I could tell, the "N" bound given in the forum reply is not enough; sometimes you need a tree four times bigger: if the transitions are of the form p->(p,p) and p->(q1,qi), then for the profile p->{q1,...,qn} you need 2n leaves and 2n nodes (n of which being p->(p,p) transitions).

Even when q \in Q_all, one can prune the search space to finite objects, as p leaves might just be left to be done by arrows back to the root.

But then comes the inductive case: probably one can build a finite enumeration (with the same w.l.o.g. considerations as above) of all trees with leaves covering the set {q1..qn}. The remaining leaves p1..pm should then be dealt with using enumerations of such trees, using for the q leaves either recursion (if not in Q_all) or looping (if in Q_all); but the problem is that there might be an infinite number of such q leaves, the remaining partial runs in the completeness proof still being infinite. This latter infinity will be handled "later" in the induction with other loops or recursion, but the corresponding enumeration becomes too complex for me to be sure it makes sense.

Needless to say, this would need another round of review.

* USEFULNESS OF RESULT

Another question should be addressed in the introduction: it would be useful to state somewhere whether the non-empty language mentioned in "iff A_phi accepts a non-empty language" (Bojańczyk's proof) can be a non-empty language of only irregular trees.

The answer on the forum suggests that it might be the case, because the only-irregular language L3 can be described by a conjunction of MSO formulas. If it is the case (but it's not obvious to me), then this decision procedure is less relevant for deciding truth in this variant of MSO.

Even if you don't know, I think that it is still worth mentioning here, as I assume the readers would wonder as well.

* PRESERVATION OF REGULARITY

Page 14: when getting \rho' from \rho_i we remove an infinite number of vertices. How can we know the result is still regular in some way? You need to maintain the fact that for some q' in Qzero you have probability zero of an infinite path with q'; how can we be sure that removing an (infinite) number of subtrees will preserve this?

* QUALITATIVE + PARITY = SUBZERO ?

Proposition 9 says qualitative automata are a special case of the authors' automata with Qall=Q.

First, this is worth mentioning in the introduction!

Second, if I understand correctly, choosing a different Qall adds "parity" to the class of automata, and this would mean that the authors' automata might as well be called "qualitative parity" or "parity qualitative", somehow. I don't really care about names, but if this makes sense in principle, it would be good for everyone to have this in the introduction.

Not only would this relate to existing work, but then you could explain how the combination of both criteria makes the class of automata lose the regularity property. This, probably, makes your class of automata less elegant, but that's also worth mentioning.

I would suggest using "Regular Emptiness" instead of just "Emptiness" in the title, too, even if I admit that the abstract is clear enough on this matter.

This paper claims to prove decidability of emptiness for subzero automata. Zero automata are a special class of tree automata that the third author introduced in connection to Weak MSO + quantification restricted to finite sets. In particular, in previous work the third author showed that for every formula phi of the aforementioned logic there exists a zero automaton A_phi such that the following holds: phi is true iff A_phi accepts a non-empty language. Hence, if the emptiness problem for zero automata is decidable so is the logic. This would follow the classical Rabin strategy for MSO.

The relevance of the emptiness problem for zero automata is clear and definitely worth studying. The topic of the paper is maybe slightly on the too-theoretical side for ICE, but I am a fan, so I will support acceptance as long as the authors can fix two things:

- The paper is written in quite a hasty fashion and hence clarity has been compromised in many points. This needs to be fixed, and I know the authors are very capable of doing so... In particular, the proofs have to be fully spelled out (otherwise this reads more like a short paper than a full one).
- Footnote 1 has to be made concrete and precise. How likely is it that you can solve the original problem? How interesting is the subclass you study here in case you cannot solve the original problem? What happens in terms of the logic?

Overview

This brief announcement of a currently submitted paper presents a modeling framework for systems of cooperating autonomous vehicles. The framework is composed of a low level based on hybrid automata to model the dynamics, and a high level based on the pi-calculus to model communication. A case study of a vehicle joining a platoon is discussed.

Evaluation

Good points:

- the paper is clear and well-written
- formal modeling of realistic systems is relevant

Bad points:

- the model is used just for simulation, not for proving properties
- some more discussion of the model would help in understanding its significance and the peculiarities of the chosen setting. For instance, are there similar models which apparently make sense, but do not work? What are the challenges in building such a model? Are there models that work only for some low-level implementations? I would have expected timing issues to be important; why is this not the case? I think highlighting this kind of relation with the specific domain would be of interest for the ICE audience, more familiar with the pi-calculus than with the automotive setting.

Overview

The authors describe an approach for modeling reconfiguring vehicular systems where vehicles communicate and communication relationships between cars change over time.

The approach is based mainly on the pi-calculus, although an interface to models describing the more low-level dynamics of each vehicle can also be defined.

Evaluation

Points in favor:

- It is an interesting application for the pi-calculus
- Discussions about the modeling methodology at ICE could be very interesting.

Points of critique:

- Several parts of the modeling methodology need clarification:

- why is it good to combine high level communication and reconfiguration behavior models with "low-level" vehicular dynamics models? I suppose that the latter models are only relevant for simulation and less for formal proofs (?)
- The paper shows parts of process specifications derived straight from a vague description of the system's behavior. I'm having difficulties understanding the intuition behind the models shown. In my opinion, there are many requirements/systems-engineering questions to be clarified first, like: where is the boundary of the system with its environment? What are the shared phenomena? What are the assumptions on the environment?

I think the paper could be improved by describing a bit more the relevant behavioral aspects of vehicular systems, and also the properties to be verified.

This paper presents a model for solving a reconfiguration problem in the context of autonomous vehicles. The main contribution consists in a pi-calculus model that corresponds to the high-level interaction of the vehicles in order to achieve a platoon pattern.

I think that the contents of this brief announcement are well suited for ICE; however, I would advise the authors to put more emphasis on the pi-calculus model itself. Namely, aspects such as challenges, limitations, related work and/or correctness are of great interest for this kind of audience.

This paper introduces a semantics for the global view of choreographies. Global views are generated by a syntax and inductively assigned relations between sets of events, combinatorially represented as hypergraphs. The authors propose a notion, called "reflection", that allows one to define active and passive participants in a choice: it plays a key role in giving a sound semantics of the choice operator, via a "well-branching" condition.

The last part of the paper relates the global view semantics to the local view of choreographies, here expressed in terms of communicating finite state machines. The authors demonstrate adequacy by showing that, if the semantics [[G]] of a global view G is defined, then its local projections form a deadlock-free system. A second theorem shows that projections yield refinements of global views, i.e. there is an inclusion of execution languages. This result substantiates the authors' claim that, in contrast to other approaches, their account of global views is more abstract and has an independent status with respect to the semantics of local views.

This paper appears as a well-rounded contribution resting on well-pondered methodological principles, illustrated by the authors in the first two sections. My only complaint is that the technical developments of the hypergraph semantics are rather intricate; I easily got lost with the many caveats, notations and auxiliary definitions. Admittedly, I am not particularly knowledgeable about choreographies, so it may very well be that this level of sophistication is the standard in this research area. All I can say is that I found the technicalities quite heavy.

As I mentioned in the ICE forum, my wishful impression is that the hypergraph semantics may turn out to be simpler, and its caveats more canonical, if formulated in categorical terms. The ordering information of a hypergraph H (max, min, lst, fst) might be encoded in terms of a cospan L --> H <-- R, with L and R representing the left and right interfaces of H, along the same lines as the work of Rosebrugh, Sabadini, and Walters, ``Generic commutative separable algebras and cospans of graphs" (2005). The fact that K-events represent non-observable actions would amount to them never being part of the interfaces L and R. Then, one could attempt to formulate the gluing condition defining seq(R,R') as composition of these cospans, so that they form a category.

Speaking of improvements that are feasible in a proceedings version of the paper, I would suggest easing the understanding of sections 3-4-5 by splitting the existing examples and introducing them earlier, so that they are more interleaved with the technical developments. It may also be useful to complement the various definitions of those sections with intuitive explanations.

The paper introduces an abstract semantics of choreographies in terms of hypergraphs.

A choreography can be represented as a directed acyclic graph with a single entry/exit point and defines the actions that the participants execute to coordinate with each other.

In particular, the semantics of a choreography can be seen as a partial order representing the dependencies among the actions specified in the choreography.

In Section 7, the authors state the two main results of the paper.

The paper is well written, and the level of technical detail is high and appropriate for ICE. This sometimes does not help the non-expert reader to properly grasp the full content of the paper. My main concern is that it is not clear how the two theorems can help the design of choreographies, in particular the second one. I therefore suggest that the authors add some comments on this and relate the results of Section 7 to the claims made in the introduction.

The authors of this paper argue that the notion of "global view" relied upon by a commonly accepted (W3C) definition of choreographies lacks a clear definition. More precisely, it is implicitly defined in ad-hoc manners, based on the operational semantics of the underlying modelling formalisms used for the definition of local views. This state of affairs poses a number of problems, which the authors also discuss in the second section of the paper. They propose to address this problem by explicitly defining the notion of global choreography independently from that of the local view.

The proposed definition of global choreography relies on the primitive notion of interaction, A --m--> B, whereby participant A sends message m to participant B. Interactions are used to build choreographies using three operators: sequential and parallel composition, and choice.

The precise message-passing mechanism is intentionally left unspecified.

The semantics of such global choreographies is defined in terms of hypergraphs encoding a Lamport-style happens-before relation on events of the form AB!m and AB?m, representing the sending and reception of messages. Hypergraphs, as opposed to graphs, are necessary to distinguish between choice and parallel composition: in the former case only one branch is executed, as opposed to all branches in the latter. (Other formalisms, such as AND-OR graphs, could also be used.) A compliant local-view implementation of a global choreography must respect these constraints.
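To make the idea concrete, here is a minimal sketch (my own simplification for illustration, not the paper's formal definition) of events, a happens-before relation, and hyperedges recording choice. The representation of a choreography as a triple (events, order, hyperedges) and the full-ordering version of `seq` are my assumptions; the paper's actual gluing condition is more refined.

```python
# Illustrative sketch: events AB!m / AB?m with a happens-before order,
# where a hyperedge groups the alternative branches of a choice so that
# a compliant implementation executes only one of them.
from itertools import product

def interaction(a, b, m):
    # A --m--> B: a send event AB!m happening before a receive event AB?m.
    send, recv = f"{a}{b}!{m}", f"{a}{b}?{m}"
    return {send, recv}, {(send, recv)}, []        # (events, order, hyperedges)

def seq(g1, g2):
    # Sequential composition: every event of g1 precedes every event of g2
    # (a crude over-approximation of the paper's gluing condition).
    e1, o1, h1 = g1
    e2, o2, h2 = g2
    return e1 | e2, o1 | o2 | set(product(e1, e2)), h1 + h2

def par(g1, g2):
    # Parallel composition: union of events and orders, no extra constraints.
    e1, o1, h1 = g1
    e2, o2, h2 = g2
    return e1 | e2, o1 | o2, h1 + h2

def choice(g1, g2):
    # Choice: like par, plus a hyperedge marking the branches as exclusive.
    e1, o1, h1 = g1
    e2, o2, h2 = g2
    return e1 | e2, o1 | o2, h1 + h2 + [(frozenset(e1), frozenset(e2))]

g = seq(interaction("A", "B", "m"),
        choice(interaction("B", "C", "x"), interaction("B", "C", "y")))
events, order, branches = g
assert ("AB!m", "BC?x") in order   # seq adds cross-branch dependencies
assert len(branches) == 1          # the choice is recorded as one hyperedge
```

The hyperedge is what a plain graph cannot express: without it, the choice between the two B-to-C interactions would be indistinguishable from their parallel composition.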

The key difficulties in the above approach are 1) capturing the happens-before dependencies required by sequential composition of choreographies involving unrelated participants (such dependencies do not naturally appear in the Lamport formulation); 2) capturing the fact that all participants must be aware of the choices made in the global choreography, to ensure consistency of execution and deadlock-freedom. The first difficulty is addressed in the construction of semantic hypergraphs. The second is addressed by an additional "well-branched" criterion. This criterion is novel and, according to the authors, provides a unifying generalisation of several other criteria used in the choreography literature.

The authors conclude the paper by considering Communicating Finite State Machines (CFSMs) as an example of a local-view model. They prove that an implementation, obtained by projecting global choreographies onto CFSMs, is safe (it only admits behaviours that are compliant with the global choreography) and deadlock-free, provided that the semantic hypergraph is defined and well-branched.

Although the paper could benefit from some restructuring and simplification, as discussed on the ICE forum, the presented material is clear, conceptually interesting, and, to the best of this reviewer's limited knowledge, novel. It will contribute to an interesting discussion at the workshop.

Overview

This paper introduces a novel formal model by merging together Cellular Automata (CAs) and Nominal Sets. Cellular automata are state machines whose state space is given by an array of values over a fixed -- finite -- alphabet, and whose transition function describes the evolution of an arbitrary cell of the array based on the values of the cell itself and its neighbors. In Nominal Cellular Automata (NCAs), the alphabet is infinite and the transition function is equivariant (namely, a morphism in the category Nom).

The paper analyses the relationship between NCAs and Elementary Cellular Automata (ECAs), which are CAs whose array has dimension one and diameter (i.e., size of the neighborhood) 3. In the nominal setting, there are fewer elementary automata than in the classical case, and only 8 of the classical ones (displayed in Figure 4) can be "directly simulated" by Elementary Nominal Cellular Automata. However, when considering NCAs of dimension 1 and diameter 12, every ECA can be effectively simulated. Among these is a Turing-universal one (ECA 110), which can actually be simulated by an NCA of diameter 9.
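For readers unfamiliar with ECAs, the classical model the paper builds on is easy to sketch: each cell's next value is looked up from its 3-cell neighborhood in an 8-entry rule table. Below is a minimal implementation of one step of the Turing-universal rule 110 mentioned above, on a circular array (the circular boundary is my choice for self-containment).

```python
# One step of a classical Elementary Cellular Automaton (diameter 3).
# The 8-bit rule number encodes the output for each of the 8 possible
# (left, self, right) neighborhoods, read as a 3-bit index.

def eca_step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        # Neighborhood as a 3-bit number, with circular wrap-around.
        idx = cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

row = [0] * 10 + [1]        # a single live cell
row = eca_step(row)
assert row == [0] * 9 + [1, 1]   # rule 110 grows leftward from a lone cell
```

The nominal variant studied in the paper replaces the finite alphabet {0, 1} with an infinite set of names and restricts the rule to equality comparisons, which is why a larger diameter is needed to recover the full expressiveness of ECAs.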

The paper concludes with some nice speculations about the physical meaning of names in terms of particles and trajectories.

Evaluation

I have found this paper extremely cute and interesting. There are no theorems proved, but there are a lot of ideas which can be pushed forward. I am sure that the presentation of this work at ICE will be an excellent contribution to the workshop.

Overview

The authors introduce and study a variant of cellular automata where the colours form a nominal set. The computation of the next cells is defined by equivariant functions, meaning that it can only be based on equality between the previous cells. Several aspects of these new machines are studied, in particular numbering schemes, simulations of classical automata (using big enough contexts, but not crucially relying on fresh-name creation) and emergent behaviour. Several results and constructions have been implemented, and, based on a specific implementation of fresh-name creation, the behaviour is presented in terms of a range of pictures.

Evaluation

The paper is written very well, at the right level of detail, and accessible to a large audience, requiring no previous knowledge of nominal sets. There are some concrete results as well as an explorative investigation of emergent behaviour, and there is plenty of discussion and reflection. From reading the introduction, I felt that the choice of combining nominal automata with cellular automata seemed a bit arbitrary, but the paper shows that this combination can be interesting and raises plenty of questions. All in all, the paper is a real pleasure to read, and I am sure it has the potential for an excellent talk and discussion at the conference.

The authors describe nominal cellular automata (NCA), and in particular ENCA (E for elementary), following ECA. Without going deeply into the theory of nominal sets and formal definitions, the idea is that (E)NCA are like (E)CA with the possibility of having infinitely many values (names) at cells, but with "local" transitions based on simple equality patterns: a transition may change the value of a cell depending on the (pre-defined, finite) context of the cell, and the new value is either a name already appearing in the context or a fresh name.
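The idea of a local rule restricted to equality patterns can be sketched as follows. This is my own toy rule for illustration, not one from the paper: the only observation it makes about its diameter-3 neighborhood is whether the left and right names are equal, and the name it writes is either taken from the context or freshly created (here via a counter assumed disjoint from the initial names).

```python
# Toy nominal-style local rule: decisions depend only on equality
# between names in the neighborhood; outputs are context names or
# fresh names, never constants from a fixed alphabet.
import itertools

fresh = itertools.count(1000)   # assumed source of fresh names

def nca_step(cells):
    n = len(cells)
    out = []
    for i in range(n):
        left, mid, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        if left == right:              # the only test allowed: name equality
            out.append(mid)            # reuse a name from the context
        else:
            out.append(next(fresh))    # otherwise, create a fresh name
    return out

row = nca_step(["a", "b", "a", "c"])
assert row[1] == "b" and row[3] == "c"   # cells with equal neighbors survive
```

Because the rule never inspects what a name *is*, only how names compare, renaming the input consistently renames the output in the same way, which is the intuition behind equivariance.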

The authors provide a simple but interesting combinatorial discussion of (E)NCA, describe which ECAs can be directly simulated by ENCAs, and show that every ECA can be simulated by a diameter-12 NCA, which also yields the existence of a Turing-universal NCA.

Towards the end of the paper, the authors discuss particles (and other emergent behaviours) in NCA. This section only sets out the discussion topics and does not provide a full study.

The paper is very interesting, it is unusual, and in my opinion provides a fresh contribution to ICE 2016. I would like to see more motivation-discussion in the final version though, and inclusion of the answers to some reviewer's questions posed in the forum.