Michael Amir, Alfred M. Bruckstein
We consider the problem of completely covering an unknown discrete environment with a swarm of asynchronous, frequently-crashing autonomous mobile robots. We represent the environment by a discrete graph, and task the robots with occupying every vertex and with constructing an implicit distributed spanning tree of the graph. The robotic agents activate independently at random exponential waiting times of mean $1$ and enter the graph environment over time from a source location. They grow the environment's coverage by "settling" at empty locations and aiding other robots' navigation from these locations. The robots are identical and make decisions driven by the same simple and local rule of behaviour. The local rule is based only on the presence of neighbouring robots, and on whether a settled robot points to the current location. Whenever a robot moves, it may crash and disappear from the environment. Each vertex in the environment has limited physical space, so robots frequently obstruct each other. Our goal is to show that even under conditions of asynchronicity, frequent crashing, and limited physical space, the simple mobile robots complete their mission almost surely in linear time, and time to completion degrades gracefully with the frequency of the crashes. Our model and analysis are based on the well-studied "totally asymmetric simple exclusion process" in statistical mechanics.
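For readers who want a concrete picture of the setting, the following is a minimal, illustrative event-driven simulation in the spirit of the abstract. It is not the paper's Algorithm 1: the grid environment, the crash probability, the assumption that a mobile robot may momentarily share a vertex with a settled one, and the random-walk-to-the-frontier settling rule are all choices made for this sketch only.

```python
# Illustrative sketch only (NOT the paper's Algorithm 1): asynchronous robots with
# independent rate-1 Poisson clocks enter a grid from a single source, exclude each
# other, may crash on every move, and settle to cover vertices. All parameters and
# the settling rule below are assumptions made for illustration.
import heapq
import random

random.seed(1)

N = 6                       # assumed environment: an N x N grid graph
SOURCE = (0, 0)             # single entry vertex
CRASH_PROB = 0.05           # assumed probability that a move destroys the robot

def neighbours(v):
    x, y = v
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for (a, b) in cand if 0 <= a < N and 0 <= b < N]

settled = set()             # vertices covered by settled robots
position = {}               # mobile robot id -> current vertex
mobile_at = set()           # vertices holding a mobile robot (exclusion among mobile robots)
events = []                 # min-heap of (next activation time, robot id)
next_id = 0

def spawn(now):
    """Admit a new robot at the source if no mobile robot is standing there."""
    global next_id
    if SOURCE not in mobile_at:
        position[next_id] = SOURCE
        mobile_at.add(SOURCE)
        heapq.heappush(events, (now + random.expovariate(1.0), next_id))
        next_id += 1

spawn(0.0)
t = 0.0
while len(settled) < N * N:
    t, rid = heapq.heappop(events)
    if rid in position:                       # skip events of crashed/settled robots
        v = position[rid]
        if v not in settled:
            settled.add(v)                    # settle: cover this vertex for good
            mobile_at.discard(v)
            del position[rid]
        else:
            free = [u for u in neighbours(v) if u not in mobile_at]
            frontier = [u for u in free if u not in settled]
            targets = frontier or free        # prefer uncovered neighbours, else wander
            if targets:
                u = random.choice(targets)
                mobile_at.discard(v)
                if random.random() < CRASH_PROB:
                    del position[rid]         # the robot crashes mid-move
                else:
                    position[rid] = u
                    mobile_at.add(u)
            if rid in position:               # schedule the next exponential activation
                heapq.heappush(events, (t + random.expovariate(1.0), rid))
    spawn(t)                                  # keep streaming robots in from the source

print(f"covered all {N * N} vertices by (simulated) time {t:.2f}")
```

The sketch captures the asynchronous Poisson-clock timing, the stream of robots from a source, exclusion, and crashes on moves, but replaces the paper's pointer-based local rule with a simple "random walk over the covered region, settle at the first uncovered vertex reached" rule.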
Start Time | End Time
---|---
07/14 15:00 UTC | 07/14 17:00 UTC
The paper presents an interesting algorithm with a substantial analysis. However, I do have some suggestions for improvement.

1. The simulation results could be improved. They illustrate the claims about the algorithm's performance, but they are as abstracted as the underlying problem formulation and thus do not add appreciably to the strength of the paper. The simulations would be more effective if they reflected real robots in real environments, attempting to bridge the apparent abstraction-reality gap of the problem formulation. This could be done with open-source robot simulators such as Webots or ROS-Gazebo.

2. The simulation results do not compare against other algorithms. The authors acknowledge this and justify it on the grounds that no other algorithm appears to operate under the same assumptions as theirs. However, it would still be valuable to see the performance of other algorithms for qualitative comparison, even if the underlying assumptions differ.

3. The substance of the paper is the performance analysis of the algorithm. The authors do a good job of making the analysis digestible and giving intuition for non-experts in this area. However, it was not clear to this reviewer why a simpler analysis would not suffice. For example, if the timing randomness and robot failures were removed, and robots instead acted one at a time in a discrete-time, deterministic fashion according to the proposed algorithm, similar results would seem straightforward to obtain; introducing random robot failures on edge traversals would then also be relatively straightforward to analyze. Why does the random timing present such a challenge? Some higher-level intuition about why this analysis is challenging, and how it becomes simpler under different assumptions, would be helpful.
This paper presents a method for covering an unknown graph with crash-prone robots and no centralized control. This is an interesting problem that has not been treated in the literature. The authors claim that the method works on arbitrary graphs and that it requires only basic, indirect (implicit) communication between robots. I do believe this paper makes a decent contribution, but the authors would do well to rethink how the work is presented.

While I believe this problem and solution have some significance, I was left with many questions about the applicability of this approach, perhaps because the authors omit an explicit list of assumptions. Furthermore, I have some doubts about the claim that "there are no restrictions on G as long as its connected". Regarding the graph G, it seems from reading the work that G must have a specific structure, perhaps evenly spaced nodes such as on a grid, and that robots must be aware of this; otherwise it is impossible for robots that do not know the graph to find a "location" to fill. I have my doubts as to whether this works on a generic graph where edges vary in length. Perhaps if the authors instead claimed that there are no restrictions on the environment other than connectivity, this would make more sense, but it would clearly work best in rectilinear environments.

It is actually not until the end of the paper (in the conclusion!) that the authors finally state that the environment is discrete. Stating in the abstract that the environment is represented by a discrete graph and saying that the environment is discrete are two VERY different things. Furthermore, leaving that information out of the body of the paper until the conclusion makes for very difficult reading. Regardless, a discrete environment does not technically mean much, as it does not imply that the environment is divided perfectly into evenly-sized square cells.

The robots also must have some type of onboard sensing to perceive the environment and the distance between themselves and other robots. Unless, of course, they are moving blindly and may crash into obstacles, but it seems that the crashing the authors intended has more to do with instability or technical issues than with collisions with obstacles. The authors should clarify what they mean by "destructive mechanical faults" on pg. 1.

Other questions also arise regarding "marking" or "pointing to" other robots. What technology is this modeling? How would robots mark each other, and how do robots decide which direction to move in once they have found a settled robot? How do they know there is an adjacent node that is available? This question kept arising throughout the paper, even in Algorithm 1, where it states that "if a neighbor u of v contains no robots then attempt to move to u".

It is also unclear to me how the robots are activated at random exponential waiting times of mean 1 (one plausible reading is sketched after this review). I first took this to mean that one robot is activated randomly at waiting times of mean 1, and that once it is activated it does not go out of activation unless it crashes. Then I thought that the robots can be activated on and off throughout the execution. I am not exactly sure which it is. Unfortunately, all of these doubts lingered as I read through the paper.

The simulation and evaluation also leave some helpful information out. First, there is no discussion of the "robot simulator". This is not so important, as this is more theoretical work, but a little info would be helpful.
The authors also state that the results were averaged "over a number of simulations". What is this number? 2, 20, 200? This makes a very big difference in the credibility of your comparisons in Table II. I do believe this paper has a good contribution, but it needs to be rewritten. The authors would do well to consider what assumptions have been made and how best to present them, and what technology one could actually use to "mark" locations in a real robot system.
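For reference, here is a minimal sketch of the most common reading of "activated at random exponential waiting times of mean 1" in exclusion-process models (this is an interpretation, not something confirmed by the paper's text): each robot carries its own independent rate-1 Poisson clock and re-activates every time that clock rings, for as long as it remains in the system.

```python
# Sketch of the Poisson-clock reading of "exponential waiting times of mean 1"
# (an interpretation for illustration, not confirmed by the paper's text):
# a robot re-activates at every ring of its own independent rate-1 clock.
import random

def activation_times(horizon, rate=1.0, seed=0):
    """Successive activation times of one robot up to a time horizon."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)   # i.i.d. Exp(1) gaps => mean-1 waiting times
        if t > horizon:
            return times
        times.append(t)

print(activation_times(horizon=5.0))
```

Under this reading, activation is not a one-off event: every robot keeps acting at its own random times until it settles or crashes, and the clocks of different robots are mutually independent, which is what makes the timing asynchronous.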
The paper is generally easy to read and its technical content is illustrated clearly. The theoretical analysis appears to be sound. The achieved results are significant and contribute to advancing the state of the art, at least according to what is stated by the authors in Section I.

Personally, I am not convinced about the practical relevance of the proposed approach. The authors briefly mention that their “model in fact captures many of the relevant traffic phenomena that will occur in real life implementations”, but no other information is provided to assess how and why the proposed model could be useful in practice. Do the authors expect to implement robots behaving according to Algorithm 1, or do they intend to apply their results to improve current coverage approaches (and in that case, which approaches, and how could they be improved)?

The presentation is not linear and suffers from several repetitions between Sections I and II. Section I is a sort of extended abstract that discusses not only the context and motivation for the work, but also several details about assumptions and results (one example among many: the constant c is introduced in Section I and never mentioned in Section II). It would be better to significantly shorten Section I, moving most of its content to Section II (and merging it with what is already there).

It would be useful to show a representation of G(t) for the first and second stages of the example reported in Fig. 1. Fig. 2 is not referenced in the text.

The discussion of multiple source vertices (Section III) seems to imply that the different trees are built independently from the different sources. However, such trees will touch at some vertices. How are interactions between agents building different trees managed in these situations?

There is an extra space in the caption of Table II.