Lirui Wang, Yu Xiang, Dieter Fox
In robot manipulation, planning the motion of a robot manipulator to grasp an object is a fundamental problem. A manipulation planner needs to generate a trajectory for the manipulator that avoids obstacles in the environment and to plan an end-effector pose for grasping. While trajectory planning and grasp planning are often tackled separately, how to efficiently integrate the two planning problems remains a challenge. In this work, we present a novel method for joint motion and grasp planning. Our method integrates manipulation trajectory optimization with online grasp synthesis and selection, where we apply online learning techniques to select goal configurations for grasping and introduce a new grasp synthesis algorithm to generate grasps online. We evaluate our planning approach and demonstrate that our method generates robust and efficient motion plans for grasping in cluttered scenes.
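For context on the trajectory-optimization component the abstract describes, below is a minimal, self-contained sketch of a CHOMP-style waypoint update: gradient descent over a discretized trajectory with a smoothness term and an obstacle term. The 2-D point robot, the single circular obstacle, and all weights and step sizes are illustrative assumptions, not the paper's actual cost functions.

```python
import numpy as np

# CHOMP-style trajectory optimization on a 2-D point robot: gradient
# descent over discretized waypoints with a smoothness term and an
# obstacle term. Everything here (robot, obstacle, weights) is an
# illustrative toy, not the paper's actual formulation.

n = 20
traj = np.linspace([0.0, 0.0], [1.0, 1.0], n)  # straight-line initialization
obstacle, radius = np.array([0.5, 0.4]), 0.2   # one circular obstacle

for _ in range(200):
    # Smoothness gradient: discrete second derivative (penalizes bends).
    smooth_grad = np.zeros_like(traj)
    smooth_grad[1:-1] = 2.0 * traj[1:-1] - traj[:-2] - traj[2:]

    # Obstacle gradient of the hinge cost 0.5 * (radius - dist)^2 inside
    # the obstacle; it pushes waypoints radially outward.
    diff = traj - obstacle
    dist = np.linalg.norm(diff, axis=1, keepdims=True)
    obs_grad = np.where(dist < radius, -(radius - dist) * diff / dist, 0.0)

    # Gradient step; the start and goal configurations stay fixed.
    traj[1:-1] -= 0.05 * (smooth_grad + 5.0 * obs_grad)[1:-1]

print("min distance to obstacle:", np.linalg.norm(traj - obstacle, axis=1).min())
```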
Start Time | End Time
---|---
07/14 15:00 UTC | 07/14 17:00 UTC
Overall I think the paper is on its way to being a good contribution, but there are still a few things to be addressed:

I - Presentation and approach

- Related work: Some highly relevant work seems to be missing, including [a]. In the intro, two categories of grasping are mentioned, but the second one does not cite anything. On the other hand, there is a discussion of TAMP in related work that I don't think is particularly relevant to the problem here.
- The primary difficulty I have with reviewing this paper is the overloaded use of 'online'. Throughout the paper it is often ambiguous which aspects of the algorithm and its evaluation happen online vs. offline. For the rest of this review I am going to assume the following, based on what I could gather from multiple reads of the paper: the entire approach of optimizing the trajectory and selecting the grasp happens offline given a new scene and problem, and once a plan is found it is executed open loop without any replanning. Given this, 'online' should also be dropped from the title.
- This ambiguity could be resolved by stepping through the approach on an example problem. For instance, in Fig. 2 the robot appears to be in different places at various steps of the algorithm. If an offline trajectory is being solved for, discussing the steps in the context of the whole trajectory would make sense. Is the loop in Fig. 2 an iteration of the offline algorithm, or is it taking online execution steps that provide feedback?
- In Eq. (8), g is referred to as the goal configuration at the i-th iteration. If only one goal is used, isn't this just vanilla CHOMP rather than goal-set CHOMP?
- Figures: Fig. 1 does not make it very clear what is happening. Maybe just show a few waypoints with the robot and add the trajectory taken by the end effector. In Fig. 6, none of the green or red elements mentioned are clearly visible from those viewpoints.

II - Evaluation

- Overall, the evaluation is unable to highlight the strengths of the approach. Comparing Tables I and III, the grasp synthesis component does not seem to add much improvement. The critical part of the approach is then just goal-set CHOMP plus grasp selection from the goal set with something like mirror descent (a toy sketch of what I take this selection step to be follows at the end of this review). This makes the contribution seem incremental and weak at best.
- Parameter tuning and its effect on the algorithm: the ablation for \lambda, while appreciated, is less useful since this parameter has already been explored in CHOMP. Discussion of \gamma, \alpha, and \beta would be more helpful.
- How are body points uniformly sampled on the robot surface?
- Is the 'Execution' success rate a percentage of the runs that first succeeded in 'Planning'?
- Reporting standard deviations on the results would be useful.

III - Limitations

A limitations section could be included to possibly discuss the following:

- The trajectory tail is approximated with linear interpolation: how accurate is this? What if the trajectory is in collision or violates other constraints?
- For a novel application, how many grasps in G are necessary, and how does performance scale with the size of G?
- How is fighting between competing objectives resolved? For example, reaching an object vs. avoiding an obstacle close to it.
- How would the simulated performance translate to the real world?

[a] Berenson, D., Srinivasa, S., & Kuffner, J. (2011). Task space regions: A framework for pose-constrained manipulation planning. The International Journal of Robotics Research, 30(12), 1435-1460.
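To make concrete what I take the grasp-selection step to be, here is a minimal, self-contained sketch of mirror descent (exponentiated gradient) over a discrete set of candidate goal configurations, interleaved with a simple trajectory update. The 2-D "configurations", the cost, and the step sizes are my own illustrative choices, not the paper's formulation.

```python
import numpy as np

# Toy sketch of grasp selection as mirror descent (exponentiated gradient)
# over a discrete set of candidate goal configurations, interleaved with a
# trajectory update. The 2-D "configurations", the cost, and the step
# sizes are illustrative choices, not the paper's actual formulation.

rng = np.random.default_rng(0)
goals = rng.uniform(-1.0, 1.0, size=(5, 2))  # 5 candidate goal configs
traj_end = np.zeros(2)                        # current trajectory endpoint
weights = np.ones(len(goals)) / len(goals)    # uniform prior over goals
eta = 0.5                                     # mirror-descent step size

for _ in range(50):
    # Per-goal cost: distance from the current trajectory end to each goal.
    costs = np.linalg.norm(goals - traj_end, axis=1)
    # Exponentiated-gradient update (mirror descent with a KL regularizer).
    weights *= np.exp(-eta * costs)
    weights /= weights.sum()
    # Re-aim the trajectory at the currently best-weighted goal.
    traj_end += 0.2 * (goals[np.argmax(weights)] - traj_end)

print("selected goal:", np.argmax(weights), "weights:", np.round(weights, 3))
```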
The summary is above. The paper is well written and clear. Several variations of the algorithm are presented and evaluated. I liked that it acknowledges the presence of local minima in grasp refinement (ISF) and therefore the need to choose discretely among targets. I have a few small suggestions.

- In a number of cases, the paper and video use "optimal" when "locally optimal" or "best among our options" should be used instead.
- There is no discussion of run-times (except one point suggesting that MD works better but is slower).
- Somewhere it should be acknowledged that these beautiful motions probably won't work in the presence of sensing uncertainty.
- If one imagines a depth sensor on the end-effector, one could use the online learning to refine the grasp in the presence of uncertainty. But in that case, one would want to consider information gathering as part of the process. Possible future work...