Tverberg’s theorem is a result from discrete geometry. It states that any set of (k-1)(d+1)+1 points in d-dimensional space can be partitioned into k disjoint subsets whose convex hulls all share a common point. The paper at hand generalizes this theorem: given a larger set of points (r + 1 additional points), a partition into k subsets can be found such that, after removing any r of the points, the convex hulls of the remaining parts still intersect.
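For k = 2 and d = 2 the theorem specializes to Radon’s theorem: any (2-1)(2+1)+1 = 4 points in the plane split into two parts with intersecting convex hulls. The following is a minimal sketch of this special case (not the paper’s construction), deriving the partition from an affine dependence sum(c_i·p_i) = 0, sum(c_i) = 0 found by plain Gaussian elimination; the function names are my own.

```python
# Sketch of the k = 2, d = 2 case of Tverberg's theorem (Radon's theorem):
# 4 points in the plane split into two parts whose convex hulls intersect.
# The split comes from an affine dependence
#   sum(c_i * p_i) = 0,  sum(c_i) = 0,  c != 0:
# points with c_i > 0 form one part, the rest form the other.

def null_vector(A):
    """Return a nonzero solution c of A @ c = 0 for a 3 x 4 matrix A,
    via Gaussian elimination with partial pivoting."""
    m, n = len(A), len(A[0])
    A = [row[:] for row in A]
    pivot_cols = []
    r = 0
    for col in range(n):
        piv = max(range(r, m), key=lambda i: abs(A[i][col]))
        if abs(A[piv][col]) < 1e-12:
            continue  # no pivot here: this column stays free
        A[r], A[piv] = A[piv], A[r]
        pv = A[r][col]
        A[r] = [x / pv for x in A[r]]
        for i in range(m):
            if i != r:
                f = A[i][col]
                A[i] = [x - f * y for x, y in zip(A[i], A[r])]
        pivot_cols.append(col)
        r += 1
        if r == m:
            break
    free = next(c for c in range(n) if c not in pivot_cols)
    sol = [0.0] * n
    sol[free] = 1.0
    for i, pc in enumerate(pivot_cols):
        sol[pc] = -A[i][free]
    return sol

def radon_partition(points):
    """Split 4 planar points into two parts with intersecting convex
    hulls; also return a witness point lying in both hulls."""
    rows = [[p[0] for p in points],   # sum(c_i * x_i) = 0
            [p[1] for p in points],   # sum(c_i * y_i) = 0
            [1.0] * len(points)]      # sum(c_i)       = 0
    c = null_vector(rows)
    part1 = [p for p, ci in zip(points, c) if ci > 0]
    part2 = [p for p, ci in zip(points, c) if ci <= 0]
    # The convex combination of the positive side equals the same
    # combination of the negative side: a point in both hulls.
    pos = sum(ci for ci in c if ci > 0)
    witness = tuple(sum(ci * p[j] for p, ci in zip(points, c) if ci > 0) / pos
                    for j in range(2))
    return part1, part2, witness

# Example: (1, 1) lies in the hull of the other three points.
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (1.0, 1.0)]
print(radon_partition(pts))
```

Here the witness point (1, 1) is the common point of the two hulls, confirming the k = 2 case on a concrete planar example.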
The paper starts with a short introduction to Tverberg’s theorem, its mathematical history, and previously discovered generalizations. In the following section, the authors present the necessary definitions and notation, along with the principal lemma used to prove the main theorem. The third section contains the proof of the generalized theorem and concludes with a conjecture that the number of points in the theorem is tight, that is, that with even one point fewer the statement no longer holds. The last section is a remark on the proof technique, illustrating that the lemma of the second section is indeed necessary, because an extension of the Bárány–Lovász theorem, which is usually used in such proofs, would not work here.
This very short paper would be understandable to most readers only with additional background reading. It would have been helpful to include examples illustrating the content of the theorem, for instance in the two-dimensional case. Furthermore, several other theorems are referenced but not described in the paper. This may be sufficient for a specialist in the field with ready access to the relevant literature, but it leaves less specialized readers without enough information. From a computer science perspective, it would also be valuable to know how to compute the respective partitions from a given set of points; it is very hard to extract an algorithm from the proofs.