
To start, let us introduce the Steiner system (after Jakob Steiner, 1796–1863) with parameters $(n, r, t)$. There is a set of $n$ elements, and one would like to find a collection $F$ of subsets of size $r$ so that every $t$-tuple of elements appears in exactly one member of the collection. Since each $r$-tuple contains ${r \choose t}$ $t$-tuples, it is clear that ${r \choose t}$ must divide ${n \choose t}$. More generally, for any $0 \le i \le t$, ${r-i \choose t-i}$ must divide ${n-i \choose t-i}$ (exercise). Is this necessary divisibility condition sufficient?
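The divisibility conditions are easy to test by machine. Here is a small sketch (the function name is mine, not from any standard library) that checks them; for triple systems ($r=3$, $t=2$) it recovers the classical fact that the conditions hold exactly when $n \equiv 1$ or $3 \pmod 6$.

```python
from math import comb

def divisibility_ok(n, r, t):
    """Necessary conditions for a Steiner system with parameters (n, r, t):
    for every 0 <= i <= t, C(r-i, t-i) must divide C(n-i, t-i)."""
    return all(comb(n - i, t - i) % comb(r - i, t - i) == 0
               for i in range(t + 1))

# For Steiner triple systems (r = 3, t = 2) this picks out
# exactly the n congruent to 1 or 3 mod 6.
print([n for n in range(3, 22) if divisibility_ok(n, 3, 2)])
# → [3, 7, 9, 13, 15, 19, 21]
```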

A famous related problem was posed by Kirkman (1850), known as the “15 schoolgirls problem”: there is a class of 15 students at a private girls’ school. The girls want to walk in groups of three every day of the week, in such a way that every two girls walk with each other exactly once. Can you form a schedule for them? (First exercise: check that they need the full week, not five days, to do this.) The extra requirement here is that the Steiner system must also decompose nicely: the triples split into days, with the triples of each day partitioning all the girls.
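The first exercise is pure counting, and can be spelled out in a few lines: there are $\binom{15}{2} = 105$ pairs to cover, and each day covers $5 \cdot \binom{3}{2} = 15$ of them, so exactly seven days are needed.

```python
from math import comb

girls, group = 15, 3
pairs_total = comb(girls, 2)                     # 105 pairs must each occur once
groups_per_day = girls // group                  # 5 walking groups per day
pairs_per_day = groups_per_day * comb(group, 2)  # 5 * 3 = 15 pairs used per day
days = pairs_total // pairs_per_day
print(days)  # → 7, so a five-day schedule is impossible
```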

For certain sets of parameters, one can construct Steiner systems using algebraic structure. For instance, the projective plane over a finite field $F_q$ is a Steiner system with $t = 2$, $r = q+1$ and $n = q^2 + q + 1$, since every two points determine exactly one line. Many others come from the Mathieu groups. The existence of Steiner systems with given fixed $r, t$ and sufficiently large $n$ satisfying the divisibility condition was a famous open problem for a long time.
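For the smallest case $q = 2$ this is the Fano plane, and one can build it directly and verify the Steiner property; a minimal sketch (representing points as nonzero vectors of $F_2^3$, with a line being a triple of vectors summing to zero):

```python
from itertools import combinations

# Fano plane: projective plane over F_2, so r = q+1 = 3 points per line
# and n = q^2+q+1 = 7 points.  Points = nonzero vectors of F_2^3.
points = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)][1:]
lines = [set(L) for L in combinations(points, 3)
         if all(sum(coord) % 2 == 0 for coord in zip(*L))]

print(len(points), len(lines))  # → 7 7
# Steiner property with t = 2: every pair of points lies on exactly one line.
assert all(sum(p in L and q in L for L in lines) == 1
           for p, q in combinations(points, 2))
```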

A design is a generalization of a Steiner system, where one introduces a new parameter $\lambda$ and replaces the assumption that every $t$-tuple of elements appears in exactly one member of the collection by the assumption that every $t$-tuple appears in exactly $\lambda$ members. Construction of designs was a popular topic at some point, partially thanks to motivation from statistics (such as testing). There is an endless list of very clever constructions of designs for specific parameters. However, the general existence question remained open. An important case was famously solved by Wilson in the early 1970s, who answered the question positively for the case $t = 2$.
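A trivial but instructive example of a design with $\lambda > 1$: taking all $r$-subsets of an $n$-set gives a design in which every $t$-tuple lies in exactly $\lambda = \binom{n-t}{r-t}$ blocks. A quick check for $n=7$, $r=3$, $t=2$:

```python
from itertools import combinations
from math import comb

n, r, t = 7, 3, 2
blocks = list(combinations(range(n), r))  # take *all* r-subsets as blocks

# For every t-tuple, count the blocks containing it.
counts = [sum(set(T) <= set(B) for B in blocks)
          for T in combinations(range(n), t)]

print(set(counts))  # → {5}: every pair lies in C(n-t, r-t) = C(5, 1) = 5 blocks
assert set(counts) == {comb(n - t, r - t)}
```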

Last Friday, Peter Keevash, a young researcher in extremal and probabilistic combinatorics at Oxford, announced the complete solution of the general existence conjecture at a meeting in Oberwolfach (Germany). It was the highlight of the conference. For some reason, his talk was scheduled as a short talk (25′). Since the introduction/history/statement of the result (which is actually a more general statement about hypergraphs and complexes) already took more than 20′, naturally little could be said about the proof. I managed to catch him after the talk and asked for a few more details. Here is my rough understanding of his proof. The paper is not yet online but will be out very soon.

For simplicity, let us consider the case $\lambda = 1$ again (Steiner systems). First, we formalize the problem using hypergraph terminology. Consider a hypergraph $H$ whose vertices are the $t$-tuples of a set $X$ of $n$ elements (thus $H$ has ${n \choose t}$ vertices). The edges have size ${r \choose t}$: a collection of ${r \choose t}$ $t$-tuples forms an edge if they are exactly the $t$-tuples of some set of $r$ elements in $X$.
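This encoding is easy to make concrete. A sketch for $n=7$, $r=3$, $t=2$, where $H$ has $\binom{7}{2} = 21$ vertices (the pairs) and $\binom{7}{3} = 35$ edges (one per triple, each consisting of 3 pairs); a perfect matching in this $H$ is exactly a Steiner triple system on 7 points:

```python
from itertools import combinations
from math import comb

n, r, t = 7, 3, 2
X = range(n)
vertices = list(combinations(X, t))        # the C(7,2) = 21 pairs of X
edges = [frozenset(combinations(S, t))     # each r-set of X gives one edge:
         for S in combinations(X, r)]      # the set of its C(r,t) t-tuples

print(len(vertices), len(edges))           # → 21 35
assert all(len(e) == comb(r, t) for e in edges)
# A perfect matching here (7 disjoint edges covering all 21 pairs)
# is precisely an S(2,3,7), e.g. the 7 lines of the Fano plane.
```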

A matching in a hypergraph is a family of disjoint edges. A matching is perfect if it covers all vertices (clearly a divisibility condition needs to be satisfied for a perfect matching to exist). It is near perfect if it covers all but an $o(1)$ fraction of the vertices.

The existence of a Steiner system is equivalent to the existence of a perfect matching in $H$. The problem here is that it is very hard to find a sufficient condition for the existence of perfect matchings in a hypergraph (for graphs the problem has been solved completely).

The probabilistic method comes to the rescue. One can try to create a matching using the natural greedy randomized algorithm: pick a random edge, delete all edges intersecting it, and repeat. This approach was first used by Rodl in the 1980s to prove the existence of a near perfect matching (solving a conjecture of Erdős and Hanani). In his proof, Rodl picked several random edges at the same time (hence the method is dubbed “Rodl’s nibble”) and deleted all edges intersecting any of them. (If this is done carefully, the chosen edges are already disjoint with high probability.) What can be shown is that at the end of each round (nibble), one can still control the structure of the hypergraph, in particular the degree of each vertex, the codegree of any two vertices, and later the higher codegree of any set of at most $r$ vertices (the codegree of a set $U$ is the number of edges containing $U$). The use of random greedy algorithms in this manner was initiated by Ajtai, Komlos and Szemeredi a few years earlier. (Later, in 1995, Spencer showed that one can still analyze the algorithm which picks one random edge at a time.)
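The one-edge-at-a-time version of the process is a few lines of code. A toy sketch (my own simulation, not Rodl's actual analysis) run on the hypergraph encoding of $S(2,3,15)$: each step picks a uniformly random surviving edge and deletes everything meeting it, and one can then see what fraction of the $\binom{15}{2} = 105$ vertices the greedy matching covers.

```python
import random
from itertools import combinations

def greedy_matching(edges, seed=0):
    """Toy random greedy process: repeatedly pick a uniformly random
    surviving edge, add it to the matching, and delete every edge
    (including itself) that intersects it."""
    rng = random.Random(seed)
    alive = list(edges)
    matching = []
    while alive:
        e = rng.choice(alive)
        matching.append(e)
        alive = [f for f in alive if not (f & e)]
    return matching

# Hypergraph encoding S(2, 3, 15): vertices are pairs of a 15-set,
# edges are the pair-sets of the triples.
n = 15
edges = [frozenset(combinations(S, 2)) for S in combinations(range(n), 3)]
M = greedy_matching(edges)
covered = set().union(*M)
print(len(covered), "of", n * (n - 1) // 2, "pairs covered")
```

Typically the run stalls short of a perfect matching, which is exactly the loss-of-control-at-the-end phenomenon discussed below.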

The main problem with this approach is that near the end, our control of the hypergraph is very poor. There are papers subsequent to Rodl’s, by Kostochka and Rodl, Grable, Alon, Kim and Spencer, and myself (see this for a brief survey), which analyzed the process more carefully and improved the $o(1)$ error term to a polynomial one. However, once the number of remaining vertices (not covered by the matching produced by the randomized algorithm) becomes very small, the hypergraph induced on them (which consists of the edges surviving all deletions so far) loses all structure, and it becomes impossible to analyze the algorithm any further.

Peter found a beautiful way to get around this problem. The intuition is roughly as follows. We do not start the algorithm immediately, but first create an auxiliary hypergraph $H'$ on a subset of the vertices, say $V' \subset V$, which has a very nice structure, with many perfect matchings on $V'$. This hypergraph will be used at the end to offset the loss when the randomized algorithm starts to falter. What happens at the end of the algorithm is that one can no longer guarantee that the chosen edges are disjoint: there will be several vertices covered by, say, two edges. We then turn back to the auxiliary hypergraph created at the beginning and switch out these edges using the disjoint edges of $H'$. In spirit, this is somewhat close to the absorbing method introduced by Rodl, Rucinski and Szemeredi.

This is, of course, a hugely oversimplified description which does not do justice to the real argument. Peter needs to consider a weighted version of the problem, construct a matrix instead of the (auxiliary) hypergraph, and run a delicate induction on complexes rather than on hypergraphs. I hope to have a more proper description after the paper is out. (Many thanks to Peter Keevash for useful comments.)


6 responses
1. Does Keevash’s construction provide an efficient algorithm?

• It probably gives an algorithm, just not sure it is an efficient one.

2. If this only works for sufficiently large n, does that mean we still don’t know if there are designs where r is close to n? Such as (n, (1-eps)n, t)?