

Characterising forcing extensions

Last week at the CUNY set theory seminar I presented a proof of an old theorem of Bukovský, which characterises the pairs of models {M\subseteq V} for which {V} is a set-generic extension of {M}. It turns out that these are precisely the pairs satisfying a familiar covering property. This result, which seems to have been forgotten for a while, has attracted attention recently, particularly (as far as many of the people at CUNY are concerned) due to its use by Usuba in his work on set-theoretic geology and his proof of the downward directed grounds hypothesis. In my presentation I didn’t quite get to lay out everything as neatly as I would have liked, so I am writing this post in the hope of giving a fuller account.

As mentioned, the key property that Bukovský isolated is a kind of covering property between a pair of models. We may focus on the case of the relationship between the universe {V} and an inner model {M\subseteq V}; for us an inner model will be a transitive proper class model of ZFC. We work throughout in GBC (we can even dispense with class choice), so, in particular, there is no need for inner models to be definable.

Definition 1 Let {M\subseteq V} be an inner model and let {\kappa} be regular in {M}. We say that {M} uniformly {\kappa}-covers {V} if for any function {f\colon \alpha\rightarrow M} with {\alpha} an ordinal and {f\in V} there is another function {F\colon \alpha\rightarrow M} in {M} such that {f(\xi)\in F(\xi)} and {|F(\xi)|^M<\kappa} for all {\xi<\alpha}.

That is to say, {F\in M} covers the function {f\in V}, giving fewer than {\kappa} many guesses at each coordinate. This property is well-known to all students of forcing and all the introductory texts that I know present the following key fact.

Theorem 2 Let {\mathop{\mathbb P}} be a {\kappa}-cc poset and let {G\subseteq\mathop{\mathbb P}} be generic over {V}. Then {V} uniformly {\kappa}-covers {V[G]}.

Proof: Fix a function {f\colon \alpha\rightarrow V} in {V[G]} and a name {\dot{f}} for it. There is some {\gamma} such that it is forced that {\dot{f}} maps into {V_\gamma}. We can thus find, for each {\xi<\alpha}, a maximal antichain {Z_\xi\subseteq\mathop{\mathbb P}} whose conditions decide the value of {\dot{f}(\xi)}. Now we simply let {F(\xi)} be the set of possible values given by the conditions in {Z_\xi}; since each {Z_\xi} has size less than {\kappa} by the {\kappa}-cc, so does each {F(\xi)}. \Box
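
Written out, the covering function can be taken to be

\displaystyle F(\xi)=\{x\in V_\gamma;\ \exists p\in Z_\xi\ \ p\Vdash \dot{f}(\check{\xi})=\check{x}\}

and {|F(\xi)|\leq |Z_\xi|<\kappa} by the {\kappa}-cc.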

Bukovský’s remarkable result is that the uniform covering property precisely characterises the set-forcing extensions.

Theorem 3 (Bukovský, 1973) Let {M\subseteq V} be an inner model and let {\kappa} be regular in {M}. The following are equivalent:

  1. {V} is a {\kappa}-cc generic extension of {M}; that is, there are a poset {\mathop{\mathbb P}\in M} and a filter {G\in V} such that {\mathop{\mathbb P}} is {\kappa}-cc in {M} and {G\subseteq\mathop{\mathbb P}} is generic over {M} and {V=M[G]}.
  2. {M} uniformly {\kappa}-covers {V}.

We already saw the forward implication as theorem 2. The majority of the work, not surprisingly, goes into proving the converse. Starting from the hypothesis that {M} uniformly {\kappa}-covers {V}, the argument can be divided into three fairly self-contained steps:

  1. If {A\in V} is a set of ordinals (or even {A\subseteq M}) then {M(A)} is a {\kappa}-cc generic extension of {M}. Here {M(A)} is the least transitive model of ZFC extending {M} and containing {A}.
  2. There is a set of ordinals {A\in V} such that {M(A)} is a terminal {\kappa}-cc generic extension of {M}; that is, there is no inner model {N} with {M(A)\subset N\subseteq V} such that {N} is a {\kappa}-cc generic extension of {M(A)}.
  3. If {M(A)} is the terminal extension from the previous step, then {M(A)=V}.

As it turns out, step 1 is the hardest to prove. We shall therefore take the steps in reverse order, starting from step 3 and working our way back to step 1.

Step 3 follows quite easily from the other two. Specifically, assume that {M(A)\neq V}. Since all of our models satisfy choice, there must be a set of ordinals {B\in V\setminus M(A)}. If we knew that {B} was {\kappa}-cc generic over {M(A)}, this would contradict our assumption on the maximality of {M(A)}.

Lemma 4 Suppose {M} uniformly {\kappa}-covers {V} and {M\subseteq N\subseteq V} and {\kappa} remains regular in {N}. Then {N} uniformly {\kappa}-covers {V}.

Proof: Fix a function {f\colon \alpha\rightarrow N} in {V}. There is an ordinal {\gamma} such that the range of {f} is contained in {V_\gamma^N}. Let us pick an injection {\psi\colon V_\gamma^N\rightarrow \mathrm{Ord}} in {N}. Since {M} uniformly {\kappa}-covers {V}, we can find a covering function {F\in M} for {\psi\circ f}. But then the function {\xi\mapsto \psi^{-1}[F(\xi)]} is a covering function for {f} in {N}. \Box
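
Unwinding the definitions, for each {\xi<\alpha} we have

\displaystyle f(\xi)=\psi^{-1}(\psi(f(\xi)))\in\psi^{-1}[F(\xi)]\qquad\text{and}\qquad |\psi^{-1}[F(\xi)]|^N\leq |F(\xi)|^M<\kappa

using the injectivity of {\psi} and the fact that {M\subseteq N}.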

Returning to the argument from before, we can now apply step 1 to the pair {M(A)\subseteq V} to conclude that {B} really is {\kappa}-cc generic over {M(A)}, giving us the contradiction and completing step 3.

The key realisation for step 2 is the following lemma:

Lemma 5 Let {\mathop{\mathbb P}} be a nontrivial (atomless) separative {\kappa}-cc poset. Then {\mathop{\mathbb P}} adds a new subset of {\kappa}.

Proof: It suffices to prove that {\mathop{\mathbb P}} is not {\leq\kappa}-distributive. This is because, if {\mathop{\mathbb P}} adds a new function {f\colon\kappa\rightarrow\mathrm{Ord}}, then the range of {f} can be covered by a set of size {\kappa} in {V}, by theorem 2. Identifying this covering set with {\kappa}, the function {f} is then coded by a new subset of {\kappa}.

So now assume toward a contradiction that {\mathop{\mathbb P}} is {\leq\kappa}-distributive. This means that any family of at most {\kappa} many maximal antichains in {\mathop{\mathbb P}} has a common refinement. This allows us to build a tree {T} in {\mathop{\mathbb P}} as follows:

  • the root of {T} is the top condition of {\mathop{\mathbb P}};
  • every node in {T} has at least two immediate successors;
  • every level of {T} is a maximal antichain in {\mathop{\mathbb P}}.

We can build {T} inductively; the successor steps are easy and we use distributivity to pass through limit steps. Specifically, {\leq\kappa}-distributivity allows us to build the tree {T} up to height (at least) {\kappa+1}. But now take any condition {p} from the {\kappa}-th level of {T} and consider the branch it determines through {T}. Since every node on the branch is splitting, any one-off-the-branch antichain determined by this branch will have size {\kappa}. But this of course contradicts the {\kappa}-cc. \Box
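
Concretely, let {\langle p_\alpha;\alpha\leq\kappa\rangle} enumerate the branch determined by {p} and, for each {\alpha<\kappa}, let {q_\alpha} be an immediate successor of {p_\alpha} other than {p_{\alpha+1}}. Then

\displaystyle W=\{q_\alpha;\alpha<\kappa\}

is an antichain of size {\kappa}: if {\alpha<\beta} then {q_\alpha} and {p_{\alpha+1}} are distinct elements of the maximal antichain at level {\alpha+1}, while {q_\beta\leq p_\beta\leq p_{\alpha+1}}, so {q_\alpha} and {q_\beta} are incompatible.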

Seen differently, the proof basically shows that a {\kappa}-cc poset cannot be {(\kappa,\kappa)}-distributive.

Let {A\in V} be a set of ordinals coding {\mathcal{P}(\kappa)^V}, so that {\mathcal{P}(\kappa)^V\in M(A)}. We claim that {M(A)} is the required terminal {\kappa}-cc extension of {M}. The model {M(A)} is a {\kappa}-cc generic extension of {M} by step 1. On the other hand, if {M(A)} were not terminal, there would be a further {\kappa}-cc generic extension {M(A)\subseteq N\subseteq V} and, by lemma 5, there would be a subset of {\kappa} in {N\setminus M(A)}. But {M(A)} contains all the subsets of {\kappa} in {V} by construction. This completes step 2.

In order to deal with step 1 with some elegance we need to introduce some terminology.

Definition 6 Let {\kappa,\lambda} be cardinals. We shall denote by {{\mathbb B}(\kappa,\lambda)} the free {\kappa}-complete Boolean algebra on the generators {e_\alpha} for {\alpha<\lambda}.

By {\kappa}-complete we mean that suprema of sets of size less than {\kappa} exist. Freeness can be interpreted in two ways. We can think of it in terms of a universal property: any map of the generators into a {\kappa}-complete Boolean algebra extends to a {\kappa}-complete homomorphism on the whole of {{\mathbb B}(\kappa,\lambda)}. Alternatively, we can think of {{\mathbb B}(\kappa,\lambda)} as being built in stages, starting from the generators and at each stage taking complements and size {<\kappa} suprema of what we had constructed before, and modding out by the minimal obvious relations. This latter viewpoint suggests yet another one. We can also view the generators {e_\alpha} for {\alpha<\lambda} as representing the propositional formulas {\alpha\in \dot{A}}, where {\dot{A}} is a predicate, and {{\mathbb B}(\kappa,\lambda)} as the Lindenbaum algebra of infinitary formulas built from these atomic formulas via negations and size {<\kappa} disjunctions. In my talk I suggested that {{\mathbb B}(\kappa,\lambda)} could, for intuitive purposes, be replaced by {\mathrm{Add}(\kappa,\lambda)}, but this is misleading, as I will discuss after theorem 7.

Note that, if {\kappa<\kappa'}, then {{\mathbb B}(\kappa,\lambda)} embeds as a {\kappa}-complete subalgebra into {{\mathbb B}(\kappa',\lambda)}. We will henceforth see these algebras as nested. There is also another kind of nesting.

Theorem 7 Let {M} be an inner model. Then {{\mathbb B}(\kappa,\lambda)^M} embeds into {{\mathbb B}(\kappa,\lambda)^V} as an {M}-{\kappa}-complete subalgebra; that is, there is a homomorphism {f\colon {\mathbb B}(\kappa,\lambda)^M\rightarrow {\mathbb B}(\kappa,\lambda)^V} such that for any set {X\subseteq {\mathbb B}(\kappa,\lambda)^M} in {M} of size {|X|^M<\kappa} we have {f(\bigvee X)=\bigvee_{x\in X}f(x)}. In fact, {f} can be taken to extend the identity map on the generators.

In particular, the theorem implies that {f} maps small maximal antichains of {{\mathbb B}(\kappa,\lambda)^M} to maximal antichains in {{\mathbb B}(\kappa,\lambda)^V}. I will not give a proof of this result, but I want to point out a subtlety in the statement, which caused a lot of issues in my talk. It may seem that one can give a simple counterexample to the theorem. The one brought up in my talk was as follows: supposing that {\kappa} and {\lambda} are sufficiently large, let {u_r}, for each real {r\in M}, be the conjunction of the generators or their complements according to {r}. Then it should be the case that {\bigvee_r u_r=1} in {{\mathbb B}(\kappa,\lambda)^M}. But if {V} has more reals than {M}, then {\bigvee_{r\in M} f(u_r)\neq 1} in {{\mathbb B}(\kappa,\lambda)^V}, contradicting the theorem. The subtlety lies in the claim that {\bigvee_r u_r=1} in {{\mathbb B}(\kappa,\lambda)^M}. This would be the case if {{\mathbb B}(\kappa,\lambda)^M} were {(\omega,2)}-distributive, since then {\bigvee_r u_r=\bigwedge_n(e_n\lor\lnot e_n)=1}, but in general this will fail. In fact, by freeness, there is a nonzero {u\in {\mathbb B}(\kappa,\lambda)^M} which is compatible with each generator {e_n} and each complement {\lnot e_n} but incompatible with every {u_r}.
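
Spelled out, with the convention {e^1=e} and {e^0=\lnot e}, the elements in question are

\displaystyle u_r=\bigwedge_{n<\omega} e_n^{r(n)}\qquad\text{for } r\in(2^\omega)^M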

We now know that {{\mathbb B}(\kappa,\lambda)^M} is essentially an {M}-{\kappa}-complete subalgebra of {{\mathbb B}(\kappa,\lambda)^V}. This will allow us to pull down information from the algebra in {V} to the algebra in {M}. Foremost among these transfers is the following lemma.

Lemma 8 Subsets of {\lambda} in {V} correspond uniquely to {M}-{\kappa}-complete ultrafilters on {{\mathbb B}(\kappa,\lambda)^M}.

Proof: Given an ultrafilter {U}, we can simply assign to it the set {A=\{\alpha\in\lambda; e_\alpha\in U\}}. Conversely, given a set {A\subseteq\lambda} in {V} there is, by freeness, a unique {\kappa}-complete ultrafilter {U} on {{\mathbb B}(\kappa,\lambda)^V} which contains or omits the generators according to {A}. But then {U\cap {\mathbb B}(\kappa,\lambda)^M} is an {M}-{\kappa}-complete ultrafilter on {{\mathbb B}(\kappa,\lambda)^M}, since the latter is an {M}-{\kappa}-complete subalgebra of {{\mathbb B}(\kappa,\lambda)^V}. It is easy to check that these two assignments are inverse to each other. \Box

Given {A\subseteq\lambda} we will denote by {U_A^\kappa} the corresponding ultrafilter on {{\mathbb B}(\kappa,\lambda)^M}. Clearly {M(U_A^\kappa)=M(A)}, but there is no reason to believe that {U_A^\kappa} is in any way generic over {M}. In fact, {{\mathbb B}(\kappa,\lambda)^M} will not be {\kappa}-cc, so we have not quite finished with step 1 of the proof of theorem 3. We will use the uniform covering property to thin out {{\mathbb B}(\kappa,\lambda)^M} to a {\kappa}-cc poset for which the residue of {U_A^\kappa} will be generic.

Let {f_A} be the function that, given a maximal antichain {Z\subseteq {\mathbb B}(\kappa,\lambda)^M} in {M}, gives the element of the intersection {Z\cap U_A^\kappa}, if such an element exists, and gives some arbitrary element of {Z} if not. This function will, in general, only exist in {V}. But since {M} uniformly {\kappa}-covers {V}, we can find a covering function {F_A} for {f_A} in {M}; we may also assume that {F_A(Z)\subseteq Z} for all {Z}. Essentially, the function {F_A} represents {M} trying to guess where the potential generic filter coding {A} will meet the antichains of {{\mathbb B}(\kappa,\lambda)^M}. We will use this guess to remove the inessential parts of the antichain {Z}.

We work for a while in {M}. Let {\theta} be a sufficiently large cardinal (much larger than {|{\mathbb B}(\kappa,\lambda)|}). Define a subset of {{\mathbb B}(\theta,\lambda)} by

\displaystyle T_A=\left\{\bigvee Z\implies \bigvee F_A(Z); Z\subseteq {\mathbb B}(\kappa,\lambda)\text{ a maximal antichain}\right\}\subseteq {\mathbb B}(\theta,\lambda)

It should be noted that, while {\bigvee Z=1} in {{\mathbb B}(\kappa,\lambda)}, it need not be the case that {\bigvee Z=1} in {{\mathbb B}(\theta,\lambda)}.

We wish to see that {\bigwedge T_A\neq 0} in {{\mathbb B}(\theta,\lambda)}. This will follow if we can show that {(\bigvee Z\implies \bigvee F_A(Z))\in U_A^\theta} for all {Z}, by the {M}-{\theta}-completeness of {U_A^\theta}. If {\bigvee Z\notin U_A^\theta} then the implication belongs to {U_A^\theta} trivially, so assume that {\bigvee Z\in U_A^\theta}. By the completeness of {U_A^\theta}, we can find a {u\in Z\cap U_A^\theta}. But note that also {u\in Z\cap U_A^\theta\cap {\mathbb B}(\kappa,\lambda)= Z\cap U_A^\kappa}, so {f_A(Z)=u}. Therefore {u=f_A(Z)\in F_A(Z)}, so {u\leq \bigvee F_A(Z)} and {\bigvee F_A(Z)\in U_A^\theta}.
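
To summarise the completeness argument: since {T_A\in M} and, provided {\theta} was chosen large enough, {|T_A|^M<\theta}, the {M}-{\theta}-completeness of {U_A^\theta} gives

\displaystyle T_A\subseteq U_A^\theta,\qquad\text{hence}\qquad \bigwedge T_A\in U_A^\theta,\qquad\text{hence}\qquad \bigwedge T_A\neq 0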

We are ready to define the poset which will add the set {A} over the model {M}. Let

\displaystyle \mathop{\mathbb P}_A=\left\{u\in {\mathbb B}(\kappa,\lambda);0<u\leq \bigwedge T_A \text{ in } {\mathbb B}(\theta,\lambda)\right\}

In the Lindenbaum algebra view of {{\mathbb B}(\kappa,\lambda)}, the poset {\mathop{\mathbb P}_A} is essentially the Lindenbaum algebra where we add the infinitary inference rules coming from {T_A}; that is, from {\bigvee Z} we are allowed to infer {\bigvee F_A(Z)}.

Lemma 9 {\mathop{\mathbb P}_A} is {\kappa}-cc in {M}.

Proof: Work in {M} and let {Z\subseteq \mathop{\mathbb P}_A} be a maximal antichain. Then {F_A(Z)} is a subset of {Z} of size {<\kappa}. We shall show that any element of {Z} is compatible with some element of {F_A(Z)}; since distinct elements of the antichain {Z} are incompatible, it will follow that {Z=F_A(Z)} has size {<\kappa}.

So pick {u\in Z} and fix some {\theta}-complete ultrafilter {U} on {{\mathbb B}(\theta,\lambda)} with {u\in U}. Then also {\bigwedge T_A\in U}, so, in particular, {(\bigvee Z\implies \bigvee F_A(Z))\in U}. Since {u\leq \bigvee Z\in U} we also have {\bigvee F_A(Z)\in U} and, by {\theta}-completeness, there is some {u'\in F_A(Z)\cap U}. But then {0<u\wedge u'\leq \bigwedge T_A}, meaning that {u} is compatible with {u'} in {\mathop{\mathbb P}_A}. \Box

Lemma 10 The filter {G_A= U_A^\kappa\cap \mathop{\mathbb P}_A} is {\mathop{\mathbb P}_A}-generic over {M}.

Proof: Let {Z\subseteq \mathop{\mathbb P}_A} be a maximal antichain in {M}. Then {|Z|<\kappa}, so the join {\bigvee Z} exists in {{\mathbb B}(\kappa,\lambda)} and agrees with the join in {{\mathbb B}(\theta,\lambda)}. It follows that {\bigvee Z\in \mathop{\mathbb P}_A} and, since {Z} was maximal, {\bigvee Z} must be the top condition of {\mathop{\mathbb P}_A}. Then {\bigvee Z\in G_A}, so we can use the {\kappa}-completeness of {U_A^\kappa} to find a {u\in Z\cap G_A}. \Box

Since {A} and {G_A} are interdefinable over {M} (we have just constructed {G_A} from {A} and {\mathop{\mathbb P}_A} and it is easy to read off {A} from {G_A}), this ultimately shows that {M(A)=M[G_A]} is a {\kappa}-cc generic extension of {M}, finishing the proof of step 1.

Joint Laver diamonds I

I firmly believe that when one is stuck on a research problem one should tell as many people as possible about it, because one of two things will happen: either someone will solve your problem (and you will have contributed to the store of mathematical knowledge) or you will frustrate all of your mathematician friends. Either of those is a good thing. For this reason I’ve decided to write a couple of posts sketching my current project and some of the points of frustration.

The fundamental idea arose from the following fact about Laver functions on a supercompact cardinal, which was given to me as an exercise at some point by Joel Hamkins:

Theorem. If \kappa is supercompact then there are 2^\kappa (the maximal possible number) many Laver functions \langle \ell_\alpha;\alpha<2^\kappa\rangle such that this sequence is jointly Laver, i.e. such that for any \theta and any sequence \langle x_\alpha;\alpha<2^\kappa\rangle of sets in H_{\theta^+} there is a \theta-supercompactness embedding j\colon V\to M with critical point \kappa which satisfies j(\ell_\alpha)(\kappa)=x_\alpha for all \alpha.

It is not difficult to see that this is true; furthermore, it will be interesting to give an alternative proof of the weaker claim that there is a jointly Laver sequence of length \kappa. This can be accomplished by simply coding everything appropriately. Specifically, start with a single Laver function \ell\colon \kappa\to V_\kappa and let \ell_\alpha(\xi)=\ell(\xi)(\alpha). Given a sequence \vec{x}=\langle x_\alpha;\alpha<\kappa\rangle in H_{\theta^+}, we fix a \theta-supercompactness embedding j such that j(\ell)(\kappa)=\vec{x}. It is then easy to check that j(\ell_\alpha)(\kappa)=x_\alpha for all \alpha.
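
To verify the guessing property, note that j(\alpha)=\alpha for \alpha<\kappa=\mathrm{crit}(j), so applying j to the definition of \ell_\alpha gives

\displaystyle j(\ell_\alpha)(\kappa)=j(\ell)(\kappa)(j(\alpha))=\vec{x}(\alpha)=x_\alpha
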
For the 2^\kappa length case, we reindex our sequences to use elements of \mathcal{P}(\kappa) instead of 2^\kappa. Still working with a given Laver function \ell we define \ell_A(\xi)=\ell(\xi)(A\cap \xi) for A\in\mathcal{P}(\kappa). Given a sequence \vec{x}=\langle x_A;A\in\mathcal{P}(\kappa)\rangle in H_{\theta^+} we fix a (2^\kappa + \theta)-supercompactness embedding j such that j(\ell)(\kappa)=\vec{x} and check that this j makes everything work as required. If we had \theta<2^\kappa we should, at the end, also factor out a supercompactness embedding of the appropriate degree.
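
The reason the A\cap\xi coding works is that j(A)\cap\kappa=A for any A\in\mathcal{P}(\kappa), since \mathrm{crit}(j)=\kappa. Applying j to the definition of \ell_A then gives

\displaystyle j(\ell_A)(\kappa)=j(\ell)(\kappa)(j(A)\cap\kappa)=j(\ell)(\kappa)(A)=\vec{x}(A)=x_A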

There are two things we should take away from this proof:

  1. The short case was fairly easy, requiring merely some coding. This suggests that whenever we have any Laver function-like object on a cardinal \kappa we should have \kappa many joint such objects, whatever that might mean;
  2. In the long case we seemingly only used the (2^\kappa+\theta)-supercompactness of \kappa. If we then consider only partially supercompact cardinals, this raises the question whether there is any strength in having a length 2^\kappa jointly Laver sequence or whether such things just always exist (provided there is an appropriate Laver function in the first place).

As alluded to in point 1, questions about jointly Laver sequences make sense whenever a Laver function-like object makes sense. This ties together nicely with the various Laver diamond principles, introduced for many large cardinals in an (as yet unpublished) paper by Hamkins. Building on his work and also on the work of Apter-Cummings-Hamkins on the number of measures problem, some answers have been forthcoming.

Let me illustrate the main results about these joint Laver sequences in the case of measurable cardinals, where many of the interesting phenomena already occur. The supercompact case is fairly similar, with some complications.

To be concrete, if \kappa is measurable we call a function \ell\colon \kappa\to V_\kappa a Laver diamond (for measurability) if for every x\in H_{\kappa^+} there is an elementary embedding j\colon V\to M with critical point \kappa such that j(\ell)(\kappa)=x. We call a sequence \langle \ell_\alpha;\alpha<\beta\rangle a joint Laver diamond sequence (for measurability) if for every sequence \langle x_\alpha;\alpha<\beta\rangle of sets in H_{\kappa^+} there is an elementary embedding j\colon V\to M with critical point \kappa such that j(\ell_\alpha)(\kappa)=x_\alpha for every \alpha.

Theorem. If \kappa is measurable and has a Laver diamond then it has a joint Laver diamond sequence of length \kappa. In general, if \kappa is measurable then there is a forcing extension in which \kappa remains measurable and has a joint Laver diamond sequence of length \kappa.

This is quite simple. If there is a Laver diamond for \kappa then we can simply do the coding we did before and get a joint Laver diamond sequence. The point is that if there is no Laver diamond for \kappa we can always force to add one. This can be done in one of several (nonequivalent) ways, e.g. by Woodin’s fast function forcing or by first doing a preparatory forcing and then adding a Cohen subset to \kappa.

Theorem. If \kappa is measurable then there is a forcing extension in which \kappa remains measurable and has a joint Laver diamond sequence of length 2^\kappa.

This builds on the construction of adding a single Laver diamond. We first force the GCH to hold at \kappa if necessary. We next prepare by doing a Silver-style iteration up to \kappa where we add \gamma^+ many Cohen subsets to inaccessible \gamma. Finally we add \kappa^+ many Cohen subsets to \kappa. An argument as in the previous case shows that the Cohen subsets of \kappa can be decoded into a joint Laver diamond sequence, and since GCH still holds at \kappa at the end, there are 2^\kappa many. The crucial issue is showing that \kappa remains measurable after this forcing. The usual lifting argument via master conditions doesn’t work since the generic is too big to be distilled down to a master condition. To solve this we use what has been called in the literature the “master filter argument”, where instead of building a single master condition we build a descending sequence of partial master conditions, which encode larger and larger pieces of the generic. The construction is quite sensitive and exploits, among other things, the continuity of the embedding j at \kappa^+ (this becomes relevant in the supercompactness argument).

The fact that GCH holds at \kappa in the resulting model is unavoidable without stronger hypotheses. The following question is still open.

Question. Given a model where \kappa is measurable, GCH fails at \kappa and \kappa has a Laver diamond, is there a forcing extension preserving these facts where \kappa has a joint Laver diamond sequence of length 2^\kappa?

The final result on measurables is a separation of the conclusions of the previous two theorems. Therefore, while having a joint Laver diamond sequence of length \kappa is no weaker in consistency strength than having a joint Laver diamond sequence of length 2^\kappa, the outright implication still fails.

Theorem. If \kappa is measurable then there is a forcing extension in which \kappa remains measurable and has a joint Laver diamond sequence of length \kappa but no joint Laver diamond sequence of length \kappa^+.

The key observation here is that, in order to have a joint Laver diamond sequence of length \nu, there must be at least 2^\nu many normal measures on \kappa, since every binary \nu-sequence must be guessed by some embedding and, of course, each embedding corresponds to a single sequence. The argument now proceeds by first forcing to add a Laver diamond to \kappa as before and then using a result of Apter-Cummings-Hamkins by which we can force over a model with a measurable \kappa, preserving measurability but making \kappa carry only \kappa^+ many normal measures. By our earlier argument, \kappa cannot possibly have a joint Laver diamond sequence of length greater than \kappa in the extension. It then remains to check that the single Laver diamond survives this final forcing, and this gives us a joint Laver diamond sequence of length \kappa as in our first theorem above.
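
To spell out the counting: if \langle \ell_\alpha;\alpha<\nu\rangle is a joint Laver diamond sequence and s\in 2^\nu, then any embedding j_s guessing s (which, as is standard, we may take to be the ultrapower by a normal measure) satisfies

\displaystyle s(\alpha)=j_s(\ell_\alpha)(\kappa)\qquad\text{for all }\alpha<\nu

so distinct sequences yield distinct normal measures, of which there must therefore be at least 2^\nu many.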

The main gap in these results concerns the lack of control over 2^\kappa. One would like to be able to push 2^\kappa high and still talk about joint Laver diamond sequences of intermediate length. Of course, this requires higher consistency strength than merely measurability, but I would guess that we get equiconsistency at that level again.

Next time (whenever that might be) I will discuss similar results on (partially) supercompact cardinals and perhaps some others (like weakly compact or strong or strongly unfoldable).

Grounded Martin’s axiom

This is a short summary of some recent work on a principle I call the grounded Martin’s axiom. I gave a talk on this material in the CUNY Set Theory seminar a few days ago and a preprint will be available in the near future.

The grounded Martin’s axiom (or grMA) states that the universe V is a ccc forcing extension of some ground model W and that for any poset P\in W which is ccc in V and any collection \mathcal{D}\in V of less than continuum many dense subsets of P there is a \mathcal{D}-generic filter on P.

This concept appears naturally when one analyses the Solovay-Tennenbaum proof of the consistency of MA (with the continuum being a regular cardinal \kappa). There we iterate, in \kappa many steps, through all the available ccc posets of size <\kappa and use a suitable bookkeeping device to make sure that we have taken care of not only the posets in the ground model but also the posets that arise in all of the \kappa many intermediate models as well. This bookkeeping device (basically a bijection between \kappa and \kappa\times\kappa) will necessarily be wildly discontinuous and, in my opinion, distracts from the essence of the argument. Thus I have in the past suggested a reorganization of the proof which eliminates the need for (at least this part of the) bookkeeping by making the iteration slightly longer. Specifically, we construct a finite support iteration of length \kappa^2 as follows: starting in a suitable model (satisfying GCH or at least 2^{<\kappa}=\kappa) we iterate the \kappa many small ccc posets from this model, taking care to only take posets which remain ccc in the extension obtained so far; after the first \kappa many steps we repeat the process, considering now the small ccc posets in this extension. And we do it again and again, \kappa many times. The usual arguments show that what we get in the end is a model of MA and the continuum has size \kappa.

However, a new question now arises. Did we need to repeat this process \kappa many times? Did we need to repeat it at all? Might we already have MA after the first \kappa steps of the new iteration? The answer is no (assuming \kappa>\omega_1). To see why, notice that the forcing up to that point is an iteration of ground model posets, so it is basically a product. Since the forcing to add a single Cohen real will have inevitably appeared as a factor somewhere in this product, the model obtained is a Cohen extension of some intermediate model, but it is well known that MA fails in any Cohen extension where CH fails.

So MA fails in this model, but on the other hand, it looks perfectly crafted to satisfy grMA. Well, almost. What we have ensured by construction is that the restriction of grMA to posets of size less than \kappa holds. The same issue arises in the usual MA argument and an easy Löwenheim-Skolem argument shows that there the two versions are equivalent. We cannot simply transpose the argument to the present context since the appropriate elementary substructure of the poset is now in the wrong model, but fortunately a modification of the argument gives the analogous result for grMA.

Having now what might be called a canonical model of grMA, we can also determine some cardinal characteristics in this model. Since grMA clearly implies MA(Cohen) we must have \mathrm{cov}(\mathcal{B})=\mathfrak{c}, but since, as before, the model is obtained by adding \omega_1 many Cohen reals to an intermediate extension, we can also conclude that \mathrm{non}(\mathcal{B})=\omega_1 in this model. These two equalities now resolve the whole of Cichoń’s diagram and also show that grMA is less rigid than MA with respect to some of the smaller cardinal characteristics.

Another noteworthy observation is that, while MA implies that the continuum is regular, grMA is consistent with a singular continuum. In particular, it is possible in a model of grMA to have 2^{<\mathfrak{c}}>\mathfrak{c}, violating the generalized Luzin hypothesis. An interesting open question here is whether grMA implies that 2^{<\mathrm{cf}(\mathfrak{c})}=\mathfrak{c}. While this equality holds in the canonical model, I do not know whether it holds in general.

The remainder of the current results on grMA concern its robustness under forcing. It is known that MA is destroyed by very mild forcing, adding either a Cohen or a random real (assuming CH fails). At the same time some fragments of MA are known to be preserved by such forcing. To determine the behaviour of grMA under such forcing, a variation of termspace forcing was utilized.

Termspace forcing (due to Laver and possibly independently Woodin and other people) is a construction for taking a two step forcing iteration P\ast \dot{Q} and trying to approximate the poset named by \dot{Q} by a poset in the ground model. This gives the poset A(P,\dot{Q}), consisting of P-names which are forced by every condition to be in \dot{Q} and where \tau extends \sigma if this is forced by every condition. It can then be proved that forcing with A(P,\dot{Q}) adds a sort of doubly generic object for \dot{Q}. More precisely, forcing with A(P,\dot{Q}) gives a name which, when interpreted by any V-generic G for P, names a V[G]-generic for \dot{Q}^G. In particular, the iteration P\ast\dot{Q} embeds into the product A(P,\dot{Q})\times P.
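
In symbols (writing \Vdash_P for “forced by every condition”), the definition just described reads

\displaystyle A(P,\dot{Q})=\{\tau;\ \Vdash_P \tau\in\dot{Q}\},\qquad \tau\leq\sigma\ \text{ iff }\ \Vdash_P \tau\leq_{\dot{Q}}\sigma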

The crucial issue, however, is that A(P,\dot{Q}) might not have any nontrivial chain conditions. This is clearly problematic for us, since we are dealing with an axiom that concerns only ccc posets. To fix this flaw we need to restrict the names we consider in the termspace forcing and for this purpose the notion of finite mixtures is introduced. A finite mixture is a P-name for an element of the ground model which is decided by some finite maximal antichain (the term finite mixture suggests that these names are obtained by applying the mixing lemma to finitely many check names). The subposet A_{\mathrm{fin}}(P,\dot{Q}) of A(P,\dot{Q}), consisting only of finite mixtures, has a much better chance of having a good chain condition. In particular, it can be seen that if P is just the forcing to add a single Cohen real, then A_{\mathrm{fin}}(P,Q) is Knaster if Q is (here Q is assumed to be in the ground model). This is the key step in showing that grMA is preserved by adding a single Cohen real (in fact it is preserved with respect to the same ground model). By slightly modifying the notion of a finite mixture to exploit the measure theory involved, a similar approach also shows that grMA is preserved by adding a random real (again, even with respect to the same ground model).

The question still remains whether grMA is preserved when adding more generic reals. For example, what happens if we add \omega_1 many Cohen reals? The methods used for a single real hinge on certain antichain refinement properties of the Cohen poset which are no longer there when adding more reals. Similar questions can also be asked for random reals. In that case, at least, we do have an upper bound for preservation, as it is known that adding more than \mathfrak{c} many random reals will destroy MA(Cohen) and thus also grMA, but nothing is known about adding a smaller number.

Diamonds, clubs and sticks

I want to use this post to once and for all clear up any confusion I might have about what could be called the guessing principles of set theory. A secondary goal is to finally be able to claim that I have used the unique and amusing notation related to these principles.

I will restrict my attention to \omega_1; all of the principles generalize to larger cardinals (and restrict to stationary subsets etc.), but this basic case should be enough for an illustration.

So let’s start. I should first explain what I meant by “guessing principles”. Another fitting name would be “anticipatory principles”. Imagine a situation where one is performing an inductive construction of length \omega_1, with the goal being that the final object will satisfy some universal property. A reasonable strategy is then to diagonalize against all possible counterexamples while performing the construction. However, it might happen that there are simply too many possible counterexamples to deal with in \omega_1 many steps. Nevertheless, it is often the case that we do not in fact need the entire putative counterexample to prevent it becoming a true counterexample, but only some small fragment of it. If we can additionally ensure that these are never resurrected as possible counterexamples, our plan will go through. The only missing part is a coherent way of producing the fragments of possible counterexamples and here is where guessing principles come in.

The simplest guessing principle is \mathaccent\bullet\mid (called the stick principle). A \mathaccent\bullet\mid-sequence is a sequence \langle A_\alpha;\alpha<\omega_1\rangle of countably infinite subsets of \omega_1 such that every uncountable A\subseteq \omega_1 contains some A_\alpha. We say that \mathaccent\bullet\mid holds if there is a \mathaccent\bullet\mid-sequence.

It is easily seen that CH implies \mathaccent\bullet\mid. Indeed, CH implies that there are only \omega_1 many countably infinite subsets of \omega_1, so an enumeration of all of these can be taken for our \mathaccent\bullet\mid-sequence. On the other hand, \mathrm{MA}_{\omega_1} implies \lnot\mathaccent\bullet\mid (the hypothesis here can be considerably weakened). To see this, let \langle A_\alpha;\alpha<\omega_1\rangle be a sequence of infinite subsets of \omega_1 and consider the poset \mathrm{Add}(\omega,\omega_1), seen as adding a subset of \omega_1. For any given \alpha it is dense in this poset to exclude some element of A_\alpha from the generic object, and for any given \beta<\omega_1 it is dense to put an element above \beta into it. By \mathrm{MA}_{\omega_1} there is then an unbounded (hence uncountable) subset of \omega_1 containing no A_\alpha, so the sequence of the A_\alpha fails to be a \mathaccent\bullet\mid-sequence.
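The density argument can be written out explicitly as follows (a sketch; here conditions are viewed as finite partial functions p\colon\omega_1\to 2, a poset isomorphic to \mathrm{Add}(\omega,\omega_1)):

```latex
\begin{align*}
D_\alpha &= \{ p : p(\xi) = 0 \text{ for some } \xi \in A_\alpha \}
  && \text{dense, since } A_\alpha \text{ is infinite and } p \text{ is finite,}\\
E_\beta &= \{ p : p(\xi) = 1 \text{ for some } \xi > \beta \}
  && \text{dense for each } \beta < \omega_1 .
\end{align*}
% A filter G meeting all \omega_1 many of these sets yields the set
%   A_G = \{ \xi : p(\xi) = 1 \text{ for some } p \in G \},
% which is unbounded in \omega_1 (hence uncountable) and contains no A_\alpha.
```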

A stronger guessing principle is \clubsuit (called the club principle). A \clubsuit-sequence is a sequence of sets A_\alpha, indexed by limit ordinals \alpha<\omega_1, such that A_\alpha is a cofinal subset of \alpha and each uncountable subset of \omega_1 contains some A_\alpha. We say that \clubsuit holds if there is a \clubsuit-sequence.

Of course, \clubsuit implies \mathaccent\bullet\mid. On the other hand, since both \mathrm{CH}+\lnot\clubsuit and \lnot\mathrm{CH}+\clubsuit are consistent (as shown by Jensen and Shelah, respectively), \mathaccent\bullet\mid is in fact strictly weaker than \clubsuit.

We can, without too much effort, extract an apparently stronger formulation of \clubsuit. The claim is that a \clubsuit-sequence actually gets into any uncountable subset of \omega_1 stationarily often. To see this, let A\subseteq \omega_1 be uncountable and let C\subseteq\omega_1 be a club. By thinning out A if necessary, we may assume that every limit point of A is also a limit point of C, and is thus an element of C. Since the A_\alpha form a \clubsuit-sequence, we have A_\alpha\subseteq A for some \alpha. But A_\alpha is cofinal in \alpha, so \alpha is a limit point of A and therefore an element of C. Since C was arbitrary, the set of \alpha with A_\alpha\subseteq A is stationary.
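The thinning step can be made explicit as follows (a sketch; the ordinals a_\xi and c_\xi are my notation):

```latex
% Recursively choose, for \xi < \omega_1,
%   c_\xi \in C  with  c_\xi > \sup\{ a_\eta : \eta < \xi \},
%   a_\xi \in A  with  a_\xi > c_\xi ;
% both choices are possible since A is uncountable and C is unbounded. Let
A' = \{ a_\xi : \xi < \omega_1 \} \subseteq A .
% Between any two elements of A' there is an element of C, so every limit
% point of A' is a limit point of C. Moreover, any A_\alpha contained in A'
% is contained in A, so it suffices to guess A'.
```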

The third and most renowned guessing principle is \diamondsuit (the diamond principle). A \diamondsuit-sequence is a sequence of sets \langle A_\alpha;\alpha<\omega_1\rangle such that A_\alpha\subseteq \alpha and for each A\subseteq\omega_1 we have A\cap\alpha=A_\alpha for stationarily many \alpha. We say that \diamondsuit holds if there is a \diamondsuit-sequence.

It is not difficult to see that \diamondsuit implies \clubsuit. In fact, these two principles are equivalent in the presence of CH. To see this, fix an enumeration \langle S_\gamma;\gamma<\omega_1\rangle of all countable subsets of \omega_1 in which every set appears cofinally often (this is where CH is used), fix a \clubsuit-sequence \langle B_\alpha;\alpha<\omega_1\rangle, and define D_\alpha=\bigcup_{\gamma\in B_\alpha}S_\gamma. One can then show fairly easily that the D_\alpha form a \diamondsuit-sequence.
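One way the verification might go is the following (a sketch, with my notation: \langle S_\gamma\rangle is an enumeration of all countable subsets of \omega_1 with cofinal repetition, \langle B_\alpha\rangle is a \clubsuit-sequence, and D_\alpha=\bigcup_{\gamma\in B_\alpha}S_\gamma):

```latex
% Given A \subseteq \omega_1, use cofinal repetition to choose a strictly
% increasing map \gamma : \omega_1 \to \omega_1 with
%   \gamma(\beta) > \beta  and  S_{\gamma(\beta)} = A \cap \beta .
% The set E = \operatorname{ran}(\gamma) is uncountable, so by the stationary
% form of \clubsuit there are stationarily many \alpha with B_\alpha \subseteq E,
% and we may additionally take these \alpha closed under \gamma. For such
% \alpha, closure gives \{\beta : \gamma(\beta) < \alpha\} = \alpha, and since
% B_\alpha is cofinal in \alpha, the corresponding \beta are cofinal in \alpha:
D_\alpha = \bigcup_{\gamma(\beta)\in B_\alpha} S_{\gamma(\beta)}
         = \bigcup_{\gamma(\beta)\in B_\alpha} (A \cap \beta)
         = A \cap \alpha .
% Thus the D_\alpha guess A stationarily often, as required.
```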

There is a multitude of variations of \diamondsuit, where one is either allowed countably many guesses at each stage, or one attempts to guess club often or even guess club often while simultaneously guessing the club itself, but I think this short description will suffice for now.