Grounded Martin’s axiom

This is a short summary of some recent work on a principle I call the grounded Martin’s axiom. I gave a talk on this material in the CUNY Set Theory seminar a few days ago and a preprint will be available in the near future.

The grounded Martin’s axiom (or grMA) states that the universe V is a ccc forcing extension of some ground model W and that, for any poset P\in W which is ccc in V and any collection \mathcal{D}\in V of fewer than continuum many dense subsets of P, there is a \mathcal{D}-generic filter on P.
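Spelled out, the axiom has two clauses (this is just a restatement of the definition above, in my notation):

\[
\begin{aligned}
&\text{(1)}\quad V = W[G] \text{ for some poset } Q \in W \text{ which is ccc, and some } W\text{-generic filter } G \subseteq Q;\\
&\text{(2)}\quad \text{for every } P \in W \text{ which is ccc in } V \text{ and every } \mathcal{D} \in V \text{ consisting of fewer than } \mathfrak{c}\\
&\qquad\ \text{many dense subsets of } P, \text{ there is a filter on } P \text{ meeting every } D \in \mathcal{D}.
\end{aligned}
\]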

This concept appears naturally when one analyses the Solovay-Tennenbaum proof of the consistency of MA (with the continuum being a regular cardinal \kappa). There we iterate, in \kappa many steps, through all the available ccc posets of size <\kappa and use a suitable bookkeeping device to make sure that we have taken care of not only the posets in the ground model but also the posets that arise in all of the \kappa many intermediate models. This bookkeeping device (basically a bijection between \kappa and \kappa\times\kappa) will necessarily be wildly discontinuous and, in my opinion, distracts from the essence of the argument. Thus I have in the past suggested a reorganization of the proof which eliminates the need for (at least this part of) the bookkeeping by making the iteration slightly longer. Specifically, we construct a finite support iteration of length \kappa^2 as follows: starting in a suitable model (satisfying GCH or at least 2^{<\kappa}=\kappa) we iterate the \kappa many small ccc posets from this model, taking care to use only those posets which remain ccc in the extension obtained so far; after the first \kappa many steps we repeat the process, considering now the small ccc posets in this extension. And we do it again and again, \kappa many times. The usual arguments show that what we get in the end is a model of MA in which the continuum has size \kappa.

However, a new question now arises. Did we need to repeat this process \kappa many times? Did we need to repeat it at all? Might we already have MA after the first \kappa steps of the new iteration? The answer is no (assuming \kappa>\omega_1). To see why, notice that the forcing up to that point is an iteration of ground model posets, so it is essentially a product. Since the forcing to add a single Cohen real will inevitably have appeared as a factor somewhere in this product, the model obtained is a Cohen extension of some intermediate model, but it is well known that MA fails in any Cohen extension where CH fails.

So MA fails in this model, but on the other hand, it looks perfectly crafted to satisfy grMA. Well, almost. What we have ensured by construction is that the restriction of grMA to posets of size less than \kappa holds. The same issue arises in the usual MA argument and an easy Löwenheim-Skolem argument shows that there the two versions are equivalent. We cannot simply transpose the argument to the present context since the appropriate elementary substructure of the poset is now in the wrong model, but fortunately a modification of the argument gives the analogous result for grMA.

Having now what might be called a canonical model of grMA, we can also determine some cardinal characteristics in this model. Since grMA clearly implies MA(Cohen) we must have \mathrm{cov}(\mathcal{B})=\mathfrak{c}, but since, as before, the model is obtained by adding \omega_1 many Cohen reals to an intermediate extension, we can also conclude that \mathrm{non}(\mathcal{B})=\omega_1 in this model. These two equalities now resolve the whole of Cichoń’s diagram and also show that grMA is less rigid than MA with respect to some of the smaller cardinal characteristics.

Another noteworthy observation is that, while MA implies that the continuum is regular, grMA is consistent with a singular continuum. In particular, it is possible in a model of grMA to have 2^{<\mathfrak{c}}>\mathfrak{c}, violating the generalized Luzin hypothesis. An interesting open question here is whether grMA implies that 2^{<\mathrm{cf}(\mathfrak{c})}=\mathfrak{c}. While this equality holds in the canonical model, I do not know whether it holds in general.

The remainder of the current results on grMA concern its robustness under further forcing. It is known that MA is destroyed by very mild forcing: adding either a Cohen or a random real (assuming CH fails) suffices. At the same time, some fragments of MA are known to be preserved by such forcing. To determine the behaviour of grMA under these forcings, I used a variation of termspace forcing.

Termspace forcing (due to Laver, and possibly independently to Woodin and others) is a construction which takes a two-step forcing iteration P\ast \dot{Q} and attempts to approximate the poset named by \dot{Q} by a poset in the ground model. This gives the poset A(P,\dot{Q}), consisting of those P-names which are forced by every condition to be in \dot{Q}, and in which \tau extends \sigma if this is forced by every condition. It can then be proved that forcing with A(P,\dot{Q}) adds a sort of doubly generic object for \dot{Q}. More precisely, forcing with A(P,\dot{Q}) gives a name which, when interpreted by any V-generic G for P, names a V[G]-generic for \dot{Q}^G. In particular, the iteration P\ast\dot{Q} embeds into the product A(P,\dot{Q})\times P.
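In symbols, the termspace poset and its ordering just described are:

\[
A(P,\dot{Q}) = \{\tau : \Vdash_P \tau \in \dot{Q}\}, \qquad \tau \leq_{A(P,\dot{Q})} \sigma \iff \Vdash_P \tau \leq_{\dot{Q}} \sigma.
\]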

The crucial issue, however, is that A(P,\dot{Q}) might not satisfy any nontrivial chain condition. This is clearly problematic for us, since we are dealing with an axiom that concerns only ccc posets. To fix this flaw we need to restrict the names we consider in the termspace forcing, and for this purpose I introduce the notion of a finite mixture. A finite mixture is a P-name for an element of the ground model which is decided by some finite maximal antichain (the term suggests that these names are obtained by applying the mixing lemma to finitely many check names). The subposet A_{\mathrm{fin}}(P,\dot{Q}) of A(P,\dot{Q}), consisting only of finite mixtures, has a much better chance of satisfying a good chain condition. In particular, it can be seen that if P is just the forcing to add a single Cohen real, then A_{\mathrm{fin}}(P,Q) is Knaster whenever Q is (here Q is assumed to be in the ground model). This is the key step in showing that grMA is preserved by adding a single Cohen real (in fact it is preserved with respect to the same ground model). By slightly modifying the notion of a finite mixture to exploit the measure theory involved, a similar approach also shows that grMA is preserved by adding a random real (again, even with respect to the same ground model).
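To make the notion of a finite mixture concrete: unwinding the definition, \tau is a finite mixture precisely when the conditions in some finite maximal antichain each decide \tau to be a specific check name. In symbols (my rendering of the definition above):

\[
A_{\mathrm{fin}}(P,\dot{Q}) = \{\tau \in A(P,\dot{Q}) : \text{there are a finite maximal antichain } p_1,\dots,p_n \text{ in } P \text{ and } x_1,\dots,x_n \text{ in the ground model with } p_i \Vdash \tau = \check{x}_i\}.
\]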

The question still remains whether grMA is preserved when adding more generic reals. For example, what happens if we add \omega_1 many Cohen reals? The methods used for a single real hinge on certain antichain refinement properties of the Cohen poset which are no longer available when adding more reals. A similar question can also be asked about random reals. In that case, at least, we do have an upper bound for preservation: it is known that adding more than \mathfrak{c} many random reals will destroy MA(Cohen) and thus also grMA, but nothing is known about adding a smaller number.

